Methods: Seven focus groups were conducted with social workers (n=46) who were purposively sampled from six urban hospitals. These hospitals were identified for their range of teams that included social workers and for their organizational focus on IPC. Each focus group lasted 45-65 minutes and had 4-10 participants. A semi-structured interview protocol probed participant experiences on IPC teams, perspectives on IPC team functioning, and the contribution of social work to IPC. Sampling continued until theoretical saturation was achieved, with no new codes or categories emerging from the data (Charmaz, 2014). Grounded theory was utilized: three independent coders used open and line-by-line coding and constant comparison within and across the data until themes emerged. Thick description and an audit trail were employed as trustworthiness measures. Results: All participants were Master's-level social workers with experience ranging from 1-44 years (M=13.5 years). Most identified as female (89%, n=41) and Caucasian (78%, n=36), and participants ranged in age from 30-66. Participants worked in mental health, cardiac care, oncology, palliative care, neurology, complex care, surgery, rehabilitation, internal medicine, intensive care, family health, and emergency care. The key finding was that social workers are Empowering Collaboration by Actively Communicating (building relationships, holding information, and filling gaps), Proactively Educating (training the team, advocating for patients, and teaching about systems), and Managing Risk (troubleshooting discharge and avoiding liability). Collaboration was 'empowered' by social workers in that participants often identified collaborative methods to mitigate potential risks, which they described as effective in establishing a team approach to problem-solving.
Communication was identified as 'active' and 'engaged' because participants described initiating IPC conversations (e.g., advocating for patients) and focusing on problem-solving (e.g., proactive education), which contributed to better overall patient care. Conclusions and Implications: These results have contributed to an integrated model of IPC that builds on the key functions of health social workers on interprofessional teams. Building on research showing that health social workers are highly skilled communicators (McCormick et al., 2007), study participants described actively facilitating effective communication, which in turn contributed to enhanced team functioning. Implications for social work research and education, as well as practice with interprofessional teams, will be provided.
https://sswr.confex.com/sswr/2020/webprogram/Paper37236.html
Welcome to the all new PlusTwo Physics! Wednesday, May 4, 2011 Electric Generator An electric generator works on the principle of electromagnetic induction. When a coil is rotated in the presence of a strong magnet, electricity is produced. See the video below for a demonstration of a simple electric generator. Posted by AskPhysics at 3:20:00 PM Labels: Coil, EMI, generator, INDUCTION, Plus Two Physics
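For a flat coil of N turns and area A rotating at angular speed ω in a uniform field B, Faraday's law gives the induced EMF as ε(t) = N·B·A·ω·sin(ωt). A minimal Python sketch of this relationship follows; the numeric values are purely illustrative and not taken from the post:

```python
import math

def induced_emf(turns, field_t, area_m2, omega, t):
    """EMF of a flat coil rotating in a uniform magnetic field.

    The flux through the coil is N*B*A*cos(omega*t), so by
    Faraday's law emf(t) = N * B * A * omega * sin(omega * t).
    """
    return turns * field_t * area_m2 * omega * math.sin(omega * t)

# Illustrative (assumed) numbers: 100 turns, 0.5 T field,
# 0.01 m^2 coil area, rotating at 50 revolutions per second.
omega = 2 * math.pi * 50
# The EMF peaks when sin(omega*t) = 1, i.e. at t = pi / (2*omega).
peak_emf = induced_emf(100, 0.5, 0.01, omega, t=math.pi / (2 * omega))
print(round(peak_emf, 2))
```

Doubling the rotation speed doubles both ω and the peak EMF, which is why spinning the coil faster in a demonstration generator produces a brighter bulb.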
http://blog.plustwophysics.com/2011/05/electric-generator.html
KUALA LUMPUR, June 19 — The Malaysian Medical Association (MMA) announced today it would invite the public and over 30 medical and health groups to sign a statement promoting quality health care for everyone. MMA launched the Titiwangsa Declaration 2019 last Saturday to express its willingness to work together with the Health Ministry to achieve universal health coverage, the theme for the World Health Organization (WHO) this year. Universal health coverage means that everyone can access quality health services that they need without getting into financial hardship. “Malaysia is ranked high by WHO for our quality of health care both regionally and internationally and we must take this to greater heights and push to our highest potential. “Ensuring no one is left behind, that all have equitable access to health care is a worthy goal to pursue,” Dr Teoh Siang Chin, chairman of MMA’s National Health Policy Committee, said at the launch of the Titiwangsa Declaration here. Health Minister Dzulkefly Ahmad said last month at the 72nd World Health Assembly, an annual gathering by WHO, that Malaysia planned to strengthen universal health coverage, citing the Peka B40 health screening programme for the poor.
https://codeblue.galencentre.org/2019/06/19/mma-invites-medical-groups-malaysians-to-support-health-for-all/
7.5 cm (h) x 88.8 cm (w) x 50.8 cm (d) IMPORTANT: Please measure your appliance cavity prior to purchase to ensure your chosen appliance will fit. A restocking fee may apply for appliances returned due to an incorrect size purchase. Model overview: 88.8 cm wide; simple controls with single-handed operation via dial controls; 5 burners including 1 dual wok; GasStop function for safety; easy cleaning with dishwasher-safe ComfortClean pot rests; stainless steel; low-profile design; electro-mechanical controls; solid metal knobs; ComfortClean dishwasher-safe trivets. Please click here for more information.
https://shop.miele.com.au/en/kitchen/cooktops-and-combisets/gas-cooktops/km-2354-gas-cooktop-zid09323230/
Geode Cave Crafting Material. Delicate, shimmery ore harvested only in Tiers 1-3 of Moonglow Grotto. Designed by: Trove Team Obtained We have determined the following methods can be used to obtain this item, however there may be others we haven't listed yet. Found in the following Lootboxes: The following lootboxes are currently obtainable. - 2 found as Common in Refurbished Crystallogy Crate - 10 found as Uncommon in Refurbished Crystallogy Crate - 7 found as Common in Moonglow Grotto Crate - 40-60 found as Common in Glowing Reliquary - 52-78 found as Uncommon in Greater Crystal Cache - 52-78 found as Uncommon in Starter Greater Crystal Cache This item is used as an ingredient to make recipes on the following benches: Crafted at Crystallogy Workbench Crafted at Module Workbench Crafted at Geodian Workbench Imported in Patch: Geode Blueprint: item_crafting_moonstone.blueprint
https://trovesaurus.com/item/crafting/moonstone
As one goes to sleep, one is still alert and conscious. The active brain produces small, fast waves known as beta waves, which slow into alpha waves as the brain begins to relax and drift toward sleep. Until the brain has fully fallen asleep, you may experience vivid sensations called hypnagogic hallucinations, such as hearing sounds as you first drift off. A myoclonic jerk, in which one suddenly startles awake for no apparent reason, is also a common event in this earliest phase of sleep. Sleep itself can be divided into five stages. Stage 1 is light sleep, occurring at the start when the brain is neither asleep nor fully awake but somewhere in between. The brain produces high-amplitude theta waves, extremely slow brain waves, and this stage lasts no more than 10 minutes. A person awakened from this stage will often insist they were not sleeping. In stage 2, the brain produces bursts of rapid, rhythmic activity; this stage lasts about 20 minutes, during which body temperature begins to drop and the heart rate slows. In stage 3, the brain produces delta waves, deep but slow waves that help deepen sleep, as the person transitions between light and deep sleep. In stage 4, the brain continues producing delta waves and the person remains in deep sleep; sleepwalking and bed-wetting usually occur in this fourth stage. The fifth stage is known as REM, and dreams occur at this stage. Brain activity increases, affecting respiration and eye movement. It is also called the paradoxical sleep stage because the brain is at its most active and dreaming becomes possible, yet the muscles remain extremely relaxed.
Sleep does not progress sequentially from stage 1 to stage 5; instead, the stages repeat in cycles throughout the night, moving from stage 1 to 4, returning to stage 2, and then entering REM, with each cycle lasting about 90 minutes. This period lengthens with each successive rotation of the sleep cycle.
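The cycling described above, stages 1 through 4, back up to stage 2, then REM, can be sketched as a simple sequence. This is only an illustration of the ordering the article describes, not a physiological model:

```python
def sleep_cycle():
    """One roughly 90-minute cycle as described in the text:
    stage 1 down to stage 4, back up through 3 and 2, then REM."""
    return [1, 2, 3, 4, 3, 2, "REM"]

def night(cycles):
    """A night of sleep is this cycle repeated several times."""
    stages = []
    for _ in range(cycles):
        stages.extend(sleep_cycle())
    return stages

# A night with, say, 5 cycles ends each cycle in REM.
print(night(5).count("REM"))
```

In reality the later cycles devote progressively more time to REM, which a fixed-length list cannot capture; the sketch only shows the ordering, not the durations.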
https://trivology.com/what-are-the-different-stages-of-sleep/
City to hold free health clinics next week LAREDO, Tex. (KGNS) - A variety of health screenings will be offered free of charge next week. Starting on Monday, Operation Lone Star will provide residents in Laredo and surrounding areas the opportunity to get checkups. After a year of placing the operation on pause, health providers are back and will set up at the Lara Academy on East Travis Street, offering a variety of services. “You will be able to get all sorts of medical services for free,” said Noraida Negron, City of Laredo Communications. “We’re talking about any kinds of screenings, we mentioned the vaccinations, the eye exams, you can get dental services. I mean anything for the entire family.” No documentation is required to receive any of these medical services. Copyright 2021 KGNS. All rights reserved.
https://www.kgns.tv/2021/07/22/city-hold-free-health-clinics-next-week/
We have recently changed the layout of the car park. We now have a staff-only area and a patient-only area. This has increased the number of spaces for patients to 6, including 1 disabled bay. If the car park is full, there is alternative parking nearby at Sainsbury's, Maryport Street, Monday Market Street or Sidmouth Street. Thank you very much in advance for your co-operation in this matter. St James Surgery The practice nursing team monitor patients with chronic illnesses such as asthma, diabetes, thyroid disease, epilepsy and coronary heart disease on a regular basis. They also currently offer the following services: The health care assistants are responsible for the anticoagulant clinics as well as ECGs, dressings, NHS Health Checks, Smoke-Stop clinics and phlebotomy. The Clinical Commissioning Group employs community nurses who provide skilled nursing care to patients who are housebound. This includes chronic illness, wound management, and palliative and terminal care. They work alongside the surgery team and hospital-based nurses in order to plan and deliver the care needed. We offer a full range of confidential contraceptive advice and services. This includes the combined "Pill", the "Mini-pill", injections, implants, IUDs (coil) and the IUS (Mirena coil). Emergency contraception is also offered, either as an IUD or as the "Morning-after pill". Health visitors are registered nurses with additional training in preventative care, health promotion and child health for those under the age of five. Health visitors can offer advice, information and support about a range of issues. Health visitors visit at home to talk in private and also run child health clinics and parent-and-baby groups. Health visitors can be contacted on 01380 732565. They are now based at Devizes Community Hospital. All physiotherapy appointments are now held at Devizes Community Hospital. Counsellors visit the surgery several times a week. Referral is made only through a doctor.
We provide Minor Surgery on the premises. Appointments are made by referral from one of the doctors. If you require any vaccinations relating to foreign travel, you need to make an appointment with the practice nurse to discuss your travel arrangements. This will include providing information about which countries (and areas within countries) you are visiting, to determine what vaccinations are required. Information about countries and the vaccinations required can be found on the links below: It is important to make the initial appointment as early as possible - at least 6 weeks before you travel - as a second and even a third appointment will be required with the practice nurse to actually receive the vaccinations. These vaccines have to be ordered, as they are not stock vaccines. Your second/third appointment needs to be several weeks before you travel to allow the vaccines time to work. If you are thinking of booking a travel consultation and vaccination appointment with the surgery, you can download your Pre-travel Questionnaire Form here, or drop into the surgery to collect one. Please complete the form and take it to your travel consultation appointment. Alternatively, please complete the form and e-mail it back to [email protected] before your appointment date. You do NOT need a medical certificate (Med3) from a doctor unless you have been unwell for more than seven days. If you are under hospital outpatient care, please obtain your certificate from the hospital doctor. If you are off work for four or more days in a row, you need to fill out a self-certificate form SC2. You can obtain this from reception, the Post Office, the DSS or the HMRC website. It does not require a doctor’s signature. If you are self-employed you can also obtain a copy of the SC2 from the local DSS office.
For any illness lasting longer than seven days you will need to see or speak with a doctor for him/her to issue a sickness certificate (F.med3) and for any subsequent renewal of the certificate. If your employer insists on a sickness certificate for an absence of 7 days or less, a charge will be levied. Please note that we are unable to pre-date medical certificates. There are many services that are not provided by the NHS. A fee will therefore be charged for the services listed below. Please note: This is only a basic list covering the non-NHS services that we find are in most demand; there will be other similar services which will also attract a fee. Medical examinations take at least half an hour. Those which involve driving licence renewal should be booked well in advance of the licence expiry date, and the patient should make an appointment with an optician to have that part of the paperwork completed before attending their examination appointment at the surgery. A list of the fees that the surgery charges is displayed at the reception desk as well as on this website. *To be reviewed JULY 2021 ** Charge applies to non-UK residents from countries without a reciprocal arrangement ROYAL UNITED HOSPITAL BATH / NHS TRUST / FROME / COMMUNITY CLINICS 2014-15 Community Clinics 2014-15 Wiltshire (pdf) Enter all or part of your postcode in the box below and click one of the buttons to find those services that are local to you.
https://stjamessurgerydevizes.co.uk/clinics-and-services.aspx?t=2
We are looking for an Asia-Pacific based Project Manager to join our team. You’ll join our team of Project Managers, managing enterprise and publishing projects in the APAC region. As a Senior Project Manager you have day-to-day responsibility for the smooth and effective running of any client or internal projects to which you are assigned. As a focal point of delivery knowledge, you are a key person whose opinion and advice on project delivery are actively sought by clients and agency members. You should be able to effectively set up a project, control budgets, track and report on projects, and carry out risk and issue analysis and quality assurance. You should have a thorough understanding of the principles and processes of project management, and of the methodologies adopted at Human Made. Because it is essential that you build a good rapport with both your internal team and the client, you should be an excellent communicator, sensitive to the sometimes competing needs of everyone involved with a project. You are responsible for ensuring that clients and account team members are kept up to date on project developments, including changes in timings, costs, and any issues. You should set expectations realistically, be proactive in dealing with problems, and not avoid difficult conversations, including pushing back on scope creep. Human Made works with a diverse set of clients across big media, publishing, and enterprise. You should be capable of leading large, complex 6+ month projects with large budgets in a remote setting. Your involvement with a project runs from pre-sales to post-mortem. You should assist the sales team in defining scope, requirements and costs for projects. For the project's duration, you will drive delivery, managing timescales and facilitating communication between internal and external partners.
Human Made is a remote company, so you should have excellent written communication skills that enable you to work with a diverse group of people from around the world. You should be able to work in a self-directed manner, addressing problems as they come up and identifying and fixing issues in your area of work. You should be able to prioritise your tasks and be confident about speaking up to ask for help when you need it. Skills - Qualifications or equivalent work experience - Experience using a wide range of project management tools, such as budget/time-tracking tools and issue-tracking software, with the depth of knowledge to recommend the right tools for a project - Experience in an enterprise environment - Deep knowledge of office tools - Broad experience in project management methodologies such as Agile and Waterfall and their application to different projects, as well as attendant frameworks such as Scrum. Responsibilities - Lead large, complex, 6+ month technology/web projects with large budgets. - Deliver projects to deadlines and within budget. - Set up the project in project management software and maintain it throughout the lifecycle. - Analyse areas of the project that are at risk and plan how you will mitigate against them. - Maintain general oversight of timings, key milestones, budgets, resourcing, risks, etc. - Ensure internal and client sign-offs are obtained on key project deliverables and maintain an audit trail of sign-offs. - Assess the time and financial impact of changes, and work with the Account Manager to strategise and communicate mitigation plans with the client. - Be aware of the status of your project’s budget; Works in Progress should be updated weekly and any variations from budget accounted for and mitigated before they happen. - Produce realistic timings in the chosen PM software with clearly flagged milestones. - Manage the people and relationships on a project in a way that is constructive and collaborative.
Learn more You can learn more about Human Made and our employee benefits on our hiring page. And you can learn more about working at Human Made and our hiring process in the company handbook. To apply Send us an email and CV to [email protected], and tell us why you think you’d be a great fit for the company.
https://humanmade.com/2018/05/21/join-human-made-as-a-project-manager-apac/
Andre Fenton wants readers of his debut novel Worthy of Love to know there's more to someone than their physical appearance. Having had his own struggles with body image, he hopes to bring a new voice to the literary world. "I thought it was an important story to tell for youth that feel underrepresented in the young-adult fiction genre," says Fenton. "There aren't enough books for young men, especially Black boys, who are struggling with poor body image, so I wanted to touch on that in the book." Worthy of Love follows Adrian Carter, a mixed-race teen from the north end who struggles with poor self-image and bullying. He decides to lose weight. When he falls for Mel Woods, a girl with a passion for fitness, he begins taking dangerous weight-loss measures and expressing his self-esteem issues in unhealthy ways that strain their relationship. "Adrian believes his body image and how he looks makes him worthy," says Fenton. "Throughout the story, he's trying to unlearn that and know that there's so much more to him besides the numbers on the scale." Fenton, an accomplished poet and aspiring filmmaker, has been developing Worthy of Love's characters since he was in high school. "I think that growing up while writing these characters," he says, "helped me to learn a lot about myself and I use those life lessons in the stories." Entering a new and exciting chapter, Fenton is looking forward to a future he hopes includes new stories, characters and voices in Canadian literature. "I hope they take away the idea of empathy," says Fenton of his own readers. "And also realizing that the things you do, and who you unload on, impacts other people."
https://www.thecoast.ca/halifax/andre-fentons-love-wins/Content?oid=19553573
VUIT will reboot one of the email servers (EM103) at 8:00 PM tonight as part of the troubleshooting process for the latency issue experienced today. Users of this server can expect an interruption to service of approximately 20 minutes. VUIT has communicated directly with users on this server advising them of the outage.
https://my.vanderbilt.edu/vuitoutages/2014/02/breakfix-on-one-email-server-tonight/
VSAT terminals are communication terminals that transmit and receive text, audio, and video data using satellite broadband Internet services. The APAC region was the highest revenue-contributing region in the global VSAT market and is expected to continue its dominance over the forecast period. As a disaster-prone region, most countries in APAC require satellite broadband communication to provide timely and accurate disaster management. In addition, satellite broadband communication helps develop extensive GIS data, which is used for analyzing and managing various hazardous activities such as mining. The region is anticipated to witness significant growth, leading to increasing adoption of advanced satellite technology over the next few years. The global Very Small Aperture Terminals market was valued at xx million US$ in 2018 and will reach xx million US$ by the end of 2025, growing at a CAGR of xx% during 2019-2025. This report focuses on Very Small Aperture Terminals volume and value at the global, regional, and company levels. From a global perspective, this report presents the overall Very Small Aperture Terminals market size by analyzing historical data and future prospects. Regionally, this report categorizes the production, apparent consumption, export, and import of Very Small Aperture Terminals in North America, Europe, China, Japan, Southeast Asia, and India. For each manufacturer covered, this report analyzes their Very Small Aperture Terminals manufacturing sites, capacity, production, ex-factory price, revenue, and market share in the global market.
The following manufacturers are covered: Gilat Satellite Networks, Harris CapRock, Hughes Network Systems LLC, VT iDirect, ViaSat, Inmarsat, KVH Industries, Bharti Airtel Limited, Embratel Participacoes S.A, HCL Comnet, ND SatCom GmbH, PolarSat, Primesys Solulles Empresariais S.A, Signalhorn AG, Mitsubishi Electric, Norsat International, Iridium Communications. Segment by regions: North America, Europe, China, Japan, Southeast Asia, India. Segment by type: Single Channel Per Carrier (SCPC), Multiple Channels Per Carrier (MCPC). Segment by application: Time Division Multiple Access (TDMA), Demand Assigned Multiple Access (DAMA). Summary: Get the latest market research reports on Very Small Aperture Terminals. This industry analysis and market report on Very Small Aperture Terminals is a syndicated market report, published as Global Very Small Aperture Terminals Market Professional Survey Report 2019. It is a complete research study and industry analysis of the Very Small Aperture Terminals market, covering market demand, growth, trend analysis, and factors influencing the market.
https://www.reportsandmarkets.com/reports/global-very-small-aperture-terminals-market-professional-survey-report-2019
The Oklahoma State Department of Health (OSDH) relies on a state public health veterinarian for the surveillance, prevention and control of zoonotic diseases such as rabies and tularemia, which are spread from animals to humans. A veterinarian in this position also provides technical assistance in the agency’s preparedness and response efforts for bioterrorism threats such as anthrax and plague. LeMac’ Morris recently joined the OSDH to serve as the state public health veterinarian after many years of practicing veterinary medicine in Sulphur. After leaving private practice, he went back to school to pursue a master’s degree in public health from the University of Iowa. While enrolled in the program, he worked for the Center for Food Security and Public Health, which is a specialty center for the Centers for Disease Control and Prevention (CDC). Upon graduating, he became a technical advisor for companies manufacturing animal health pharmaceuticals and biologics. As the state public health veterinarian, his duties will focus on zoonotic diseases and how they impact public health. “A large portion of my responsibilities involves working with our team of epidemiologists evaluating the risk of exposure in rabies cases involving both humans and animals,” said Morris. “Perhaps one of my most important responsibilities is gathering and conveying pertinent information regarding zoonotic diseases to the veterinary community, to the public, and when needed, responding to assist in controlling disease outbreaks.” He will work with teams performing mosquito surveillance to monitor diseases such as West Nile virus. His duties also consist of consulting with county health departments, health care providers, laboratory personnel and animal disease experts at Oklahoma State University and the United States Department of Agriculture (USDA).
Morris also works closely with local, state and federal partners to formulate and interpret laws, rules, and regulations for administration and enforcement of communicable and zoonotic disease intervention and control efforts.
https://oklahoma.gov/health/newsroom/2019/november/public-health-veterinarian-plays-vital-role-in-diseaseinvestiga.html
One of the foremost dancers of the Odissi format will perform in Auckland later in the year. Madhavi Mudgal, known the world over for her impeccable style and exquisite footwork, will present her dance concert on Saturday, November 3, 2018 at the Green Bay High School Performing Arts Centre, located at 161 Godley Road, Green Bay. Called 'Arpan' ('Divine Offering'), the programme is being brought to Australia and New Zealand by Dr Ghulla (Sam) Goraya, a renowned Odissi dancer based in Melbourne. Prior to arriving in New Zealand, Madhavi Mudgal will perform in Adelaide on October 20 (at The Parks Recreation and Sports Centre, 46 Cowan Street, Angle Park) and in Melbourne on October 26 and 27, 2018 (at the Southbank Theatre, The Lawler). A prime disciple of the legendary Guru Kelucharan Mohapatra, Madhavi Mudgal is credited with bringing a greatly refined sensibility to her art form. She has performed at festivals around the world, including in Mexico, the Festival de la Mer (Mauritius), Vienna Dance Festival (Austria), Festival of Indian Dance (South Africa), Festival of Indian Culture (Sao Paulo, Brazil), Days of Indian Culture (Hungary), Festival of Indian Arts (London), the Avignon and Montpellier Festivals (France), Pina Bausch’s Festival (Wuppertal and Berlin Festspiele, Germany), Theatre de la Ville (Paris), Lyon Biennale (France) and at festivals of dance in Spain, Morocco, Laos, Vietnam, Malaysia, Japan and the Indian Subcontinent. A Melbourne-based Odissi dancer with many years of performance and production experience, Dr Sam Goraya produces powerful performances that invoke the latent energy that resides within the human system. His works centre on spiritually uplifting themes and draw on consciousness concepts which are highly complex in nature and, at times, difficult to comprehend. He always attempts to present these concepts with care and clarity, using Odissi as the medium of expression.
With four postgraduate (Master's) degrees and a PhD in Mathematics and Oceanography, he has built his career in the telecommunications industry. Odissi, also referred to as 'Orissi' in older literature, is a major ancient Indian classical dance that originated in the Hindu temples of Odisha, an eastern coastal state of India. Historically, Odissi was performed predominantly by women and expressed religious stories and spiritual ideas, particularly of Vaishnavism (Lord Vishnu as Lord Jagannath). Odissi performances have also expressed ideas of other traditions, such as those related to the Hindu gods Shiva and Surya, as well as the Hindu goddess Shakti and her forms. Modern Odissi is performed by children and adults, as solo or group performances. The theoretical foundations of Odissi trace to the ancient Sanskrit text 'Natya Shastra'; its existence in antiquity is evidenced by the dance poses in the sculptures of Odisha's Hindu temples and archaeological sites related to Hinduism, Buddhism and Jainism. The Odissi dance tradition declined during the era of Islamic rule and was suppressed under British rule, a suppression protested by Indians, followed by its revival, reconstruction and expansion after India gained independence on August 15, 1947. Odissi is traditionally a dance-drama genre of performance art, in which the artistes and musicians play out a mythical story, a spiritual message or a devotional poem from the Hindu texts, using symbolic costumes, body movement, abhinaya (expressions) and mudras (gestures and sign language) set in ancient Sanskrit literature. Odissi is learnt and performed as a composite of basic dance motifs called 'Bhangas', which conform to symmetric body bends and stances. It involves the feet, midriff, hands and head in perfecting expression and audience engagement, with geometric symmetry and rhythmic musical resonance.
An Odissi performance repertoire includes invocation, Nritta (pure dance), Nritya (expressive dance), Natya (dance drama) and Moksha (dance climax connoting freedom of the soul and spiritual release). For more information and tickets for 'Arpan' in Auckland, please contact Arvinder Vasudeva on 021-0756194, Shanti Ravi on 021-2946394 or Basant Madhur on 021-0357954.
http://www.indiannewslink.co.nz/odissi-exponent-madhavi-mudgal-to-perform-in-auckland/?shared=email&msg=fail
People across the LGBTQIA+ spectrum came to the VICE offices in Brooklyn to hash out some of the most pressing issues they face today. This is part 2 of that conversation. Within the LGBTQ umbrella lies a multitude of identities and ideologies. In an effort to showcase this, VICE gathers a diverse group of progressive and conservative members of the LGBTQ community to discuss topics related to queerness and its intersection with culture and politics. Our goal is to foster conversations that are already happening within communities. With this format, we hope to give the viewer a rare opportunity to witness a conversation across the aisle that they otherwise might never find themselves in. TIMECODES OF WHAT WAS DISCUSSED: 00:29 - Does Pete Buttigieg being gay make you want to vote for him? 2:08 - Identity politics (things get heated) 3:20 - Pete Buttigieg as representation for LGBTQ+ people 3:55 - How do people feel about the terms GNC ("gender non-conforming") and "non-binary"? 11:08 - Do you feel welcomed by the LGBTQ+ community? 14:29 - The gender binary About VICE: The Definitive Guide To Enlightening Information. From every corner of the planet, our immersive, caustic, ground-breaking and often bizarre stories have changed the way people think about culture, crime, art, parties, fashion, protest, the internet and other subjects that don't even have names yet. Browse the growing library and discover corners of the world you never knew existed. Welcome to VICE.
Feasting, fervour mark Onam in Delhi
August 29th, 2012 - 12:53 pm ICT by IANS
New Delhi, Aug 29 (IANS) Spruced up homes, colourful floral designs and delicacies on the platter - the Malayali community in the national capital celebrated Onam with feasting and fervour here Wednesday. There are over two lakh Malayalis in the National Capital Region. The highlight of this harvest festival is the traditional 27-course “sadhya” (feast). “The preparations begin days before Onam. There is vegetable curry, sweet payasam and various other delicacies,” said Gayathri Srinivasan, a resident of Mayur Vihar in east Delhi. The dishes are served on a banana leaf in the traditional way. To herald the new season, courtyards are graced with colourful “pookalams” or floral designs. On Onam it is widely believed that Kerala’s mythological king Mahabali, known for his benevolence, comes to visit his people and ensure their well-being and prosperity. Malayali cultural bodies and community associations in the city had begun the festivities a few days ago, holding elaborate traditional feasts, “pookalam” competitions and music concerts. Onam falls in the month of Chingam, the first month of the Malayalam calendar. Thiruvonam, the most important day in the Onam calendar, falls Aug 29. The Onam week began Aug 26 and ends Aug 31.
http://www.thaindian.com/newsportal/lifestyle/feasting-fervour-mark-onam-in-delhi_100641367.html
Despite some of our recent unpleasantries, I’m still a fan of the Big Tent of Paganism. We do our own things and we work deeply within the traditions that speak to us, but we come together to share what we learn and to support our common interests. I like the Big Tent because I like learning from the many and varied groups and traditions within it. Perhaps the hottest area under the Big Tent these days is sorcery. Some call it wizardry, some call it ceremonial magic – it’s basically working with various spirits to work magic, to learn and grow, and to navigate life more effectively. I blame the popularity on Gordon White and especially on his Rune Soup podcast, which frequently features sorcerers and ceremonial magicians. But there has never been a time when people weren’t working with spirits in a more or less sorcerous fashion – Gordon is simply sorcery’s best current publicist. One of the most popular spirits is Saint Cyprian. His feast day is September 16 in the Catholic and Orthodox churches and September 26 in the Anglican church, so he’s getting even more attention right now. He was born in Carthage in 200 CE and became a powerful pagan magician. When his magic was thwarted by a woman making the sign of the cross, he converted to Christianity and became a priest and then a bishop. He was executed by the Roman proconsul Galerius Maximus in 258 and is considered a Christian martyr. Jason Miller at Strategic Sorcery has an excellent blog post on what it means to work with a Christian saint. If you want Cyprian, you kind of have to take Christ with him … The Chrysm that anointed Cyprian a Bishop is at the heart of what makes his oil potent. If you are looking for powerful Pagan Magicians to venerate there is no shortage of them, but there is only one St. Cyprian, and yeah, he is a Bishop. Pagans have been de-Christianizing and re-Paganizing stories, prayers, and magical workings since the beginning of the modern Pagan movement. 
For example, none of our existing Celtic stories were written down until well into the Christian era and many have been edited from their original forms. Putting them back the way they were – forensically when we can, speculatively when we can’t – is good and often necessary work and I’ve done my share of it. But reclaiming our ancestral heritage is one thing. Praying to someone who gave up that heritage because he saw an opportunity for power in a new religion is quite another. As Jason points out, Saint Cyprian is not just a magician. He’s a Christian priest, bishop, and martyr. His story is the story of the triumph of Christianity over paganism, of monotheism over polytheism. When you call on Saint Cyprian, you’re affirming that history. Perhaps you’re OK with that. Jason makes a point echoed by Gordon White and some of his podcast guests: Christianity is not inherently fundamentalist and dominating. As I said in last year’s Letter To My Christian Friends, as a polytheist I have no difficulty acknowledging Jesus as a God – just not as the only God. For centuries, magicians in Christian dominated regions have practiced magic within a Christian context, despite the official prohibitions against it. They left us many of the grimoires that are currently so popular. If you’re a Christian with an interest in magic, I encourage you to work with Saint Cyprian. I’m sure many of the people trying to de-Christianize Cyprian are good inclusive liberals who strongly disapprove of cultural appropriation. But that’s exactly what they’re doing when they de-Christianize Saint Cyprian – they’re appropriating him out of his Christian context for their own purposes. “Hey, I know you willingly died rather than recant your Christian faith, but I don’t like Christianity. You won’t mind helping me do some Pagan stuff anyway, right?” Predictably, Jason has a commenter who says he did exactly this and got good results. 
But that’s disrespectful to the spirit he’s calling on, and it’s as inauthentic as white suburban Americans trying to do a sun dance in their back yard. You may get some results, but without the context of the Plains Indians cultures it will be a weak facsimile of the real thing. So I can’t work with Saint Cyprian. It would be inauthentic and it would work against my core goal of facilitating a Polytheist Restoration. As Jason points out, there are plenty of Pagan magicians to venerate and emulate. There’s no reason you have to work with Saint Cyprian just because he’s currently popular. When I need help with magic, I usually go to Auset (Isis). One of Her titles is Mistress of Magic – She learned the true name of Ra and in doing so, learned the secrets of all magics. Or I may ask for help from the Morrigan, whose expertise in magic (especially battle magic) is shown throughout Her lore. I also may ask for help from certain Druid spirits who’ve made themselves known to me. I have long, committed relationships with all these Gods and spirits based on shared heritage, shared goals, mutual respect and reciprocity. They aren’t names I picked out of some arcane book. I’m fascinated by sorcery and I’m intrigued with its current popularity. It has roots in an ancient polytheist worldview, and there is a place for it in contemporary Pagan and polytheist practices. But like every other magical and spiritual practice, it is strongest when it is incorporated into a whole religious and metaphysical system. Saint Cyprian belongs to the Christian religion and the monotheist worldview. Pagan and polytheist magicians should call on someone else.
https://www.patheos.com/blogs/johnbeckett/2016/09/why-i-dont-work-with-saint-cyprian.html
If you’ve ever wondered what manual handling is or what the regulations for manual handling are, then this post is for you. More importantly, it highlights the manual handling TILE acronym (and its extension, TILEO) and how to implement this within your organisation. Definition of manual handling and the manual handling regulations The Manual Handling Operations Regulations (MHOR) 1992 define manual handling as: “… any transporting or supporting of a load (including the lifting, putting down, pushing, pulling, carrying or moving thereof) by hand or bodily force.” These regulations were developed to help your organisation reduce the number of Musculoskeletal Disorders (MSDs) associated with manual handling – the most commonly reported type of work-related ill health. Every organisation has potentially harmful manual handling tasks, and our manual handling statistics infographic illustrates why manual handling plays such a central role in occupational safety. The regulations set out a hierarchy of measures that will help you manage your manual handling risks:
- First – Avoid harmful manual handling operations, so far as is reasonably practicable
- Second – Assess the manual handling operations that cannot be avoided
- Third – Reduce the risk of injury, so far as is reasonably practicable.
To help you assess manual handling risks, combat MSDs and implement the hierarchy of measures, a number of different tools are available, such as the MAC tool, manual handling risk assessment based on the TILEO acronym (Schedule 1 of the MHOR 1992), DVDs, publications, guidance documents and manual handling training – such as train the trainer. Manual Handling TILEO The manual handling TILEO acronym can be used to assess each manual handling activity within your own organisation.
The acronym ‘TILEO’ stands for TASK, INDIVIDUAL, LOAD, ENVIRONMENT and OTHER FACTORS, and will help your organisation conduct dynamic or on-the-spot risk assessments. When a detailed risk assessment is conducted, it should take account of both the relevant physical and psychosocial factors contained in Schedule 1 of the MHOR. A manual handling risk assessment will identify a number of hazards, and TILEO can be used to structure that assessment. The role of dynamic risk assessments is to identify hazards that may appear on a day-to-day basis due to the changing nature of the work being undertaken, so you should train your workforce to consider TILEO before they do any manual handling. These risk assessments do not need to be written down, as they form part of handlers’ good working practice. Handlers must observe their surroundings and take appropriate action to reduce or eliminate risks that would not have been foreseeable as part of the manual handling risk assessment. The table below demonstrates how manual handling TILEO can be used to assess each manual handling activity within your own organisation, and how TILEO can be used to conduct an on-the-spot assessment, which allows you to ‘Think Before You Lift’ or ‘Plan a Lift’. I hope this post has given you more insight into manual handling; please feel free to discuss this information or post more specific information below. Further reading: The five step guide to risk assessment Should the self-employed be exempt from health and safety law?
https://rospaworkplacesafety.com/2013/02/18/manual-handling-definition/
Using data from the National Longitudinal Study of Adolescent Health (Add Health), we investigate whether certain aspects of personal appearance (i.e., physical attractiveness, personality, and grooming) affect a student's cumulative grade point average (GPA) in high school. When physical attractiveness is entered into the model as the only measure of personal appearance (as has been done in previous studies), it has a positive and statistically significant impact on GPA for female students and a positive yet not statistically significant effect for male students. When personality and grooming are included, the effect of physical attractiveness turns negative for both groups, but is only statistically significant for males. For male and female students, being very well groomed is associated with a statistically significant GPA premium. While grooming has the largest effect on GPA for male students, having a very attractive personality is most important for female students. Numerous sensitivity analyses support the core results for grooming and personality. Possible explanations for these findings include teacher discrimination, differences in student objectives, and rational resource allocation decisions.
https://miami.pure.elsevier.com/en/publications/effects-of-physical-attractiveness-personality-and-grooming-on-ac
Results from yesterday’s Zurich Diamond League meet showed the poles pulling apart as some rode the high of the 2013 Moscow World Championships, while others looked like the long season had caught up with them. Eunice Sum, Nick Symmonds, Bohdan Bondarenko and Meseret Defar fell into the first category. Asbel Kiprop, Ezekiel Kemboi, Mathew Centrowitz, Ivan Ukhov and even Usain Bolt found themselves joined in the second. The most anticipated race of the evening was the 5000 meter showdown between former Ethiopian teen queens turned Olympic icons, Tirunesh Dibaba and Meseret Defar, the two most decorated women distance runners of their generation. Both arrived in Zurich as World Champions, Tiru at 10,000 meters, Mezzy in the 5000. Since they first squared off at the 2002 Carlsbad 5000 as precocious teens (Tiru took 2nd in 15:19 to Mezzy’s 11th in 15:58), theirs has been the match up that has most intrigued yet frustrated running fans. Though they have competed more than 20 times over the years, Zurich was only their third head-to-head clash since 2009. In this first duel of the year Defar proved to be two seconds sharper, 14:32.84 to 14:34.82. Her 58-second last lap – that held more in reserve as I saw it – forced Tirunesh to lead out from 600 meters, hoping to use her 10,000 meter strength to grind down Meseret’s 5000 meter closing speed. Tiru’s sister Genzebe even aided the family cause by pushing the pace past 4K in an attempt to set up her older sis. But a cursory look at their seasons shows how Mezzy and Tiru have fashioned their focus in these times of race specialization, a focus that made Mezzy the favorite in Zurich. Though Tiru came into Switzerland as the 5000 meter world record holder (14:11.15, Oslo 2008) and the world leader in 2013 (14:23.68, Paris July 6th), she couldn’t dispose of Mezzy (#2 all-time, 14:12.88, Stockholm ’08) over the final 600 meters, though she tried mightily.
In Defar’s favor was her focus on shorter distance racing throughout the season. Though she had shown ample strength with a 10,000 win in Sweden June 27th (in a world leading 30:08), most of her attention has been paid to the 5000. In that regard she tuned up for Zurich last week with a 3000m win at the DN Galan meet in Stockholm after racing 5000 meters in Shanghai (May 18) and Oslo (June 13). She then toured the distance twice more in Moscow at the World Championships earlier this month, once in the prelims, then again winning the final (August 14 & 17). And while Tirunesh had raced over 5000m at the Prefontaine Classic June 1st, then in Paris on July 6th, her focus on the 10,000 in Moscow left her speed just a hair shy when compared to her great rival in Zurich, especially in a race that dawdled (by their very high standards) through the first 4K (11:52.15). We shall see if the tables turn when the two meet up again September 15th at the Bupa Great North Run in Newcastle, England, where Tiru should have the upper hand as the defending champion (1:07:35), though Meseret PR’d in February at the Rock ’n’ Roll Half Marathon in New Orleans (1:07:25). Next up, however, will be a 10k world record attempt by Dibaba at the Tilburg 10K in the Netherlands this Sunday, September 1st, which should set her up quite well for Newcastle. For a complete preview, go to my colleague Alberto Stretti’s fine blog. In any case, this is exactly the kind of back and forth rivalry that the sport is crying out for. Meseret currently leads their head-to-head in 3000 and 5000 meters, but this eventually could lead all the way to the marathon, and wouldn’t that be a treat? Who would you choose?
https://tonireavis.com/2013/08/30/the-rivalry-continues-defar-over-dibaba-in-zurich/
November 14-17, 2016 | Marrakech, Morocco. The fourth edition of the International Renewable and Sustainable Energy Conference (IRSEC’16) aims to provide an international forum to facilitate discussion and knowledge exchange of state-of-the-art research findings, and of current and future challenges and opportunities related to all facets and aspects of renewable and sustainable energy. The target public of IRSEC’16 includes all interested people from academia, industry and government – particularly researchers, policy-makers, engineers, PhD and Masters students and other specialists interested in all issues related to renewable and sustainable energy. The scope of IRSEC’16 covers a broad range of hot topics including renewable energy technologies, energy efficiency, green energy, climate change, sustainable energy systems and smart grid. For more details, programme and workshops, visit the IRSEC2016 website.
http://agora.medspring.eu/en/articles/irsec16-international-renewable-and-sustainable-energy-conference
Three Horseshoes Hotel - Barry (51.4226992372303, -3.3333232998848). Boasting a shared lounge and a library, Three Horseshoes lies in the vicinity of Barry War Museum. The venue has 7 rooms and features a free private carpark and a smoking area on-site. Location The hotel is situated within 7 miles of Llandaff Cathedral. This property is about 3 miles away from the center of Barry. It is within walking distance of a museum and a cathedral. Cardiff Wales Airport is a 10-minute drive away. Rooms Each room at this venue has complimentary WiFi, a trouser press and a fireplace. Three Horseshoes offers accommodations with views of the garden. The rooms also have carpet flooring. Eat & Drink The restaurant at the hotel serves a local menu. Leisure The property features cribs and a play area for children. Activities for active guests include bowling and darts. Internet Wireless internet is available in the hotel rooms for free. Guest Parking Private parking is possible on site for free. Number of rooms: 7.
Free Wi-Fi in rooms Bar/ Lounge area Restaurant Flat-screen TV Electric kettle Express check-in/ -out Sports & Fitness - Bowling - Golf course - Darts Services - Tours/Ticket assistance - Wedding services Dining - Breakfast - Restaurant - Bar/ Lounge area - Special diet menus - Free breakfast Business - Meeting/ Banquet facilities - Fax/Photocopying Children - Cribs - DVDs/ Videos for children - Children's menu - Children's play area - Game room Spa & Leisure - Garden area - BBQ facilities - Leisure/ TV room - Library Room view - Garden view - City view - Mountain view - Pool view Room features - Free Wi-Fi in rooms - Heating - Soundproofed rooms - Terrace - Tea and coffee facilities Bathroom - Private bathroom - Shower - Free toiletries Self-catering - Electric kettle Media - Flat-screen TV Room decor - Carpeted floor
https://three-horseshoes-hotel-barry.booked.net/
Wine Country Spoken Word Festival. When: Oct. 13-15, 2017. Info: davepokornypresents.com. Inaugural event features both local and nationally renowned authors, poets, comedians and spoken word artists of all genres performing throughout downtown Petaluma. Venue: Hotel Petaluma, 106 Washington St, Petaluma, Sonoma CA (38.23551, -122.64242). This is a past event.
https://www.bohemian.com/northbay/wine-country-spoken-word-festival/Event?mode=print&oid=4222279
Donation after circulatory death. Donation after circulatory death (DCD) describes the retrieval of organs for the purposes of transplantation that follows death confirmed using circulatory criteria. The persisting shortfall in the availability of organs for transplantation has prompted many countries to re-introduce DCD schemes not only for kidney retrieval but increasingly for other organs with a lower tolerance for warm ischaemia such as the liver, pancreas, and lungs. DCD contrasts in many important respects to the current standard model for deceased donation, namely donation after brain death. The challenge in the practice of DCD includes how to identify patients as suitable potential DCD donors, how to support and maintain the trust of bereaved families, and how to manage the consequences of warm ischaemia in a fashion that is professionally, ethically, and legally acceptable. Many of the concerns about the practice of both controlled and uncontrolled DCD are being addressed by increasing professional consensus on the ethical and legal justification for many of the interventions necessary to facilitate DCD. In some countries, DCD after the withdrawal of active treatment accounts for a substantial proportion of deceased organ donors overall. Where this occurs, there is an increased acceptance that organ and tissue donation should be considered a routine part of end-of-life care in both intensive care unit and emergency department.
A report in Clinical Microbiology & Infection discusses the efficacy of PCV13 in adults to prevent community-acquired pneumonia (CAP) and lower respiratory tract infections (LRTI) not requiring hospitalization. Antibiotic prescriptions in primary care were one indicator used in the University Medical Center Utrecht–led study. “Pneumococcal conjugate vaccines (PCVs) have been available since 2000 and their introduction in infant immunization programs has reduced incidences of Invasive Pneumococcal Disease in non-immunized populations by 15-50%, with presumably similar relative incidence reductions of non-bacteremic pneumococcal pneumonia,” the Dutch researchers write. “In the elderly, efficacy of the 13-valent PCV (PCV13) has been demonstrated for vaccine-type pneumococcal CAP and Invasive Pneumococcal Disease requiring hospitalization. It has been postulated that PCV13 also reduces the incidence of these episodes and associated antibiotic use in primary care,” they add. To determine if that is the case, the study team randomized community-dwelling immunocompetent adults older than age 65 years to PCV13 or placebo as part of the double-blind Community-Acquired Pneumonia immunization Trial in Adults. Researchers extracted data on CAP and LRTI episodes and antibiotic prescriptions from general practitioner information systems for 40,426 subjects in order to determine vaccine effectiveness (VE). With 20,195 participants receiving PCV13 and 20,231 getting placebo, 1,564 and 1,659 CAP episodes occurred in the PCV13 and placebo group, respectively; VE (95% CI) was 5.5% (-2.6%–13.0%). At the same time, non-CAP LRTI episodes occurred 7,535 and 7,817 times in the PCV13 and placebo groups, respectively; VE (95% CI) was 3.4% (-2.0%–8.5%). In response, 8,835 and 9,245 LRTI-related antibiotic courses were prescribed in the PCV13 and placebo arms, respectively; VE (95% CI) was 4.2% (-1.0%–9.1%).
Researchers point out that antibiotic courses for any indication were prescribed 43,386 and 43,309 times, respectively; VE (95% CI) was -0.4% (-4.9%–3.9%). “PCV13 vaccination in the elderly is unlikely to cause a relevant reduction in the incidence of CAP, LRTI, LRTI-related antibiotic use or total antibiotic use in primary care,” the authors conclude. “Although the reductions of CAP and LRTI, if at all present, are likely to be small, the high incidence of these infections in primary care results in a much larger number of prevented cases as compared to the number of prevented hospitalized CAP cases,” the researchers explain. “However, costs and disease burden of CAP and LRTI treated in primary care are relatively low, as demonstrated in a previous cost-effectiveness analysis.” Therefore, the researchers advise, “prevention of CAP and LRTI episodes in primary care, although much more frequent as compared to other preventable pneumococcal disease, does not drive the societal value of pneumococcal vaccination in the elderly.”
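The headline VE figures can be reproduced, at least approximately, from the episode counts above, since vaccine effectiveness is simply (1 − relative risk) × 100%. A minimal sketch in Java (class and method names are illustrative; it uses plain participant counts as denominators, whereas the trial itself may have used person-time, so results only approximate the published point estimates):

```java
// Point-estimate sketch of vaccine effectiveness: VE = (1 - relative risk) * 100.
// Denominators are simple participant counts, not person-time at risk.
public class VaccineEffectiveness {
    public static double ve(long casesVaccine, long nVaccine, long casesPlacebo, long nPlacebo) {
        double riskVaccine = (double) casesVaccine / nVaccine;
        double riskPlacebo = (double) casesPlacebo / nPlacebo;
        return (1.0 - riskVaccine / riskPlacebo) * 100.0;
    }

    public static void main(String[] args) {
        // CAP episodes: 1,564 of 20,195 (PCV13) vs. 1,659 of 20,231 (placebo)
        System.out.printf("CAP VE ~ %.1f%%%n", ve(1564, 20195, 1659, 20231));
        // Non-CAP LRTI episodes: 7,535 vs. 7,817
        System.out.printf("LRTI VE ~ %.1f%%%n", ve(7535, 20195, 7817, 20231));
    }
}
```

Both values land close to the reported 5.5% and 3.4%, with the small differences attributable to the simplified denominators.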
https://uspharmacist.com/article/pcv13-vaccine-value-is-for-pneumonia-requiring-hospitalization-not-outpatient
From Stanford University: Stephen Schneider, a leading climate expert, dead at 65 "Schneider was influential in the public debate over climate change and a lead scientist on the United Nations' Intergovernmental Panel on Climate Change, which shared the 2007 Nobel Peace Prize with former Vice President Al Gore... Stanford climate researcher Chris Field described Schneider as 'an inspiration to a whole generation... Steve clearly lit up any room he was in and you could always tell that wherever Steve was, there was going to be lively conversation, there was going to be sharp analysis and there was going to be a lot of intensity.'" ------------------ From Ben Santer's eulogy on RealClimate: "Stephen Schneider did more than any other individual on the planet to help us realize that human actions have led to global-scale changes in Earth’s climate. Steve was instrumental in focusing scientific, political, and public attention on one of the major challenges facing humanity – the problem of human-caused climate change. We honor the memory of Steve Schneider by continuing to fight for the things he fought for – by continuing to seek clear understanding of the causes and impacts of climate change. We honor Steve by recognizing that communication is a vital part of our job. We honor Steve by taking the time to explain our research findings in plain English. By telling others what we do, why we do it, and why they should care about it. We honor Steve by raising our voices, and by speaking out when powerful “forces of unreason” seek to misrepresent our science. We honor Steve Schneider by caring about the strange and beautiful planet on which we live, by protecting its climate, and by ensuring that our policymakers do not fall asleep at the wheel." ------------------ From the San Francisco Chronicle: "In 1975, he founded the international science journal Climatic Change, and remained its editor in chief until his death...Dr.
Schneider was a science consultant to every president, from Richard Nixon to Barack Obama and, in 1992, won a MacArthur Foundation "genius" award." ------------------ From the New York Times: "...He encouraged scientists to get out and communicate directly with the public, maintaining a Web page, “Mediarology,” describing the challenges attending such a move." ------------------ HERE is Stephen Schneider's website. ------------------ On a personal note, one of my first contacts with climate science was through Steve Schneider's website. I certainly admired him, and felt lucky to meet him briefly at the Copenhagen Climate Conference in December 2009. He was very friendly. (Jan Dash) ------------------ Here is a video of Schneider discussing his book: "Science as a Contact Sport: Inside the Battle to Save Earth's Climate" (National Geographic Books, 2009). ------------------ Schneider, right, was a leader among the scientists whose climate research earned a Nobel Peace Prize in 2007, an honor they shared with former Vice President Al Gore, shown here with his wife, Tipper.
http://climate.uu-uno.org/view/news/51cbecb27896bb431f68d516/?topic=24045
Respected teacher, what do you mean by the statement "the greatest common divisor of two numbers can be expressed as a linear combination of the two numbers"? Please explain using an example. Asked by shailesh arlekar | 28th Jun, 2014, 07:25: PM Expert Answer: Answered by Vimala Ramamurthy | 30th Jun, 2014, 11:24: AM Related Videos - If r=0, then what is the relationship between a, b and q in a=bq+r of Euclid's division lemma - Find the HCF of 72 and 108 using the division lemma - The sum of squares of two consecutive multiples of 7 is 637. Find the multiples - 135 and 225 (doubt) - Show that any positive odd integer is of the form 3m, 3m+1 or 3m+2, where m is some integer - What is the division algorithm - Prime factors of 176 - What is an algorithm? - Irrational - Prove that if a number is of the form 6q+5, then it is of the form 3q+2 for some integer q, but not conversely.
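The statement in the question is Bézout's identity: for any integers a and b, gcd(a, b) = ax + by for some integers x and y. The standard way to find such x and y is the extended Euclidean algorithm, sketched below (class and method names are illustrative, not from the original answer):

```java
// Extended Euclidean algorithm: returns {g, x, y} with g = gcd(a, b) = a*x + b*y.
public class ExtendedGcd {
    public static long[] egcd(long a, long b) {
        if (b == 0) return new long[] {a, 1, 0}; // gcd(a, 0) = a = a*1 + 0*0
        long[] r = egcd(b, a % b);
        // If gcd = b*x' + (a mod b)*y', then gcd = a*y' + b*(x' - (a/b)*y').
        return new long[] {r[0], r[2], r[1] - (a / b) * r[2]};
    }

    public static void main(String[] args) {
        long[] r = egcd(72, 108);
        // gcd(72, 108) = 36 = 72*(-1) + 108*1
        System.out.println(r[0] + " = 72*" + r[1] + " + 108*" + r[2]);
    }
}
```

For example, gcd(72, 108) = 36, and 36 = 72·(−1) + 108·1 expresses it as a linear combination of 72 and 108.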
https://www.topperlearning.com/answer/respected-teacher-what-do-you-mean-by-the-statement-greatest-common-divisor-of-two-numbers-can-be-expressed-as-a-linear-combination-of-two-numbers-ple/w5iz3joo
Q: Decrement (or increment) operator in return statement in Java

I was implementing pagination in my web application (using Spring and Hibernate) where I needed something like the following.

    public static int deleteSingle(long totalRows, long pageSize, int currentPage) {
        return totalRows == currentPage * pageSize - pageSize ? currentPage-- : currentPage;
    }

Suppose I invoke this method as follows.

    deleteSingle(24, 2, 13);

With these arguments, the condition is satisfied and the value of the variable currentPage (i.e. 13) minus 1 (i.e. 12) should be returned, but it doesn't decrement the value of currentPage. It returns the original value, which is 13, after this call. I had to change the method as follows for it to work as expected.

    public static int deleteSingle(long totalRows, long pageSize, int currentPage) {
        if (totalRows == currentPage * pageSize - pageSize) {
            currentPage = currentPage - 1; //<-------
            return currentPage;            //<-------
        } else {
            return currentPage;
        }
    }

So why doesn't it decrement the value by 1 with the decrement operator currentPage--? Why does it need currentPage = currentPage - 1; in this scenario?

A: In your return statement, it uses currentPage--, which causes the decrement after the return. You'd want --currentPage to do the decrement before the return. Personally, with a complicated statement like that, you probably want to break it out anyway for readability's sake, but that's a matter of preference. (Technically, it decrements after it's read. There's nothing special about it being a return statement that changes when it decrements.) If it were up to me, my taste would be to do this:

    public static int deleteSingle(long totalRows, long pageSize, int currentPage) {
        if (totalRows == currentPage * pageSize - pageSize) {
            currentPage--;
        }
        return currentPage;
    }

A: Note that x-- decrements x after using its value; you probably want --currentPage, which would decrement the variable before using its value.
To see this, consider:

    public static int deleteSingle(long totalRows, long pageSize, int currentPage) {
        try {
            return totalRows == currentPage * pageSize - pageSize ? currentPage-- : currentPage;
        } finally {
            System.out.println("*" + currentPage); // value after return
        }
    }

Calling deleteSingle(24, 2, 13) prints:

    *12
    13

If we replace currentPage-- with --currentPage, we receive:

    *12
    12

as expected. But don't you think it would be better to simply use currentPage - 1 instead? There is no reason to reassign currentPage in this scenario (remember, such a reassignment will not be visible outside the scope of the method). The prefix decrement operator is covered in §15.15.2 of the JLS. Notice the sentence: The value of the prefix decrement expression is the value of the variable after the new value is stored.
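The distinction can be stripped down even further. A minimal sketch (class and method names are illustrative) that isolates the two operators in return position:

```java
// Postfix returns the old value and then decrements; prefix decrements first.
public class DecrementDemo {
    // Decrement applies to the local copy of x, after its old value is read for the return.
    public static int postfix(int x) { return x--; }

    // The new value is both stored in x and used as the return value.
    public static int prefix(int x)  { return --x; }

    public static void main(String[] args) {
        System.out.println(postfix(13)); // prints 13
        System.out.println(prefix(13));  // prints 12
    }
}
```

Since Java passes primitives by value, the decrement of the parameter is invisible to the caller either way; only the returned value differs between the two forms.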
Unit testing:
- Review individual tickets.
- Test tickets to ensure the ticket goal is resolved and that the fix/enhancement is working as intended.
- Ensure that the fix/enhancement being tested is thoughtful and scalable.
- Ensure that the fix/enhancement complies with regulations and with BeSmartee's internal policies and procedures.
- Engage the software developer for bug fixes and modifications where applicable.

End-to-end testing:
- Ensure that the fix/enhancement maintains the overall integrity of BeSmartee's products.
- Ensure that the fix/enhancement moves BeSmartee's software products forward as an overall compelling solution.
- Design, execute and maintain automated continuous testing on multiple environments.
- Create test plans, test files, and scripts for unit testing.
- Document all testing results.
- Manage the overall schedule, internal and external resource coordination, requirements and communications, and problem resolution for all software testing.
- Support the Client Success Manager(s) with support tickets as required.
- Identify areas in which the product can be improved to better serve clients' needs.
- Develop and maintain a working knowledge of all BeSmartee applications.
- Communicate any technical issues, customer complaints, or potential new client projects to management.
- Continuously focus on reducing/condensing software testing, including the time and resources required, through systems process improvements and automation.
- Provide product, infrastructure, business process and staffing recommendations to executive leadership.
- Perform other duties as assigned.

Requirements / Desired skills and experience
- 2+ years of well-rounded software testing experience in positions of increasing responsibility.
- Ability to execute quickly in a start-up environment and build structure vs. having structure provided.
- Good written and verbal communication skills.
- Ability to successfully manage software testing across different clients, products and environments simultaneously.
- Ability to influence, establish relationships, and continually manage clients' expectations at all levels, including senior management and C-level.
- Analytical skills and data-orientation in decision making.
- A positive, proactive mindset, combined with excellent prioritization, task management, and organizational skills.
- Demonstrated ability to build strong, trusted relationships with colleagues.
- Do not have to be a software developer, but must be able to understand code, databases, the SDLC (software development life cycle) and related software developer skills.
- Ability to perform software configurations.
- Ability to design process maps and workflows.
- Ability to write business requirements documents (BRDs).

Benefits
- Be part of our hugely international environment; we are currently working with global customers, so you will have many opportunities to work overseas.
- Competitive salary package based on skills and experience.
https://career.fpt-software.com/jobs/manual-quality-assurance-signing-bonus-up-to-50m/
Engineers at the University of California San Diego have developed a super-hydrophobic surface that can be used to generate electrical voltage. When salt water flows over this specially patterned surface, it can produce at least 50 millivolts. The proof-of-concept work could lead to the development of new power sources for lab-on-a-chip platforms and other microfluidics devices. It could someday be extended to energy harvesting methods in water desalination plants, researchers said. A team of researchers led by Prab Bandaru, a professor of mechanical and aerospace engineering at the UC San Diego Jacobs School of Engineering, and first author Bei Fan, a graduate student in Bandaru’s research group, published their work in the Oct. 3 issue of Nature Communications. The main idea behind this work is to create electrical voltage by moving ions over a charged surface. And the faster you can move these ions, the more voltage you can generate, explained Bandaru. Bandaru’s team created a surface so hydrophobic that it enables water (and any ions it carries) to flow faster when passing over. The surface also holds a negative charge, so a rapid flow of positive ions in salt water with respect to this negatively charged surface results in an electrical potential difference, creating an electrical voltage. “The reduced friction from this surface as well as the consequent electrical interactions helps to obtain significantly enhanced electrical voltage,” said Bandaru. The surface was made by etching tiny ridges into a silicon substrate and then filling the ridges with oil (such as synthetic motor oil used for lubrication). In tests, dilute salt water was transported by syringe pump over the surface in a microfluidic channel, and then the voltage was measured across the ends of the channel. There have been previous reports on super-hydrophobic, or so-called “lotus leaf” surfaces designed to speed up fluid flow at the surface. 
However, these surfaces have so far been patterned with tiny air pockets—and since air does not hold charge, the result is a smaller electric potential difference and thus, a smaller voltage. By replacing air with a liquid like synthetic oil—which holds charge and won’t mix with salt water—Bandaru and Fan created a surface that produces at least 50 percent more electrical voltage than previous designs. According to Bandaru, higher voltages may also be obtained through faster liquid velocities and narrower and longer channels. Moving forward, the team is working on creating channels with these patterned surfaces that can produce more electrical power. Learn more: Flowing salt water over this super-hydrophobic surface can generate electricity
https://www.innovationtoronto.com/2018/10/generating-electricity-by-flowing-salt-water-over-a-special-super-hydrophobic-surface/
Alanna and her twin brother Thom are leaving their home to be trained in their future professions. Their father is sending Thom to learn to be a knight and Alanna to learn womanly things at a convent. But Thom wants to learn to be a sorcerer and Alanna has always been better at fighting than most boys. So Alanna cuts her hair and goes to the castle as Alan, a page training to be a knight. She finds her new path harder than she ever imagined but slowly she begins to excel in her training. She also discovers that in addition to her fighting skills she has some magical ones. Her healing ability is especially useful when the prince is close to death. Alanna saves him but now she has become a target for the powerful sorcerer who cast the spell in the first place.
http://www.librarydynamics.org/2009/09/alanna-first-adventure-by-tamora-pierce.html
Douglas has been infatuated with the recording process ever since he first discovered the cassette 4-track in 1992. He began recording local bands while also attending recording classes at Virginia Tech. After receiving his degree in interdisciplinary studies with minors in math, biology, and chemistry, Douglas moved to Raleigh, North Carolina. In Raleigh he interned at Osceola Studios for one year, then became a staff engineer and remained at Osceola for another year. Douglas then moved to Austin, TX, where he started a makeshift home studio which slowly evolved into what is today The Still. The Still continues to attract some of Austin's best talent as well as artists from around the globe. Douglas is also an active musician and performs in several bands in Austin as well as solo. Although Douglas typically performs in rock- and improv-based ensembles, he has worked with all sorts of artists and projects.

Randall Squires was born in New York to a musical family and has been involved in the recording of music for the vast majority of his life. Throughout the years he has studied at The Manhattan School of Music in their Prep division and at Berklee College of Music. He has at one point or another played violin, cello, acoustic & electric bass, French horn, as well as various forms of percussion. Randall was second engineer at Infinite Studios in Alameda, CA, where he worked on projects with Giovanni Hidalgo, Pastor Walter Hawkins, E-40, Alan Tower & Free Energy, Melky Sedek (Wyclef's sister & brother) and many others. Here in Austin, he has been blessed with projects including My Education, The Asylum Street Spankers, Grimy Styles, Natalie James, Burro Magic, Pride feat. Sean B., as well as live recordings for Spoon and Mouse on Mars. All of this outlines his eclectic tastes in music and an ability to switch gears between multiple genres of music and recording settings (ask him about The Solar Studio).
He is currently involved in recording The United States Army Band with his father Greggory K. Squires, whom he has assisted many times. Randall realizes the scope and intensity that encompass the world of recording. He understands that every day is a new experience and opportunity to learn, and rightly so is not afraid to get in over his head.

Recording engineer | music production | pro audio tech service | sound reinforcement | electronic music | electric guitar and bass | creative improvisation | circuit bending | experimental photography & visual art. Born in Seattle; Austinite since 1999. Ex-scientist (organic chemistry).

Thomas is a talented engineer/producer who has been professionally producing sound recordings since 1999. Many great players and talents have been through his modest home studio, such as Austin's premier underground progressive pop band Grass, jam band deluxe Groovin' Ground, Fastball's number 1 hit writer Tony Scalzo, former Melvins bassist/producer Mark D, and hip hop artist Bavu Blakes (Mr. Blakes). Thomas is well versed in tracking and mixing techniques and works quickly and with great results. Classical, jazz, pop and rock recordings are all par for the course. Thomas currently divides his schedule between sound production for animation, location video/audio capture, projects in the Still and Tonehaus, and as a performing artist in the classical string quartet Hill Country Strings (violin), dEEp Edward (el. violin/sax), and the Adam Sultan Moment (sax). Thomas apprenticed mastering with Jim Wilson from Airshow Mastering, back when Jim had a studio in Austin.
http://www.stillrecording.com/engineers.html
- The Organon is the standard collection of Aristotle's six works on logic. The name Organon was given by Aristotle's followers, the Peripatetics. They are as ...
- en.wikipedia.org/wiki/Organon_International: Organon was a pharmaceutical company headquartered in Oss, Netherlands. In November 2007, Schering-Plough Corporation, based in New Jersey, USA, ...
- www.amazon.com/Organon-works-Aristotle-Logic/dp/1478305622: The Organon: The works of Aristotle on Logic [Aristotle, Roger Bishop Jones, E M Edghill, A J Jenkinson, G R G Mure, W A Pickard-Cambridge] on Amazon.com.
- www.britannica.com/topic/Organon: Organon: Aristotle: Syllogistic: …a collection known as the Organon, or “tool” of thought.
- www.merriam-webster.com/dictionary/organon: Organon definition is - an instrument for acquiring knowledge; specifically: a body of principles of scientific or philosophical investigation.
- www.dictionary.com/browse/organon: Organon definition, an instrument of thought or knowledge. See more.
- en.wiktionary.org/wiki/organon: organon (plural organons). A set of principles that are used in science or philosophy. Synonym: organum. The name given by Aristotle's followers to his six ...
- study.com/academy/lesson/aristotles-organon-definition-philosophy-summary.html: Let's examine one of the most important works of classical antiquity: Organon, by Aristotle. This work brings together the books of logic written...
- www.sparknotes.com/philosophy/aristotle/section1: Aristotle wrote six works that were later grouped together as the Organon, which means “instrument.” These works are the Prior Analytics, Posterior Analytics, On ...
https://www.ask.com/web?q=Organon&qo=relatedSearchNarrow&o=0&l=dir
Katharine is the TTU CC's co-director and an Endowed Professor in Public Policy and Public Law in the Department of Political Science. She is an atmospheric scientist with a talent for expertly communicating facts about climate change. Examples of questions she addresses are: Isn't climate change part of Earth's natural cycle? Isn't it the sun causing warming? Wasn't there a time when temperatures were warmer than today? Do scientists fabricate data? Is it part of a hoax? Katharine has vast expertise in analyzing observations, comparing future scenarios, evaluating global and regional climate models, and building and assessing statistical downscaling models. This makes her ideally equipped to translate the science of climate projections into information relevant to agriculture, ecosystems, energy, infrastructure, public health, and water resources. Moreover, Katharine, as an evangelical Christian, has a unique perspective and standing among those of faith. Her work has been featured in the top journal Science, and she has participated in documentaries such as the Emmy award-winning Years of Living Dangerously, the PBS Frontline report Climate of Doubt, and the film Merchants of Doubt. Because of her dedication to communicating the science behind climate projections and the associated risks of climate change to a broad audience, Fortune Magazine named her one of the World's Greatest Leaders in 2017 and Time Magazine listed her among the 100 Most Influential People. One such example, the Global Weirding video series, is a fantastic series of short videos informing us about why we know the climate is changing.

Victor Sheng
Victor is an Associate Professor of computer science and the Founding Director of the Data Analytics Laboratory at Texas Tech University.
His research interests focus on data science, specifically on crowdsourcing, data mining, machine learning, big data analytics, deep learning, natural language processing, spatial databases and information retrieval, and related applications such as software engineering, business intelligence, and medical informatics. He has published more than 180 papers, most of them in top journals and conferences in knowledge discovery and data management. In addition, he won the test-of-time research award from the 26th ACM SIGKDD (2020), the best paper award from the International Conference on Cloud Computing and Security (2018), was a best student paper award finalist at the 16th International Conference on Web Information System Engineering (2015), and won the best paper award from the 11th Industrial Conference on Data Mining (2011) and the best paper award runner-up from the 14th ACM SIGKDD (2008).

Shuo Wang
Shuo's research focuses on the quantification of uncertainties in hydrologic predictions using data assimilation (i.e., using data to inform models). He develops quantitative tools to assess how climate change is predicted to affect hydrology, particularly in regard to hydrologic extremes, and to assess management solutions at a wide range of spatial and temporal scales. For example, he used geostatistical tools to determine the changes to the precipitation regime (i.e., when and where precipitation occurs) in regions of Canada.

Anne Stoner
Anne's research focuses on using a suite of statistical downscaling models to produce high-resolution daily projections of various climate variables for station locations or gridded regions. The climate projections she generates are often used for further research in a wide variety of fields, ranging from agriculture and ecological processes to engineering projects. This includes quantifying climate change impacts on infrastructure and how to integrate these assessments into city planning.
Natasja van Gestel
Natasja is a global change ecologist. She explores the effects of climate on soil microbial processes and plant physiology. She uses several quantitative approaches in her research, including data assimilation, meta-analyses, and Bayesian and multivariate analyses. Data assimilation is an approach that uses data to constrain a model. She used this approach to constrain a soil carbon model to assess how the land carbon predictions of Earth System Models compare to observations from field warming experiments. Land contains far more carbon than the atmosphere. Earth System Models predict that with warming, land will lose more carbon than it gains. If so, then warming will result in a positive feedback of land carbon loss, leading to faster rates of atmospheric warming. Ascertaining the strength and direction of this feedback is therefore important.

Zhe Zhu
Zhe is a land change scientist. He combines remote sensing with other sources of information. Using a combination of field measurements, carbon modeling and remote sensing, he quantified changes to ecosystem carbon gains or losses following a change in land use from rural to urban. How landscapes change over time, why they change, and how the shift in land use influences carbon fluxes are his forté.
https://www.depts.ttu.edu/csc/climateandmodels.php
home › Forums › Kitchen Chit-Chat › Ceramic vs glass bowls?
This topic has 3 replies, 2 voices, and was last updated 1 year, 9 months ago by Laura Pazzaglia.

May 10, 2018 at 3:48 pm #880393
Gloria (Participant)
Hi, all. I make oatmeal in my Instant Pot using the pot-in-pot method. [Life-changing! Thanks, Laura!] I often prep the oatmeal in Pyrex bowls for the week (all the dry ingredients: oats, dried fruit, seeds, nuts, salt, etc.) so that on a busy morning all I need to do is add the butter and water (or almond milk) and it cooks while I get ready. Occasionally, on weekends, I’ll make it in a porcelain bowl, and maybe it’s in my head, but I think it comes out even better in the porcelain bowl. I know glass and porcelain/ceramic heat up differently, but they both seem to cook fine using the same times. I just can’t articulate how the two are different; the porcelain seems creamier/lighter. Is it more because the Pyrex is thicker than the porcelain? Or is it about the shape of the bowls (the Pyrex bowls are more cylindrical/flat while the porcelain ones are tapered)? Would thinner glass bowls or tapered ones with lids work better than the Pyrex ones? Or should I try to find porcelain/ceramic bowls with lids? (Lids are a must for the prep-ahead.) Or should I adjust the time for the Pyrex bowls? Thoughts? Or is it just in my head? :)

May 24, 2018 at 9:30 am #881410
Laura Pazzaglia (Keymaster)
Gloria, well I think it’s a little bit in your head and a little bit the shape of the container. I mean, on the weekend everything tastes better! Right?!? ; ) I think that with the tapered bowl, there is a larger surface area of the oats exposed to the pressure steam, which would mean there are more oats that cooked at higher temperatures (vs. the ones in contact with the bowl, below). It sounds like you have a great system for a healthy breakfast.
You can absolutely use lids, but only for storage, because placing a lid on your oatmeal bowl during pressure cooking will considerably slow down the cooking. Using my method (yaay, glad you like it) the oats are getting cooked indirectly through heat transfer of the steam to the bowl and then more directly from the steam itself. So don’t take the latter out of the equation. Come back to keep us updated on your oatmeal adventures! Ciao, L

August 9, 2018 at 2:02 pm #885873
Gloria (Participant)
Hi, Laura. Thanks for the reply (and sorry for the delay in *my* reply). Haha! Yes, weekend food is often better. ;) And to clarify, yes, the lids are only for prep/storage; I don’t use the lids for cooking. :) The increased surface area exposure to the steam makes sense. (I might do a little testing with Mason jars & see how that goes.) Aside: I’m a physics teacher, and I am now intrigued by how different materials (metal vs glass vs porcelain) and containers affect the pressure cooking. I’ll be looking into it and maybe doing some experiments. :)

August 18, 2018 at 10:55 am #885949
Laura Pazzaglia (Keymaster)
OK, Gloria! I’m looking forward to seeing what you come up with. Make sure to look up the “thermal conductivity” of different materials on the Engineering Toolbox website – that was my main source for the Pressure Cooking School segment on heat transfer. https://www.engineeringtoolbox.com/thermal-conductivity-d_429.html Ciao! L
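For Gloria's planned experiments, the first-order physics can be sketched with Fourier's law of conduction, Q = k·A·ΔT/d. The conductivity values below are rough textbook figures, and the wall thickness, area and temperature difference are invented for illustration only:

```java
// Rough comparison of conductive heat flow through two bowl walls using
// Fourier's law: Q = k * A * deltaT / d. Conductivities are approximate
// textbook values; the geometry and temperatures are assumed for illustration.
public class BowlConduction {

    // Steady-state conductive heat flow in watts through a flat wall.
    static double heatFlowWatts(double k, double areaM2, double deltaT, double thicknessM) {
        return k * areaM2 * deltaT / thicknessM;
    }

    public static void main(String[] args) {
        double area = 0.02;   // m^2 of wetted bowl wall (assumed)
        double deltaT = 80;   // K between steam side and oatmeal (assumed)

        // Pyrex ~1.0 W/(m*K) with a 5 mm wall; porcelain ~1.5 W/(m*K) with a 3 mm wall
        double glass = heatFlowWatts(1.0, area, deltaT, 0.005);     // 320 W
        double porcelain = heatFlowWatts(1.5, area, deltaT, 0.003); // 800 W

        System.out.printf("glass: %.0f W, porcelain: %.0f W%n", glass, porcelain);
    }
}
```

With these assumed numbers the thinner, more conductive porcelain wall passes heat faster, which is one plausible mechanism behind the difference Gloria noticed; real bowls differ in shape and mass too, so measurement would still be needed.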
https://www.hippressurecooking.com/forums/topic/ceramic-vs-glass-bowls/
Most of the applications we use on our mobile phones and laptops, commonly referred to as social media applications, provide the option to block abusers. To access the block option, click on the corresponding platform you are facing online abuse on.

Online abuse happens on various platforms on the internet - social media, chat forums, etc. When faced with online abuse, your first step should be to see what the policy of the platform where the abuse is happening is, and what steps the platform recommends for you to take to stop it.
https://nyaaya.org/legal-explainer/blocking-abusers-on-social-media/
For September, both of my Tuesday Blog installments are dedicated to the music of Gustav Mahler, featuring two of his later symphonies: his “Tragic” Sixth and, this week, his mammoth Eighth. Until 1901, Mahler's compositions had been heavily influenced by the German folk-poem collection Des Knaben Wunderhorn. The music of Mahler's many Wunderhorn settings is reflected in his Symphonies No. 2, No. 3 and No. 4, which all employ vocal as well as instrumental forces. From about 1901, however, ...

This installment of Once Upon the Internet looks at a 1950s studio version of Gershwin's opera Porgy and Bess, featuring singers Ella Fitzgerald and Louis Armstrong, which I uploaded years ago from (as I recall) a Russian site. The Porgy discography is vast, from the 1940s "original cast" recording and the soundtrack to the Otto Preminger feature film, to large opera productions by established opera companies, with "concept albums" inspired by the Gershwin work. ...

Related Threads: http://www.talkclassical.com/16215-y...20th-21st.html http://www.talkclassical.com/16364-lieder.html

In a thread started a couple of weeks ago, we were asked to identify our "favourite" contemporary composers in a number of categories. In the Lieder/Song category, I hastily answered George Gershwin. After all, the question specifically asked to identify one favourite, and didn’t ask for a favourite song ...
https://www.talkclassical.com/blogs/itywltmt/singers/?s=772be084d9668c72ab2e8f8f030e13c8
The program at MCHL is designed to meet the physical, social, emotional and cognitive needs of each child. Children are encouraged to develop good work habits and share in the responsibility of their learning. They work at their own pace and on their own level with guidance from dedicated teachers. The Montessori materials are designed to appeal to the child’s natural desire to learn. They are divided into the following areas:

Practical Life
Activities in this area are designed to give children opportunities to practice the skills necessary for everyday life. Children get great satisfaction in the simple tasks that allow them to pour, spoon, scrub and sweep. Not only is the child working toward independence, but the activities help in developing coordination and a sense of order. Children are taught how to care for their own environment and are given lessons in grace and courtesy.

Sensorial Materials
The sensorial materials provide the children with opportunities to explore colors, shapes, smells and sounds. Children learn best when allowed to touch, feel, hold, smell, listen and taste. The sensorial materials are designed to refine the senses while also preparing the child for further learning in math and language.

Language
The verbal skills of the young child are stimulated every moment at MCHL through conversation and exposure to high quality children’s books. Reading and writing are intimately connected, and integrated strategies are offered to ensure success. The tracing of a sandpaper letter provides the tactile, phonetic foundation for later reading. A variety of fun and enriching activities are available to facilitate the child’s emerging writing skills, beginning with the early scribbling stage to keeping a daily journal.

Mathematics
A basic tenet of Montessori education is that understanding is often a matter of seeing and touching. Special equipment helps the child to absorb abstract concepts through the use of concrete materials.
A broad spectrum of activities in the room allows one child to count sets of five or six buttons while next to him a child is adding four-digit numbers through the use of manipulatives.

Art/Music
Art and Music activities are an integral part of the curriculum and available to children every day.

Contact Info
Address: 880 W Church Rd, Sterling, VA 20164
Phone: (703) 421-1112
Fax: (703) 421-9356
E-mail: [email protected]
https://www.mchl.org/our-curriculum/
The present invention relates in general to a transmission control apparatus and, more specifically, to commercial utility vehicles equipped with a manual transmission which varies the speed in stages, although the invention also offers advantages for such commercial vehicles that are equipped with automatic transmissions. With automatic transmissions, auxiliary shifts or shift commands are being increasingly used in today's commercial motor vehicles. The required shift signals for these automatic transmissions are determined primarily on the basis of various conditions inside the vehicle. In hilly areas of a country, in particular, there are a number of increased problems associated with producing meaningful shift signals for such transmissions, and such problems are most specifically encountered on changing road grades. When, for example, a vehicle is travelling uphill over the crest of a hill, it is possible then, because of the decreasing slope, that the vehicle will continue to accelerate with the same amount of drive power. Furthermore, on account of the increasing speed of the motor, a criterion for shifting the transmission into a higher gear will soon be reached. But, in such a situation, it may be inappropriate to shift to the higher gear, because behind the crest of the hill there will normally be a descending slope, and then the new higher gear may be inappropriate on account of excessive speed, and the transmission must then be downshifted in such commercial vehicles to obtain the braking effect of the motor. Quite similar problems will likewise occur when a commercial vehicle is travelling downward over the crest of a hill. Here too, is a situation where it would be inappropriate to shift the transmission into a higher gear, as long as the slope continues to increase, because again, the beneficial braking effect of the motor would thereby be decreased. 
If, for example, the vehicle is travelling through a valley, then, when reaching a downhill grade, it is also inappropriate to shift the transmission too soon to a higher gear, because the driver will usually want to take advantage of the power of the motor to attain even higher speeds and gain some additional momentum for climbing the subsequent hill. Specifically, when an uphill stretch of a valley is being negotiated, it is important not to shift the transmission to a higher gear prematurely, because the slope of the roadway can become so great that the required drive torque cannot, at a somewhat later point, be delivered in the higher gear, and it will then become necessary for the driver to downshift the transmission once again. The present invention teaches an apparatus to automatically determine and control an appropriate shift point in an over-the-road vehicle transmission control mechanism. The shift point control apparatus comprises a differentiating means connected to the transmission control mechanism for substantially continuously determining a difference in a constant speed drive torque on such vehicle motor and for generating a signal value that is representative of a difference in the constant speed drive torque of such vehicle motor. A comparator means is provided which is connected to receive the signal value representative of a difference in the constant speed drive torque for comparing this signal value with a predetermined value to determine both an increasing gradient and a decreasing gradient in a roadway and for generating at least one of a positive and a negative signal value that will be representative of an increasing gradient and an opposite sign signal value that will be representative of a decreasing gradient. A verifying means is also provided which is connected to the transmission control mechanism for verifying the increasing and decreasing roadway gradients.
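As a rough illustration of the differentiating/comparator logic described above (not the patent's actual implementation; the class name, units, and threshold are assumptions for the sketch), the gradient-detection idea might look like this:

```java
// Hedged sketch of the patent's gradient-detection idea: differentiate
// successive constant-speed drive-torque samples and compare the change
// against a threshold (dead band) to classify the roadway gradient.
public class GradientDetector {
    public enum Gradient { INCREASING, DECREASING, LEVEL }

    private final double threshold; // dead band around zero torque change (assumed units)
    private double previousTorque;
    private boolean hasPrevious = false;

    public GradientDetector(double threshold) {
        this.threshold = threshold;
    }

    // Feed successive constant-speed drive-torque samples; the difference
    // between samples approximates the change in roadway gradient.
    public Gradient update(double torque) {
        if (!hasPrevious) {
            previousTorque = torque;
            hasPrevious = true;
            return Gradient.LEVEL;
        }
        double delta = torque - previousTorque;
        previousTorque = torque;
        if (delta > threshold)  return Gradient.INCREASING; // uphill getting steeper
        if (delta < -threshold) return Gradient.DECREASING; // slope falling away
        return Gradient.LEVEL;
    }
}
```

In the patent's terms, the `delta` computation plays the role of the differentiating means and the threshold comparison the role of the comparator means; a real controller would add the verifying means, e.g. requiring the classification to persist over several samples before inhibiting or permitting an upshift.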
Safran has reported that first quarter 2019 revenue amounted to €5,781 million, compared with €4,222 million in the year-ago period. This represents an increase of 36.9%, or €1,559 million. Changes in scope contributed €802 million, of which €781 million related to the acquisition of Aerosystems and Aircraft Interiors, and €21 million related to the acquisition of the ElectroMechanical Systems activities of Collins Aerospace. The net impact of currency variations amounted to €223 million, mainly reflecting a positive translation effect on non-Euro revenues from the USD (the average EUR/USD spot rate was 1.14 in Q1 2019, compared with 1.23 in Q1 2018). On an organic basis, revenue increased 12.6%, as all activities contributed positively. Combined deliveries of CFM engines (LEAP and CFM56) increased by 15.9% to 577 units in Q1 2019 from 498 units in Q1 2018. 424 LEAP engines were delivered in Q1 2019, compared with 186 units in the year-ago period. Regarding the adjustments to the 737 MAX production system announced by Boeing, CFM has maintained the production rate for the LEAP-1B at this point and will undertake temporary adjustments if necessary.
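The quoted figures are internally consistent: subtracting the scope and currency effects from the total change leaves the organic change, and the two growth rates follow from dividing by the prior-year base. A quick check (all amounts in EUR millions):

```python
# Decomposition of Safran's Q1 2019 revenue growth, using the
# figures quoted in the text (EUR millions).
q1_2019, q1_2018 = 5781, 4222
total_change = q1_2019 - q1_2018               # 1,559

scope, currency = 802, 223                     # acquisitions and FX effects
organic_change = total_change - scope - currency

total_growth = 100 * total_change / q1_2018    # reported as 36.9%
organic_growth = 100 * organic_change / q1_2018  # reported as 12.6%
```

Both percentages round to the values stated in the release.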
https://www.avitrader.com/2019/04/29/safrans-revenue-up-36-9-in-first-quarter-2019/
This site is used in the Ponderosa Computer Lab. This is the landing page, or home page ("home with Chrome"), that all students use to get to their grade-level-specific lessons and activities. K-5 students will learn how to use and navigate the site for their grade level. Explore these pages and have fun with your student! Students in Kindergarten through 5th grade learn the basics of how a computer works, Microsoft Office basics (Word, Excel, and PowerPoint), Google Docs and Google Drive, digital citizenship and footprints, coding, internet safety, safe searching, and more. Our lab comprises 28 PCs, on which we primarily use Google Chrome as our browser of choice. Please do not hesitate to contact me should you have any questions or concerns.
https://www.thompsonschools.org/domain/1558
Region of interest (ROI) quantitation is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed for emission image reconstruction to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the smoothing parameter of the image prior in Bayesian reconstruction controls the resolution-noise trade-off and hence affects ROI quantitation. In this paper we present an approach for choosing the optimum smoothing parameter in Bayesian reconstruction for ROI quantitation. Bayesian reconstructions are difficult to analyze because their resolution and noise properties are nonlinear and object-dependent. Building on recent progress in deriving approximate expressions for the local impulse response function and the covariance matrix, we derived simplified theoretical expressions for the bias, the variance, and the ensemble mean squared error (EMSE) of the ROI quantitation. One problem in evaluating ROI quantitation is that the truth is often required for calculating the bias. This is overcome by using the ensemble distribution of the activity inside the ROI and computing the average EMSE. The resulting expressions allow fast evaluation of image quality for different smoothing parameters. The optimum smoothing parameter of the image prior can then be selected to minimize the EMSE.
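The final selection step described in the abstract reduces to a one-dimensional search: evaluate EMSE = bias² + variance over a grid of smoothing parameters and keep the minimizer. The sketch below illustrates only that step; the bias and variance curves here are toy stand-in models, not the paper's derived theoretical expressions:

```python
import numpy as np

# Illustrative selection of the smoothing parameter beta by minimizing
# EMSE = bias^2 + variance. The bias/variance models below are toy
# placeholders (NOT the paper's expressions): bias grows with smoothing
# while variance shrinks, producing the usual trade-off.

betas = np.logspace(-3, 1, 50)      # candidate smoothing parameters
bias = 1.0 - np.exp(-betas)         # toy model: more smoothing -> more bias
variance = 0.5 / (1.0 + betas)      # toy model: more smoothing -> less noise

emse = bias**2 + variance           # ensemble mean squared error
beta_opt = betas[np.argmin(emse)]   # optimum smoothing parameter
```

With the paper's fast theoretical expressions substituted for the toy curves, this grid evaluation is cheap enough to repeat per object and per ROI.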
https://ucdavis.pure.elsevier.com/en/publications/optimization-of-bayesian-emission-tomographic-reconstruction-for-
The great and revered Italian conductor Claudio Abbado has just passed away at the age of 80 at his home in Bologna, Italy. It's difficult to believe that such a force of nature, a conductor so filled with energy and enthusiasm, is no more. But he had been struggling with the effects of stomach cancer for some years, while managing to continue conducting, and his appearance had become more gaunt with age. Many great conductors have been feared by the musicians who played under them; they were strict disciplinarians who were respected but dreaded. Toscanini, Szell and Reiner, for example. But Claudio Abbado was universally loved by the world's greatest orchestras, which he conducted for decades: the Berlin Philharmonic, the London Symphony, the Vienna Philharmonic and the Chicago Symphony, to name only those most closely associated with him, not to mention the orchestras of such great opera companies as the great La Scala opera house in his native Milan and others. Abbado was the gentle giant of the podium; never bossy, overbearing or imperious, he won the respect of orchestral musicians everywhere with his exceptional musicianship and quiet authority. He was equally admired by the world's greatest opera singers, with whom he regularly worked, and the most renowned violinists, pianists and other solo instrumentalists. Claudio Abbado was chosen by the musicians of the mighty Berlin Philharmonic to succeed the legendary Herbert von Karajan as their chief conductor in 1989, shortly after the older maestro died that year, and he had also been principal conductor of the London Symphony Orchestra and music director of La Scala in Milan and the Vienna State Opera. He also served as principal guest conductor of the Chicago Symphony Orchestra during the 1980s.
As a budding young conductor in the early 1960s he was an assistant conductor of the New York Philharmonic under Leonard Bernstein, and he guest-conducted such great American orchestras as the Boston Symphony, Philadelphia and Cleveland on occasion, although his main base was Europe. Later in life he founded several special, more or less ad hoc orchestras, such as the Lucerne Festival Orchestra, a deluxe, hand-picked festival orchestra chosen from the greatest orchestras of Europe, and the Mozart Orchestra of Bologna, as well as the all-European youth orchestra, drawn from the most talented young aspiring musicians of the continent. Abbado was at home in a wide range of orchestral and operatic repertoire, ranging from Mozart and Beethoven to the most important composers of the present day. In opera he was especially renowned for his interpretations of Verdi, Rossini and Mozart, but he also conducted operas by Wagner, Alban Berg, Mussorgsky, Debussy and Richard Strauss. He was a staunch champion of such leading Italian contemporary composers as Luigi Nono, as well as of the music of other avant-garde European composers such as Pierre Boulez and Karlheinz Stockhausen. Abbado made numerous recordings, mostly for Deutsche Grammophon but also for Sony Classical and Decca, of a wide variety of works, and many have become classics, such as his La Scala recordings of operas by Verdi, including Don Carlos, Aida, Simon Boccanegra, Macbeth and Un Ballo in Maschera. Classic Rossini opera recordings include Il Barbiere di Siviglia, La Cenerentola (Cinderella) and Il Viaggio a Reims. There are also recordings of Bizet's Carmen, Mussorgsky's Boris Godunov and Khovanshchina, Wagner's Lohengrin, Debussy's Pelléas et Mélisande, and Berg's Wozzeck. Abbado recorded all nine Beethoven symphonies twice, with the Vienna and Berlin Philharmonics, and the complete symphonies of Schubert, Mendelssohn, Brahms, Mahler and Tchaikovsky.
There are also recordings of numerous works by Prokofiev, Ravel, Richard Strauss, Stravinsky, Berlioz, Bruckner, Debussy and others. Numerous live performances of operas and concerts by the maestro are available on DVD. Few conductors have been so universally loved and admired as Claudio Abbado, by audiences, musicians and singers alike. And few have been less egotistical and imperious. He will be universally missed, but he leaves a great and priceless legacy of achievements.

Recently, a friend of mine and his wife went to see their first opera, at a not too shabby venue: the Metropolitan Opera. He's a psychologist based in Manhattan and a jazz buff. But lately I've been able to increase his interest in classical music and opera, and when he asked me whether he and his wife should try a performance at the Met, I said of course, as you might expect from me. The opera he chose, on my recommendation, was the new Met production of Tchaikovsky's poignant "Eugene Onegin", based on a lengthy verse novel by the great early 19th-century writer Alexander Pushkin. It's the story of a bored and cynical Russian playboy who by chance meets a naive and vulnerable young woman who falls hopelessly in love with him, only to be rejected because he has no interest in settling down as a married man. Several years later he meets her again at a ball in St. Petersburg, where she is now the wife of a much older Russian general. He now realizes that he loves her, but is crushed by her rejection of him now that she is a married woman, even though she still feels love for him. It's a richly romantic opera with plenty of Tchaikovsky's soulful and haunting melodies. Not a bad choice. My friend asked me what to wear, and I explained that there is no dress code, and the only time some people dress formally there is on the opening night of the season, which is a gala occasion. That's right.
If you've never been to an opera performance, those scenes in old movies at the opera, with everybody dressed in tuxedos and gowns, are nothing like the real experience of going to the opera today. People don't go there to show off their fancy clothes; they're there to see and hear an opera. A lot of these people are opera fans, just the same way some people are baseball fans, or fans of football or basketball. Some will always be opera newbies or people who just attend once in a while. There may be some wealthy people in the audience, usually in the expensive boxes, but they too may be big opera fans. There's absolutely nothing stuffy about the opera experience, whatever it may be like. Opera fans discuss the performances just as heatedly as sports fans. But unlike sports, there are no clear-cut winners or losers. They disagree very often. But ultimately EVERYBODY there is a winner, whether the cast or the audience. I also explained that although the opera was sung in Russian by a mostly Russian cast of singers, and the conductor was also Russian, the Met has a system whereby you can see an English translation of whatever opera is being performed on the back of the seat in front of you, and this certainly helped to enhance their enjoyment of the opera. Many other opera houses use supertitles, whereby a translation is projected above the stage. But the Met stage is so enormous that it's impossible to project a translation so that everyone can see it, hence the ingenious so-called "Met Titles". So if you've never had the pleasure of attending an opera performance at any of the who knows how many opera houses which exist all over the globe, don't hesitate! Would my friend and his wife like to go to more Met performances? The answer was a definite yes!
http://www.blogiversity.org/blogs/the__horn/archive/2014/01.aspx
Designing the Compassionate City outlines an approach to urban design that is centred on an explicit recognition of the inherent dignity of all people. It suggests that whether we thrive or decline, as individuals or as a community, is dependent on our ability to fulfil the full spectrum of our needs. This book considers how our surroundings help or hinder us from meeting these needs by influencing both what we can do and what we want to do: either inspiring us to lead healthy, fulfilled lives or consigning us to diminished lives tainted by ill health and unfulfilled potential. Designing the Compassionate City looks at how those who participate in designing towns and cities can collaborate with those who live in them to create places that help people to accumulate the life lessons, experiences and achievements, and to forge the connections, needed to meet their needs, to thrive and to fulfil their potential. The book explores a number of inspiring case studies that have sought to meet this challenge and examines what has worked and what hasn't. From this, some conclusions are drawn about how we can all participate in creating places that leave a lasting legacy of empowerment and commitment to nurturing one another. It is essential reading for students and practitioners designing happier, healthier places. Buy Designing the Compassionate City by Jenny Donovan from Australia's online independent bookstore, BooksDirect.
https://www.booksdirect.com.au/designing-the-compassionate-city/jenny-donovan/book_9781138562707.htm
What is sometimes overlooked in drafting and negotiating discovery requests is the form of document production. Specifying the form of production within discovery requests can save a significant amount of time and money in the document review phase and lead to the production of valuable data that would not otherwise be produced at all. While some of the categories discussed below may seem obvious, in our experience even law firms with extensive litigation and e-discovery experience have at times failed to request this data and been disadvantaged as a result.

Metadata

All productions of electronic documents should at minimum contain certain basic metadata fields, including:
- File name
- File created and modified dates
- Email sent date
- Custodian
- Author
- Last modified by
- Recipients (including blind copies (bcc))
- Source path
- Subject line

The purpose of metadata is to preserve information about documents that would be lost if, for instance, you printed out a document and handed it to opposing counsel. Who wrote the document? Who received it, and when? Where important data is not available on the face of the document, success can turn on the ability to analyze this metadata. Not only does this information provide key facts not apparent from the document, it also assists in the efficient and accurate review of the documents. Analyzing metadata is one of the key ways that the document review team can intelligently identify privileged, key, and responsive documents without having to engage in a linear, manual review.

Document Format: TIFFs and Natives

While TIFF format is an acceptable form of production for many documents, certain file types need to be produced in native format. For certain documents, significant content, such as speaker notes in presentations or track changes and comments in word processing documents, is lost in the TIFFing process.
For others, such as Excel spreadsheets, the TIFF format no longer presents data in the coherent manner that the native file does. Specifying the types of files that must be produced in native format at the beginning of the discovery process allows for a more accurate review and negates the delay inherent in having to request the documents be produced again in native format (if that’s even possible). The additional information in these documents can also be very useful in crafting a document review strategy. Documents that contain comments and tracked changes, for example, are often more likely to contain privileged and/or interesting information. Spreadsheets and presentations often focus on the key issues in a matter in a predictable way, and prioritizing review of certain categories or sub-categories of document types can enable the front-loading of documents that are most likely to be important, allowing for the law firm and the client to gain a more complete understanding of the facts earlier in the discovery process.

Define “Documents”

It is also important to specify all types of data being requested. While it is standard to request the production of all paper and electronic documents, failure to individually list all categories of requested “documents” can lead to the omission of key data sources and important information. Consider the specifics of your litigation and whether circumstances are such that it is likely that unusual data, such as chats, texts, shared drives, structured data, specific databases, voicemails, and telephonic recordings may be involved. Requesting these types of documents in discovery requests ensures that the client gets the facts and communications central to building the case, especially in industries that are known to rely on these forms of communications.
When considering what types of documents to request, it is important to keep in mind that you, in turn, will most likely also be asked to produce those types of documents as well. If a party believes that the burden will be much greater on themselves than the opposing party to review and produce a certain category of document, and this burden is not outweighed by the possible benefit of receiving the other party’s production of the same data, it makes sense to modify the discovery requests accordingly. Careful consideration of the players involved, the type of industry, and other specifics of the litigation at hand should be undertaken to ensure that you are focusing on the types of documents most likely to contain the information needed. Requesting ESI is a process that can be fraught with pitfalls. The processes outlined above provide some helpful steps that can assist you in avoiding those pitfalls, allowing for a streamlined and efficient review of ESI and the development of a complete understanding of the facts and issues involved in your case.
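In practice, the baseline metadata fields listed above can be checked mechanically against a production load file before review begins. The sketch below is a hypothetical helper for that check; the field names follow common load-file conventions and are assumptions for illustration, not any e-discovery standard:

```python
# Hypothetical QC helper: verify that each record in a production load
# file carries the baseline metadata fields discussed above. Field
# names are illustrative assumptions, not a formal standard.

REQUIRED_FIELDS = {
    "FileName", "DateCreated", "DateModified", "DateSent",
    "Custodian", "Author", "LastModifiedBy", "Recipients",
    "SourcePath", "Subject",
}

def missing_metadata(records):
    """Return a list of (row_index, missing_fields) for records that
    lack any required field or carry it with an empty value."""
    problems = []
    for i, rec in enumerate(records):
        present = {key for key, value in rec.items() if value}
        missing = sorted(REQUIRED_FIELDS - present)
        if missing:
            problems.append((i, missing))
    return problems
```

Running such a check on receipt of a production makes it easy to raise deficiencies with opposing counsel while the meet-and-confer record is still fresh.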
https://www.lawtechnologytoday.org/2017/08/e-discovery-request-youre-requesting/
The Economist Intelligence Unit (EIU) revealed that New Zealand has maintained its spot as the world’s fourth most democratic country. The EIU’s 2018 Democracy Index was dominated by Scandinavian countries, with Norway, Iceland and Sweden ranking in first, second and third place respectively. Denmark and Finland also ranked in the top 10, while New Zealand formed part of the 20 full democracies worldwide. A less optimistic result was recorded in the United States, with the country dropping four places from 2017, albeit partially due to improvements in other countries. A significant decline in the public’s trust in the country’s institutions had led to the US being classed as a flawed democracy. The same trend has been seen in some Western European countries, where anti-establishment parties have gained popularity amid a lower quality of democracy. Other countries classed as flawed democracies include Italy, Portugal, France, Belgium, Cyprus and Greece. The index is based on five categories, only one of which, political participation, improved globally in 2018. The EIU said: "The results indicate that voters around the world are in fact not disengaged from democracy. They are clearly disillusioned with formal political institutions but have been spurred into action. There was also a jump in the proportion of the population willing to engage in lawful demonstrations around the world, almost without exception." Results also showed that political participation among women increased greatly over 2018 and the last decade; women’s political participation advanced more than any other indicator in the Democracy Index. Meanwhile, limitations on free speech by state and non-state actors have been on the increase, heavily impacting democracy. The overall score recorded in New Zealand was 9.26, the same as it has been for nine years.
Its scores for each individual category remained as they were last year: 10 for electoral process and pluralism, 9.29 for functioning of government, 8.89 for political participation, 8.13 for political culture, and 10 for civil liberties.
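The category scores are consistent with the overall figure if, as in the EIU's published methodology, the overall index is taken as the simple average of the five category scores. A quick check reproduces New Zealand's 9.26:

```python
# New Zealand's five EIU Democracy Index category scores, as quoted
# in the article above. The overall index is assumed here to be the
# simple average of the five categories.
scores = {
    "electoral process and pluralism": 10.0,
    "functioning of government": 9.29,
    "political participation": 8.89,
    "political culture": 8.13,
    "civil liberties": 10.0,
}
overall = round(sum(scores.values()) / len(scores), 2)  # 9.26
```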
https://www.devere-newzealand.nz/news/New-Zealand-fourth-most-democratic-country
We analyzed the taste preferences of people around the world to create the Sellas Extra Virgin Olive Oil. It mixes cultivars from the southwest and northeast parts of the Peloponnese in a delicate condiment that gives extra flavor to every meal of the day, every day.

Blend, cold-pressed
Region: Corinthia, Argolida, Messinia
Cultivar: Manaki – early November; Koroneiki – late November
Flavor profile (scale): Mild / Intense / Fruity / Bitter / Pungent
Notes – Main:
http://sellas.gr/extra-virgin-olive-oil-black/
FIELD OF THE INVENTION

The invention relates to the pre-waste-production stabilization of heavy metal bearing hazardous and/or solid waste subject to direct aqueous analyses, solid phase acid leaching, distilled water extraction, the California Citric Acid Leaching test and other citric leaching tests, and/or the Toxicity Characteristic Leaching Procedure, by use of water soluble stabilizing agents such as flocculants, coagulants and heavy metal precipitants including sulfides, carbonates and phosphates. The stabilizing agents are added to the material production, development or process prior to the first generation of any waste material. This approach responds directly to the RCRA requirement that exempt treatment of hazardous wastes be in a totally enclosed fashion, as well as allowing stabilization of heavy metal bearing particles to occur in a pre-mixed, as-produced manner in order to assure a consistent and accurate ability to pass the waste extraction method of interest. The combination of pre-waste materials with treatment additives such as epoxy agents, precipitants, flocculating agents, and granular activated carbon particles provides for as-produced stabilization, where the need for post-produced waste mixing, feed controls, collection as a waste, storage manifesting, and expensive and burdensome post-waste treatment is obviated. One specific use under evaluation and study by the inventor involves the seeding of black beauty and other sand blast grit materials with various forms of air-entrainable particle precipitants and minerals, which would provide for an integral mixed soluble phase of heavy metal precipitant within the post-sandblast waste generated that would be released under a leaching exposure of the waste after sandblasting of Pb, Cu, Zn, and other metals contained in paints, such as in ship yards.
The advantage of the pre-waste stabilizer additive here is that collection of the heavy metal bearing waste will not be as necessary for environmental and/or TCLP waste handling reasons, and upon any such collection the grit and paint products will have been seeded, thus requiring no RCRA permitting for hazardous waste treatment or handling. Another specific use of pre-waste stabilization involves the injection of particulate water soluble precipitants, flocculants, coagulants and/or mineral salts directly into the processing lines of auto shredders and wire-chopping systems, such that the first generation point of fines, dust, wastes, fluff and/or plastics has been seeded with such stabilizing agents; the produced waste will thus pass TCLP criteria and be exempt from RCRA Part B permitting. The general approach of the pre-waste stabilization technology described herein can be utilized in many waste generation systems, such as incinerators producing ash materials, wastewater sludge production, drilling tailings production and storage tank sludge collection. The specific application of stabilization agents into the process prior to the generation of wastes would be designed and operated on a case-by-case basis.

DESCRIPTION OF RELATED TECHNOLOGY

Leaching of heavy metal bearing wastes and direct discharges of heavy metal bearing wastewaters have been of concern to environmental regulators, waste producers and various health officials since the 1970s and the promulgation of the Resource Conservation and Recovery Act (RCRA). Under RCRA, solid wastes may be considered hazardous if the waste leaches excessive heavy metals under the Toxicity Characteristic Leaching Procedure (TCLP). In addition, various states, such as California, Minnesota and Vermont, require additional leaching tests on solid waste in order to classify the waste and direct the wastes that leach more heavy metals to hazardous waste landfills.
In order to avoid having solid wastes be required to be handled at more expensive hazardous waste landfills, various researchers and solid waste businesses have investigated methods to control the leaching of heavy metals, such as lead, from the solid waste. The art has looked at the control of leaching by ex-situ methods involving portland cement, silicates, sulfates, phosphates and combinations thereof. See U.S. Pat. Nos. 4,629,509 (calcium sulfide) and 4,726,710 (sodium sulfur oxide salt), which are incorporated by reference.

SUMMARY OF THE INVENTION

Existing heavy metal treatment processes are designed and operated in a post-waste-production mode or remediation mode and thus ignore the advantages of introducing stabilizing agents into the product stream prior to or during waste production. It is an object of the invention to provide a method that effectively treats heavy metal bearing wastes by the use of water soluble stabilizing agents such as dry alum, activated carbon and/or heavy metal precipitants (e.g. sulfides and phosphates) such that the stabilized waste will resist the leaching of copper, zinc, lead, cadmium and other heavy metals. It is another object of the invention to provide a method of in-line stabilization which allows for hazardous and solid waste treatment without the need for any post-waste-production mixing device, and for the treated waste to remain free-flowing. It is a further object of the invention to provide for the mix of treatment chemicals to be added directly to the material generated prior to a waste classification, and thus avoid the need to treat the waste as a hazardous waste under RCRA and the need for treatment permitting.
In accordance with these and other objects of the invention, which will become apparent from the description below, the process according to the invention comprises: adding a stabilizing agent, for example a flocculant, coagulant and/or precipitant, or mixture thereof, such as ferric chloride, alum, ferric sulfate, feldspar, clays, activated alumina, phosphates or wastes comprising these elements, in sufficient quantity that the treatment chemicals are dispersed onto or into the pre-waste material such that the produced waste will pass the regulatory limits imposed under the acid leaching tests or similar aggressive, natural or distilled water leaching environments. Providing for a sufficient pre-waste seeding of stabilizing agents assures passage of TCLP leaching criteria and/or other relevant leaching tests, in order to characterize the waste as non-hazardous and/or to reduce the solubility of the heavy metal bearing waste to a point considered suitable by the appropriate local, state and/or federal leaching criteria.

DETAILED DESCRIPTION

One of the most costly environmental tasks facing industry in the 1990s will be the clean-up and treatment of heavy metal bearing wastes, both solid and hazardous, at old dump sites, storage areas and retention areas, and at existing waste generation sites such as process facilities or incinerators throughout the world. Depending on the specific state and federal regulations, those wastes will be classified as either solid, special or hazardous. The management options for the waste producer vary greatly depending on the waste classification and the regulatory requirements associated with that classification. The most stringent waste classification is that of hazardous. There exist various methods of stabilizing and solidifying heavy metal bearing hazardous wastes. The most common method, using portland cement for physical solidification, is common knowledge in the environmental engineering field.
There exist several patented processes for hazardous waste treatment using carbonates, polysilicates, phosphates and versions of portland cement. These patented methods and the use of portland cement all recognize the need to control chemistry and to provide for mixing of the waste and the treatment chemicals in order to control heavy metal solubility, as tested by the federal TCLP acetic acid leaching test, by either precipitation of the heavy metal into a less soluble compound or physical encapsulation of the waste and surface area reduction. Wastes subject to regulation are usually tested via the USEPA TCLP extraction method. The TCLP extraction method is set out in the USEPA SW-846 manual on how to sample, prepare and analyze wastes for hazardousness determination, as directed by the Resource Conservation and Recovery Act (RCRA). The TCLP test by definition assumes that the waste of concern is exposed to leachate from an uncovered trash landfill cell; thus the TCLP procedure calls for the extraction of the waste with a dilute acetic acid solution which simulates co-disposal with decaying solid waste. In the method of the invention, a stabilizing agent can be used to reduce the leachability of heavy metals, such as lead, copper, zinc, chromium and cadmium, from a heavy metal bearing waste by contacting the stabilizing agent with the product from which the waste is generated, or with the generated waste while in the waste generation stream. Wastes stabilizable by this method include various types of waste materials from which heavy metals can leach when subject to natural leaching, runoff, distilled water extraction, sequential extraction, acetic acid, TCLP and/or citric acid leaching or extraction.
Examples of such heavy metal leachable wastes, include, for instance, wire chop waste, auto shredder fluff, sludges from electroplating processes, sand blast waste, foundry sand, and ash residues, such as from electroplating processes, arc dust collectors, cupola metal furnaces and the combustion of medical waste, municipal solid waste, commercial waste, sewage sludge, sewage sludge drying bed waste and/or industrial waste. In one embodiment, a stabilizing agent is contacted with the product prior to generating a waste from the product. For example, the stabilizing agent can be contacted with the product while the product is in a product storage pile and/or while the product is in a waste generation stream. Further, the stabilizing agent can be directed onto the product while in said stream and/or onto the waste generation equipment which transports the product and/or operates upon the product to form the heavy metal bearing waste. For example, to reduce heavy metal leachability from auto shredder wastes, such as fluff, a stabilizing agent is added prior to generation of the wastes, which are collected after baghouse and cyclone collectors, including adding the stabilizing agent to auto shredder units, to conveying units or to handling units. In another embodiment, heavy metal leachability from wastes, which are generated by chopping insulated wires, such as wire or fluff mixed with PVC, or paper, which surrounded the wire, are reduced by adding a stabilizing agent to the waste generation stream. The stabilizing agent can be added to the wire prior to, or after, primary and/or secondary choppers, separating beds, pneumatic lines, cyclones or other handling or processing equipment. In yet another embodiment, the leachability of waste, generated from sand blasting a surface painted with heavy metal bearing paint, is reduced by contacting a stabilizing agent with the paint particles as the paint particles are generated by the sand blasting. 
The stabilizing agent can be blended with the grit used for sand blasting prior to blasting the painted surface, or coated onto the painted surface prior to blasting with the grit. The existing hazardous waste treatment processes for heavy metal bearing wastes fail to consider the use of pre-waste stabilizer seeding and fail to design a treatment with the expectation of using the TCLP extractor as a miniature continuous flow stirred tank reactor (CFSTR) in which complex solubility, adsorption, substitution, exchange and precipitation reactions, as well as macro-particle formation, can occur. The invention presented herein utilizes the TCLP, WET and/or distilled water (DI) leaching extractor as a continuous stirred tank reactor, similar to those used in the wastewater industry for flocculant, coagulant and precipitant reactions. In addition, the invention presented herein utilizes the post-extraction filtration with 0.45 micron filters as the method of formed particle capture and removal, similar to that conducted by the rapid sand filters used within the wastewater and water treatment fields. Existing heavy metal treatment processes are designed and operated relying upon post-waste production treatment. This approach ignores the regulatory, process, handling and permitting advantages of combining stabilizing agents, such as retaining matrixes, coagulants and precipitants, with the material to be wasted prior to such waste activity. The ratio and respective amount of the applied stabilizing agent added to a given heavy metal bearing material will vary depending on the character of such heavy metal bearing material, the process in which the waste is produced, the heavy metal content and the treatment objectives. It is reasonable to assume that the optimization of highly thermodynamically stable minerals which control metals such as Pb will also vary by waste type, especially if the waste intrinsically contains available forms of Cl, Al(III), sulfate and Fe.
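The weight-percent dosing described here reduces to simple proportion arithmetic. The Python sketch below is purely illustrative and not part of the patented method: the function names and throughput figures are assumptions, and the 4 wt% dose is taken from the examples later in this text; any real dose would be set by a treatability study, as discussed.

```python
# Illustrative dosing arithmetic for a stabilizing agent applied at a
# target weight percent. Function names and numbers are assumptions,
# not part of the patented method.

def additive_mass_kg(waste_mass_kg: float, dose_wt_pct: float) -> float:
    """Mass of stabilizing agent for a batch at a given weight-percent dose."""
    return waste_mass_kg * dose_wt_pct / 100.0

def feeder_rate_kg_per_hr(stream_rate_kg_per_hr: float, dose_wt_pct: float) -> float:
    """Feeder setpoint for on-line dosing of a continuous waste stream."""
    return stream_rate_kg_per_hr * dose_wt_pct / 100.0

if __name__ == "__main__":
    # hypothetical 1000 kg batch of sand blast grit at a 4 wt% dose
    print(additive_mass_kg(1000, 4.0))       # 40.0 kg of additive
    # hypothetical 2500 kg/hr wire chop line at a 4 wt% dose
    print(feeder_rate_kg_per_hr(2500, 4.0))  # 100.0 kg/hr feeder setpoint
```

The same proportion applies whether the agent is metered onto a batch, a pile, or an on-line conveyor; only the mass basis changes.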
The current methods incur an extensive cost in assuring waste-to-treatment additive mixing with heavy equipment, waste handling and excavation. The invention presented herein changes that basis and stands on the principle that pre-seeding the waste will suffice for any and all forms of mixing, and that regulators will allow such seeding, such that natural or simulated rainfall would carry the treatment chemical to the areas which, by natural leaching pathways, demand the most epoxy, flocculant, coagulant and precipitant treatment. Thus, for stabilization of heavy metals within a waste pile, a stabilizing agent is added to the top of the pile and is then dispersed into said pile by leaching. Alternately, a stabilizing agent can be tilled into the first several feet of depth of the product in a product pile, thereby allowing a time release of the stabilizing agent into the product pile and its leaching pathways. The leaching can be natural, such as leaching resulting from rainfall, and/or the leaching can be induced, such as by spraying or injecting water at or below the surface of the product pile. The present invention also utilizes the mixing time and environment provided within the extraction device, thus eliminating the need for the treatment additives to be mixed in the field. The sampling population required under SW-846, in addition to the mixing within the extractor, provides for ample inter-particle action and avoids the need for the expensive bulk mixing used with cements and common precipitant treatments now applied in full scale waste treatment and site remediation activities.

EXAMPLE 1

In this first example, a medium sand blast grit was mixed with agglomerated Diammonium Phosphate prior to sand blasting a Pb bearing paint. As shown in Table 1, the grit was first subject to TCLP leaching without the pre-waste treatment and then with 4 percent by weight Diammonium Phosphate.
The results show that the combination of Black Beauty grit blast material and dry agglomerated phosphate met the regulatory limit of 5.0 ppm soluble Pb under the TCLP acid leaching test. The extraction used a 1000 ml tumbler and TCLP extraction fluid #1 in accordance with the TCLP procedure. Pb was analyzed by ICP after filtration of a 100 ml aliquot through a 0.45 micron glass bead filter.

TABLE 1
Pb from Sand Blast Residues Subject to TCLP Leaching

  Untreated               4% Diammonium Phosphate
  47 ppm                  < 0.05 ppm

EXAMPLE 2

In this example, a copper wire waste was mixed on-line with Triple Super Phosphate prior to separation of the wire from its housing through a chopping line, and thus prior to any generation of waste. The addition of Triple Super Phosphate was controlled by a vibratory feeder with a slide gate, which controlled the volumetric rate of Triple Super Phosphate onto the sections of wire passing by on a vibratory conveyor. After the on-line mixture, the wire and additive were subject to high speed chopping and air separation of the plastic housings and paper off of the copper wire. At this point in the process, the wire is considered a product and thus exempt from TCLP testing. The removed plastic and paper are lead bearing and, unless treated as above, are considered a hazardous waste. The combination of the wire waste and the Triple Super Phosphate resulted in a waste which passed TCLP testing and was thus allowed to be managed as a solid waste or for reuse and recycling.
TABLE 2
Wire Chopping Wastes Subject to TCLP Leaching

  Untreated               4% Triple Super Phosphate
  8 ppm Pb                < 0.5 ppm Pb

From the above examples, it is apparent that a large number of combinations of products and treatment additives could be mixed prior to the generation of the product waste, so that the waste as generated would contain a sufficient quantity and quality of heavy metal stabilizing additives such that the waste, as tested by TCLP, would pass regulatory limits and thus avoid the need for post-waste production stabilization. The exact combination of stabilizing additives for each waste would be determined by evaluating local waste products and/or chemical supplies and by conducting a treatability study to identify the mixture that achieves the end objective of soluble heavy metal control within the produced waste material in the most cost-efficient manner. The exact mix recipe and dosage will vary with the waste stream, as shown in the above examples, and with the aggressiveness of the leaching test or the objective for waste stabilization.
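The pass/fail screening implied by these examples can be sketched in a few lines of Python. This is an illustration only: the 5.0 ppm soluble Pb limit is stated above, the data are taken from Tables 1 and 2, and recording a below-detection result as the detection limit itself is a conservative convention introduced here, not part of the examples.

```python
# Screen TCLP extraction results against the stated regulatory limit
# for soluble Pb (5.0 ppm). Below-detection results ("< x ppm") are
# conservatively recorded as the detection limit x itself.

TCLP_PB_LIMIT_PPM = 5.0

def passes_tclp(result_ppm: float, limit_ppm: float = TCLP_PB_LIMIT_PPM) -> bool:
    """True if the measured soluble Pb is under the regulatory limit."""
    return result_ppm < limit_ppm

# Data from Tables 1 and 2 above
results = {
    "sand blast grit, untreated": 47.0,
    "sand blast grit, 4% Diammonium Phosphate": 0.05,    # reported as < 0.05
    "wire chop waste, untreated": 8.0,
    "wire chop waste, 4% Triple Super Phosphate": 0.5,   # reported as < 0.5
}

for waste, pb_ppm in results.items():
    status = "PASS" if passes_tclp(pb_ppm) else "FAIL (hazardous)"
    print(f"{waste}: {pb_ppm} ppm Pb -> {status}")
```

Both treated residues clear the limit by an order of magnitude or more, while both untreated residues fail, which is the point of the pre-waste seeding approach.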
We know creativity works. We know innovation works. We know that purpose works. In this essay, I will demonstrate that neurodiversity works, for creativity, innovation, purpose – and for everyone. In Why Should Anyone Work Here? by Rob Goffee and Gareth Jones, published by Harvard Business Review, the authors reflect on organisational cultures of the 50s and 60s. The old goal was to mould ourselves to fit into a company: to trim off any of the eccentricities that make us who we are, so that we fit in better with others. We know that this strategy does not produce satisfied employees. Conformist organisations aren’t just constraining and boring; they also don’t produce great work. In fact, diversity of thought and experience – the differences between us all – is essential to creating excellent work. We need disagreements to create and innovate. But we also need inclusive cultures, so that we can disagree safely and move on with our workplace relationships, without any bad blood. At WPP, a creative transformation company, we need our workforce to be the most creative and innovative. Just as AI begins to threaten jobs, and industries of all kinds name creativity and problem-solving as key skills for the future, we see the emphasis on these skills within the education system dropping off. We see traditional talent pools drying up, as the competition for talent from elite universities becomes even more intense. We know we must appeal to all demographics, as talent is spread among all. We know we must work harder to reach out to those from ethnic minorities and less privileged socio-economic backgrounds, to women, LGBTQ+ people and people from all generations. We must develop those who don’t fit the traditional mould by giving them opportunities to thrive and magnify their strengths. We must provide flexibility for parents, carers and older people, and recognise that life happens around work.
All this diversity and inclusion work takes time and deep thinking, but more importantly it requires empathy. How do we build empathy into our culture? How do we create the psychological safety for people to bring their whole selves to work? How do we let people know that difference is valued, and that their individual needs are taken seriously by their employer?

The untapped revolutionaries

There is a huge pool of talent, making up 20% of the population here in the UK. This pool was shown to outperform its peers in the IPA Diagonal Thinking study, a measure of both lateral and linear thinking, which is crucial to success in our industry. They are problem solvers, because their everyday life is beset by obstacles to overcome. And the race to attract these people has already begun. The Valuable 500 launched in 2019 at the World Economic Forum, challenging companies to take the full breadth of diversity seriously – by putting disability on boards’ agendas. I am very proud to say that WPP has signed that pledge. One in five of us lives with a disability. Scope, the disability charity, estimates that there are 13.9 million people in the UK who are disabled. And an estimated 80% of those disabilities are hidden – just 10% of disabled people are wheelchair users, despite our common conception of disability being all about mobility. Disabled people are often thought of as the final group to be included in the workplace. And it’s true: only 52% of disabled people who could work have found a suitable role. This represents a huge cost to society in the form of out-of-work benefits, but more importantly, it is a massive waste of talent. So many disabled people are naturally creative but are stuck at home. We know that disabled people are far more likely to have “neurodivergent” brains – minds that perceive and work differently from the norm.
And when neurodiversity has been harnessed – as in the cases of Steve Jobs, who was dyslexic, and Greta Thunberg, who is autistic – there have been revolutionary leaps in thinking. As we make the adjustments necessary for an inclusive and accessible workplace, we will be enhancing our work, our culture, and the representation and quality of life of all disabled people.

New systems reinforce old obstacles

The creative and tech industries have been building a brave new world online for the past 30 years. This world bypasses the barriers that so many disabled people face when trying to get involved in society. Unfortunately, this new world isn’t the utopian vision of disability inclusion that we hoped for. We see new accessibility challenges, as designs haven’t been tested by disabled people, making recruitment unnavigable for many. Increasingly, it’s becoming impossible to apply for a job without using the internet, and the tests you find online are difficult to adjust. Talent acquisition software has promised to take the pain out of recruitment. But does that also mean it is removing this pool of creatives? Of all disabled people, the group least likely to gain employment is autistic people. I should know, as I myself have a diagnosis of autism, and I found it incredibly difficult to access employment, despite my abilities and experience. Autistic people have a unique, analytical cognitive style, which can afford talents in areas such as systems thinking, attention to detail, the ability to hyper-focus, comfort with repetitive tasks and visualising problems. These talents can lend themselves to data analysis, AI and statistics, and software design and development – exactly the areas in which we are desperate for talent.
But beyond those stereotypical autism-friendly roles, autistic people can also spot inefficiencies in systems, and solutions for complex problems, ahead of their neurotypical peers. So why is it that only 16% of autistic people who want to work are in work, when we know that the jobs and skills of the future are exactly the kinds of things autistic people excel at? Autistic people are simply too “different” for systems designed for neurotypical people. We know that of all the skills interviews test candidates on, social and communication skills matter most to a hiring manager’s impression – exactly the area in which autistic people are most disadvantaged. The autistic candidate’s strengths may lie in the extra productivity and alternative perspectives they bring. Their unusual education and work histories can be a barrier, and job applications might not give room to detail self-taught skills and experience. Today, new talent acquisition software that uses AI to analyse your body language during a video interview has the exact same biases as humans against the behavioural differences that autistic people display, but the oversight to correct for those biases has been removed. The same effects are seen for people who have had a stroke or have a facial disfigurement. When presented with these differences, the computer says “no”.

Hacking systems for difference

Creativity involves hacking – finding new routes through a problem, bringing together different strategies, trying lots of things until something sticks. Because of our systemic thinking, autistic people make natural hackers. We see flaws and inefficiencies in systems quickly and know how to exploit them. This skill can be applied by autistic people to any system. It’s fundamentally the same problem-solving ability you see in disabled people who find ways to do the things that they struggle with, finding workarounds in a world which simply isn’t designed for them.
It’s the same ability you see in dyslexic people, who rely on their brain’s visual system, rather than the language systems that neurotypical people use. If spelling mistakes and putting words the wrong way around are grounds on which we might reject an application, we may well be rejecting someone with a brilliant visual and creative mind. Just as symbols get mixed up in the dyslexic person’s head, so do images, in the same way that puns exploit the bringing together of two disparate concepts that might sound similar. Dyslexic, bipolar and schizophrenic people show this extraordinary ability to make links, allowing them to excel in creative roles such as art, comedy and writing. These are the creatives we need, but without the right support and connections, they end up unemployed or in low-paying work which neglects their abilities. Game theorists point out that in any competitive system there is value in using unconventional rules. In recruitment, an advantage can be gained by finding talent that is undervalued by everyone else. We have seen autistic people being employed as developers despite having poor qualifications compared with their colleagues – perhaps they dropped out of school or could only focus on a few subjects – but outperforming them three-fold in terms of productivity. So, what if we could hack our recruitment systems, just as disabled people use different strategies and systems to get around their impairments and society’s barriers? This is an excerpt from Why neurodiversity works for creativity, published in The Atticus Journal, Volume 25, on 10 November 2020.
https://www.wpp.com/wpp-iq/2020/10/diversitys-last-frontier
Your health is dynamic. Events happen, diseases wax and wane, and none of us can escape the process of aging. Your health is also complex. The human body has many systems, all of which interact with one another. Diagnosis and treatment are rarely straightforward. Tests are ordered, treatments are tried, and specialists get involved. This requires coordination, documentation, and follow-through by someone who knows you well. This is continuity of care. Continuity of care is concerned with how your health is followed over time.

Components of continuity of care

Continuity of care can be broken down into three components: relationship continuity, information continuity, and management continuity.

Relationship continuity - A relationship with a primary care doctor is where continuity of care begins. It is essential. The same doctor over time - from phone call to phone call, visit to visit, and year to year. With time, this doctor begins to accumulate knowledge about you - your values, your passions, and your fears; knowledge difficult to record in a chart, yet key to making good medical decisions.

Information continuity - Medical care produces a mountain of information. Your health records need to be collected and accessible in a central location. Only with full access to complete information can your doctors help you make good medical decisions.

Management continuity - The complexity and dynamic nature of your health require coordination. A single entity needs to assume accountability - an entity with the infrastructure to monitor tests, treatments, chronic conditions, and specialty consultations, and ensure follow-through.

In observational studies, patients with good continuity of care have better health outcomes. These outcomes include greater patient satisfaction, increased adherence to medication, fewer ER visits, reduced hospital use, and even reduced mortality.
Poor continuity of care

A major criticism of current medical practice is poor continuity of care. It is widespread. There are several reasons for this:

Uncertain accountability - Continuity of care is shared between primary care doctors and specialists; it is not clear who is responsible.

Inappropriate incentives - Primary care doctors are not incentivized to ensure continuity of care - it is not reimbursed. Instead, primary care doctors are reimbursed for more office visits, more tests, and more procedures.

Resources - Most primary care doctors do not have adequate time or personnel to ensure continuity of care.

This leads to care that is fragmented: care from multiple players, with poor access to your records, unfamiliar with your case. Let me illustrate with the following examples of poor continuity of care:

The lost test result. A doctor orders a mammogram. A suspicious lesion is found and the report is faxed to the office. However, it does not make its way to the doctor's desk. The patient, not hearing back from the office, assumes the result is negative. The office does not have a system in place that ensures follow-through of tests and test results.

The after-hours call. David has a long history of episodic chest pain. Extensive testing has ruled out heart problems; it's likely due to anxiety. Over the weekend, he experiences a severe attack and calls his doctor's answering service. His doctor is not on call, and instead, the service connects him with another doctor. Not knowing his history or having access to his chart, she tells him to go to the ER. She views David's chest pain as a heart attack until proven otherwise. Why take a risk or expose herself to litigation? The ER doctor treats the situation similarly; David gets admitted to the hospital for a full workup and more testing. This exposes him to the dangers of a hospital stay and unnecessary testing.
Ironically, the entire incident generates more anxiety for David, and none of it is documented in the chart because the practice lacks the personnel to retrieve hospital records. All of this could have been easily avoided had David spoken with his regular doctor in the first place.

The Executive Physical. The Executive Physical entails an extensive annual check-up with a large battery of tests, often requiring travel to another city. As a resident in Internal Medicine at Mayo Clinic and Cleveland Clinic, I helped conduct many of these evaluations. It turns out, many patients use their Executive Physical doctor in lieu of a primary care doctor at home. This leads to poor continuity of care: a different doctor is often assigned at each Executive Physical, and between physicals you live in another city. Consequently, the doctor who knows you best is not present when you need them the most - when you become ill between physicals.

Concierge Medicine and continuity of care

The Concierge Model of medicine is the ideal practice model for optimal continuity of care. I have adequate resources to provide good continuity of care because my annual fee ensures a low volume of patients. I am incentivized to provide good continuity of care because it leads to better performance and better outcomes. Hence, there is no confusion in my practice: I am fully accountable for the continuity of your care. I try to excel in all three components of continuity of care:

Relationship continuity - An important principle of my practice is direct access - I do not delegate your care to subordinates. Knowing your personal values and preferences allows me to help you make better medical decisions. I am always there to bridge the gap between you and the rest of the medical world. I never shut down my communication devices. I rarely require phone coverage from other physicians.
I am reachable 24/7/365.

Information continuity - My staff and I have ample time to collect your health records. Old records are retrieved from prior doctors. Current records are retrieved from specialists and facilities. All are stored and maintained in an easily accessible, user-friendly electronic medical record.

Management continuity - My practice is low volume, and managing a low volume of cases is always easier than managing a high volume of cases. Nevertheless, I employ other tools to ensure follow-through, such as direct access, which reduces the number of subordinates in the chain of communication, and a user-friendly electronic medical record with automated reminders and drug interaction checkers, which keeps track of events.
https://www.iwoolf.com/post/concierge-medicine-and-continuity-of-care
How Open-Source Robotics Hardware Is Accelerating Research and Innovation Erico Guizzo for IEEE Spectrum: The latest issue of the IEEE Robotics & Automation Magazine features a special report on open-source robotics hardware and its impact in the field. We’ve seen how, over the last several years, open source software—platforms like the Robot Operating System (ROS), Gazebo, and OpenCV, among others—has played a huge role in helping researchers and companies build robots better and faster. Can the same thing happen with robot hardware? It’s already happening, says robotics researcher and RAM editor-in-chief Bram Vanderborght, who explains that building hardware has gotten much easier thanks to things like 3D printers, laser cutters, modular open electronics kits, and other rapid prototyping and fabrication techniques. And while “open-source robotics hardware is taking longer to catch on” compared to open-source robotics software, he notes that “several impressive examples exist, taking advantage of benefits of those novel rapid prototyping possibilities.” Making robotics hardware more affordable, versatile, and “standardized” is hugely important for the field, as Aaron Dollar, Francesco Mondada, Alberto Rodriguez, and Giorgio Metta, who guest edited the special issue, explain: In the field of robotics, there has existed a relatively large void in terms of the availability of adequate hardware, particularly for research applications. The few systems that have been appropriate for advanced applications have been extremely costly and not very durable. For those and other reasons, innovation in commercially available hardware is extremely slow, with a historically small market and expensive and slow development cycles. Effective open-source hardware that can be easily and inexpensively fabricated would not only substantially lower costs and increase accessibility to these systems, but would drastically improve innovation and customization of available hardware. 
https://www.roboticstomorrow.com/story/2017/03/how-open-source-robotics-hardware-is-accelerating-research-and-innovation/9738/
Cracker Barrel was ordered by a jury to pay $9.4 million over a lawsuit involving a Tennessee patron who drank what he believed to be a glass of water, but which turned out to be a cleaning fluid called Eco-San. The jury awarded $4.3 million in compensatory damages and $5 million in punitive damages, according to a statement from the man's attorney. Cracker Barrel didn't immediately return a request for comment. The award may be reduced, however, due to a $750,000 cap on damages under Tennessee law. The incident occurred in 2014 when William Cronnon stopped at a Cracker Barrel for lunch in Marion County, Tennessee, and the waitress refilled his glass with what he believed to be water. Instead, it was a mixture of water and Eco-San, which is a commercial-grade bleach, attorney Thomas Greer wrote in the statement. Cronnon went to the ER for treatment, and afterwards developed gastrointestinal issues, including cramping and reflux pain after meals. According to Greer, Cracker Barrel allegedly "used unmarked water pitchers to mix water and Eco-San together, and then soaked parts of the soda machine in that mixture in order to clean them." While Cronnon was in the ER, Cracker Barrel's corporate office reportedly faxed a safety data sheet for Eco-San to the medical facility, which the attorney claimed indicated that the restaurant chain "knew immediately after the incident exactly what happened." Cronnon's injuries have been severe enough that he isn't able to work, Greer said. Cronnon's career had been in textile factories, the statement added.
MWCC Events Calendar

Event Details: World AIDS Day Contest
Start Date: 11/23/2020, 8:00 AM
End Date: 12/13/2020, 11:45 PM
Event Description: World AIDS Day is observed on December 1st each year. The purpose of the day is to increase HIV awareness and knowledge in an effort to end the HIV epidemic.
To be entered into the contest to win an STD Prevention Kit, simply view the video below and visit one or more of the links to educate yourself about HIV, and then email Kate Kusza at [email protected] with an STD prevention fact by 12/13/20. https://www.aidsprojectworcester.org/ https://www.cdc.gov/std/prevention/default.htm https://youtu.be/FJ1QFyLy1aI
Hosted by: MWCC Student Life Office
Online Location: https://youtu.be/FJ1QFyLy1aI
Additional Information can be found at: https://mwcc.campuslabs.com/engage/event/6620584
LOCATION: Online
https://calendar.mwcc.edu/EventList.aspx?fromdate=11/29/2020&todate=11/29/2020&display=Day&type=public&eventidn=5671&view=EventDetails&information_id=28313
Chair Wilkin, Vice Chair White, Ranking Member Sweeney, and members of the House Government Oversight Committee: Thank you for the opportunity to submit this testimony. My name is James Broughel. I am a senior research fellow at the Mercatus Center at George Mason University and an adjunct professor at the Antonin Scalia Law School. My research focuses on regulatory institutions, economic analysis of regulations, and the effects of regulations on economic growth. My testimony today centers on Senate Bill 9 (SB 9), which is currently being considered by this committee. Specifically, I have three main points to convey:

- Regulations have unintended consequences. These include many regressive effects that disproportionately burden the most vulnerable Americans. Regulations can also increase mortality, which is a hard-learned lesson from the ongoing pandemic.
- Some states are reviewing their regulatory codes, either in response to the pandemic or as a more general good-housekeeping reform. As part of these efforts, some states have set reduction targets in the range of 25 to 33 percent in recent years.
- Regulatory reductions in this range are achievable, given recent experiences in states such as Idaho and Missouri, with no apparent adverse effects on safety or welfare.

The Unintended Consequences of Excessive Regulation

It is now well known that the process of regulatory accumulation—the buildup of administrative rules over time—can stunt economic growth and lower living standards below what they would otherwise be. A coauthor and I recently conducted a review of peer-reviewed studies that rely on measures of regulation constructed by the World Bank and the Organisation for Economic Co-operation and Development, and we found an apparent consensus that regulations that affect the entry of new firms into an industry and regulations with anticompetitive product and labor market effects are generally harmful to productivity and growth.
Unintended consequences of regulation extend into the realm of health, as became apparent in 2020. Regulations in a wide variety of areas have had to be relaxed or suspended in order to facilitate the public health response to the COVID-19 pandemic. Examples of such waived or relaxed regulations include (a) regulations that restrict telehealth services by preventing medical professionals from meeting with their patients virtually; (b) occupational licensing and scope-of-practice regulations, which restrict who can work in certain professions and what services they can provide; (c) regulations governing clinical laboratories, which determine who can perform diagnostic tests, such as COVID-19 tests; and (d) certificate-of-need laws, which require healthcare providers to seek permission from the government before they offer new services or expand or build new facilities. Had these regulations not been rolled back during the emergency, the devastation from the pandemic would likely have been far higher. Beyond the pandemic, regulations have other unintended consequences that affect incomes and health. A recent report from the Mercatus Center, which was based on underlying peer-reviewed studies, finds that the increase in federal regulations from 1997 to 2015 is associated with 236,454 more people living in poverty in Ohio, 3.6 percent higher income inequality in the state, 287 fewer businesses annually, 4,508 lost jobs annually, and 7.35 percent higher prices (see the attachment to this testimony). These unintended consequences affect ordinary citizens, and they can even increase health and safety risks inadvertently. Compliance costs from regulations reduce business profitability, and these losses are passed on to workers in the form of lower wages and to customers in the form of higher prices. By extension, families have less income to spend on doctor’s visits, safer vehicles, or living in more secure or less polluted neighborhoods. 
Across society, some risks inevitably rise as growing regulatory burdens push incomes down. When regulatory costs rise enough, one can expect more deaths to occur than otherwise would, because the assortment of rules increases risks for some hardworking Americans who are on the margins. Recent research suggests that for each $40 million to $110 million or so in regulatory costs, there will be one expected death owing to this impoverishment effect. Relatedly, as federal regulation of states’ economies rises, so does state mortality, even after controlling for other factors that explain mortality.

Regulatory Rollbacks in the States

The fact that so many regulations have had to be rolled back to protect public health during the pandemic raises the question of whether these suspended or relaxed regulations ever made sense to begin with, even during normal times. It is not surprising, therefore, that governments are engaging in reviews of regulations waived or suspended during the pandemic. In Arizona, Governor Doug Ducey signed an executive order in early 2021 directing state agencies to conduct a comprehensive review of regulations suspended during the COVID-19 emergency to determine whether the suspensions should be made permanent. In Idaho, Governor Brad Little signed an order that requires regulators to initiate rulemakings to remove regulations waived during COVID-19. These reviews form part of a broader state regulatory reform movement. Even before the pandemic hit, a wave of regulatory reforms was sweeping the states, as states such as Virginia and Idaho were making substantial headway at trimming regulatory clutter that had accumulated over decades. Ohio has been part of this movement to some extent with the passage of its one-in, two-out provision in 2019. Regulatory agencies involved in efforts to cut red tape need goals so that they have something to aspire toward and so that they know when they have succeeded.
Thus, every regulatory reform should have some goal in mind. A regulatory reform without a goal is like a ship captain sailing aimlessly, with no destination and no course charted. Determining the appropriate goal is ultimately a political decision. Several factors, however, can inform the decision as to how much red tape is the appropriate amount to cut. In SB 9, Ohio legislators have proposed a 30 percent reduction target, which would put the state’s count of regulatory restrictions (274,000 as of 2020) closer to—but still above—that of neighboring Pennsylvania (163,000), and still substantially above the average state’s count of 135,000. A 30 percent reduction goal may sound large, but it is similar to goals in other jurisdictions, including British Columbia (33 percent), Kentucky (30 percent), Missouri (33 percent), Oklahoma (25 percent), and Virginia (25 percent). One factor to consider is whether the same reduction target should apply broadly across the whole government or whether each agency should have to meet a unique target. Some states, such as Idaho and Missouri, have achieved substantial reductions in their regulatory codes in aggregate (see table 1), but the reductions vary greatly by agency. A single, across-the-board reduction target can be made more flexible with an average goal that is exceeded at some agencies but not at others. Or there could be a process whereby agencies can petition to be exempt from meeting a target (similar to the process SB 9 would create, whereby agencies can appeal to the joint committee on agency rule review for their reduction requirement to be lessened). Some states, such as Virginia, have set targets that apply only to discretionary regulations—that is, regulations that can be amended or repealed without further legislative changes—and have identified backup enforcement mechanisms if targets are missed.
A 30 percent reduction goal such as the one SB 9 creates may sound ambitious, but several states have achieved reductions in this range in recent years. Table 1 presents the top six states to have reduced regulatory counts in recent years. Notably, Idaho and Missouri saw the biggest reductions in regulatory restrictions in percentage terms, and both of these states have attempted to reduce red tape using a regulatory restrictions metric to help guide their efforts. Idaho saw a 37 percent reduction in regulatory restrictions, and Missouri saw a 30 percent reduction. Kentucky also instituted an effort to cut red tape under its previous governor, Matt Bevin, which explains why the state saw the fourth-largest percentage reduction in the country. Nebraska had the sixth-largest reduction, following a regulatory reform executive order from the governor in 2017.

Deciding What to Measure

The choice of what measure to use to guide a state’s regulatory reform effort is an important one. Any effort to cut red tape should start with a measure of regulation in mind so that reformers can track their progress. Some may question a state’s decision to set a reduction goal based on counts of restrictive terms. Tradeoffs inevitably arise between simple and more complicated metrics. A complicated measure, such as regulatory cost, could be hard to apply broadly, since relatively few policies have credible cost estimates. A simpler measure, like restrictive term counts, may only roughly approximate the true regulatory burden, but it can easily be applied to a wide swath of law. The optimal tradeoff might be to use simple measures applied broadly to as many laws as possible and to supplement them with more complicated measures on a case-by-case basis (for example, for some of the largest individual regulations). Ohio already has experience reporting counts of regulatory restrictions by department, as doing so was a requirement in the 2019 budget.
Many regulatory departments have already reported their base inventories of regulatory restrictions, meaning that Ohio is well positioned to move forward with further regulatory reforms. The base inventory reports contain meaningful information about departments’ regulatory requirements. Real people are taking time to look up individual restrictions and explain their purpose. An oversight authority, such as the Joint Committee on Agency Rule Review or the Common Sense Initiative, can continue to ensure that reporting contains meaningful information, which is likely why these bodies have been assigned oversight roles in SB 9.

Conclusion

Regulatory agencies seeking to cut red tape need a concrete measure of regulation to track their progress, and they need a goal so that they have something to aspire toward and so that they know when they have succeeded. Just as a ship captain needs a compass, a red tape cutter needs a guide for his or her journey. To continue the ship analogy, a regulatory reform without a goal is like a captain sailing without a course charted to a destination. Ohio is already on the path to meaningful regulatory reform. Legislation being considered before this committee would continue Ohio down that path. Thank you for the opportunity to submit this testimony. I am happy to answer any questions you may have.

Attachments

Dustin Chambers and Colin O’Reilly, “The Regressive Effects of Regulations in Ohio” (Mercatus Policy Brief, Mercatus Center at George Mason University, Arlington, VA, December 2020).
https://www.mercatus.org/research/state-testimonies/setting-sensible-reduction-target-ohios-administrative-rules
This site is provided as a service for the members of DIA. DIA is not responsible for the opinions and information posted on this site by others. We disclaim all warranties with regard to information posted on this site, whether posted by DIA or any third party; this disclaimer includes all implied warranties of merchantability and fitness. In no event shall DIA be liable for any special, indirect, or consequential damages or any damages whatsoever resulting from loss of use, data, or profits, arising out of or in connection with the use or performance of any information posted on this site. Do not post any defamatory, abusive, profane, threatening, offensive, or illegal materials. Do not post any information or other material protected by copyright without the permission of the copyright owner. By posting material, the posting party warrants and represents that he or she owns the copyright with respect to such material or has received permission from the copyright owner to post the material, including any text, video, photo, audio, or any content (the “Materials”). In addition, the posting party grants DIA and users of this site the nonexclusive right and an irrevocable, perpetual, and royalty-free license to display, copy, publish, distribute, transmit, print, modify, edit, prepare derivative works of, and otherwise use such Materials and any information contained therein for any purposes consistent with DIA’s mission. DIA does not actively monitor the site for inappropriate postings and does not on its own undertake editorial control of postings. However, in the event that any inappropriate posting is brought to the attention of DIA we will take all appropriate action. DIA reserves the right to terminate access to any user who does not abide by these guidelines. The servers DIA uses to collect and store information reside in the United States. 
Thus, by voluntarily providing information through the DIA website or other medium, you consent to the transfer of your information, including personal information, to the United States. The information that DIA receives, and how we use it, depends on what you do when visiting our website. DIA collects and uses your non-personal information (information that is not identifiable to you personally) differently than your personal information. The DIA website automatically collects certain non-personal information from its visitors, such as the name of your Internet service provider and the Internet Protocol (IP) address through which you access the Internet, the date and time you access the website, the pages that you view while browsing the website, browser types and versions, geographic information, device use, and the Internet address of any third-party website from which you linked directly to our website. This information is used to help improve the DIA website, personalize your experience, analyze trends and administer the website. All guests to the website can use the open portions anonymously. We may track the number of users who visit areas of the website for internal use, but this tracking will not identify users. The DIA website will prompt you to voluntarily provide personal information if and when it is needed by DIA to provide a service or conduct a transaction that you have requested, such as registering as a member to gain access to members-only areas of the website or the personalized features of the website, ordering publications or registering for courses and webinars, downloading DIA products, accessing My Transcript to request credit for participation in an educational program, downloading a statement of credit, submitting information, joining the discussion boards and online forums, making contributions to DIA and communicating with DIA through email. 
The types of personal information that you may be asked to provide to DIA include your name, home, business or other mailing address, title, company or organization, telephone number, mobile number, fax number, email address and credit card information.
• to enforce this Privacy Statement and the other rules regarding use of this website.
DIA may disclose personal information if required to do so by law or in the good faith belief that such action is necessary to comply with legal process, protect the rights of DIA and its website or, in certain circumstances, to protect the health, safety or welfare of DIA or its employees, users of DIA’s products and services or members of the public. The security of personal information is important to DIA, and DIA employs various security measures to protect against the loss of information that is collected through the DIA website and other means. However, those providing personal information to DIA should keep in mind that the DIA website, network, and information management system are run on software, hardware and networks, any component of which may, from time to time, require maintenance or experience problems or breaches of security. No method of transmission over the Internet or method of electronic storage is one hundred percent secure, and we cannot guarantee absolute security. Users of the website are solely responsible for maintaining the confidentiality of their username and password and are responsible for any unauthorized use. DIA will not sell, rent, exchange, publish or otherwise share your personal information with any third parties except as otherwise described in this Privacy Statement. In the ordinary course of business, DIA may engage third parties to provide services on our behalf, such as website hosting, packaging, mailing and customer service functions.
DIA will only provide those companies the personal information necessary to perform the service, and they are required to maintain the confidentiality of such information and are prohibited from using that information for any other purpose. Member names and business contact information are made available to other members via DIA online communities. You also may provide information to be published or displayed on DIA discussion boards, online forums, online communities, or other public areas of the website, or transmitted to other users of the website or third parties (collectively, “User Contributions”). You provide User Contributions and transmit them to others at your own risk. We cannot control the actions of other users of the website with whom you may choose to share your User Contributions. Therefore, we cannot and do not guarantee that your User Contributions will not be viewed by unauthorized persons. You may review and update the information and contact preferences you provided to DIA through the website by visiting the customer profile area or contacting us at the email address below. In the customer profile area, you may view and edit your personal information and opt out of DIA mailings and other marketing information. Please note that some non-marketing communications, such as product download and sales transactions, are not subject to general opt-out. Members and other customers have the option to determine how they receive their various communications from DIA (i.e., whether they prefer to receive communications at their home email address rather than their work email address, etc.). After registration, members can change how they wish to receive their membership benefits, and other customers can change how they wish to receive communications from DIA, through the Manage My Subscriptions feature of their customer profile. We may use a web analytics service, such as Google Analytics, to record and analyze your activity on this website.
The website or any such service may track your browsing across web sites that use the same service. The DIA website may include social media features such as the Facebook like button, widgets or interactive mini-programs that run on our website. These features may collect your IP address, browsing information and may set a cookie to enable the feature to function properly. Your interactions with these features are governed by the privacy policies of the parties providing them. Our website is intended for adults, such as our members. DIA does not knowingly collect any personal information from children under the age of 13. Please contact DIA at [email protected] if you suspect that DIA has collected any such information.
https://communities.diaglobal.org/codeofconduct
A few weeks ago Vanit from Vanit Studios released the 1.1 version of the Sin and Punishment translation to English. As the author comments: In this minor second release I have changed the following:
- Updated the tutorial in the readme
- Added the dlls people were emailing me about saying it asked for them - see the tutorial below [in the *.rar file] for more info.
- Turns out I left 3 lines untranslated in the Training tutorial and I have put them in this time round. Thanks to everyone who emailed me about it.
I hope you enjoy this “bundled” new v1.1 release. Staff note: You can get it here as well. [ 0 COMMENTS ] Relevant Link

Karnov (NES) translation released 09 May 2006 7:45AM EST - Update by Kitsune Sniper Translations News
I’ve just released my translation of Karnov, for the Famicom. The script was translated by Eien Ni Hen, and I got help from RedComet in locating the pointers. There’s not much to say, other than that the game has three endings, of which I could only get one. If you have any problems, read the readme file for instructions. Staff note: You can get the patch right here, from our database. [ 9 COMMENTS ] Relevant Link

Shining Force III Premium Disc Translated! 09 May 2006 1:45PM EST - Update by lockshaw13 Translations News
knight0fdragon of the Shining Force Central forums has completed an English translation of the Shining Force III: Premium Disc for the Sega Saturn. He is also supposed to be working on a translation of Shining Force III: Scenario 2 in English as well. [ 9 COMMENTS ] Relevant Link

Phantasian Productions New Project – Tales of Destiny 2 01 May 2006 1:40PM EST - Update by Cless Translations News
This is the real Tales of Destiny 2, only released in Japan on PlayStation 2, not to be confused with the US Tales of Destiny II for PlayStation 1, which is actually Tales of Eternia. It’s been nearly four years since it was released and Namco USA seems intent on leaving it in Japan, so we’re stepping up to it.
At this time, I’m in the process of working on a menu patch, or more specifically, all or most of the text in the game’s main executable file. It’s coming together rather quickly – I have a complete double-byte table already and should very soon have an awesome text dumper thanks to _Bnu. For now I intend to translate the simple one- or two-word list things myself, but I would really appreciate any volunteer help to handle more long-winded things like item descriptions. I see it being possible that this could turn into a full localization at some point after doing some more reverse-engineering of the data files. Plus, I’ve found that the game uses the exact same compression format as Tales of Phantasia PSX, so that’s one huge hurdle down. [ 19 COMMENTS ] Relevant Link

Gameboy Wars 3 Translation, First Release 27 April 2006 1:38PM EST - Update by akadewboy Translations News
After a few days of work, the main menus of Gameboy Wars 3 have been replaced with English. This is just the first release; more to come in the future. Staff Note: Hopefully there isn’t a patch release following the translation of every single screen in the game. But, you can get your menu patch here on RHDN as usual. [ 0 COMMENTS ] Relevant Link

Final AMC version 0.95 Released 24 April 2006 5:32PM EST - Update by Nebelwurfer HQ Translations News
Nebelwurfer HQ has released the final version of their Genesis Advanced Military Commander Translation, v0.95. This version has all scenario and end game text translated. The readme file also has instructions for how to get into the hidden Scenario Editor and Map Editor modes. Note: You can get the patch from the author’s homepage or directly from our database. [ 4 COMMENTS ] Relevant Link

Esparks, Double Moon, and a relaunch! 24 April 2006 7:33PM EST - Update by Bongo` Translations News
After a brief hiatus, my website is back up and running.
I decided to announce two projects which have a fair amount of hacking done and only need a translator to get rolling. The first project is Esparks, a great little Zelda-style adventure game. I’ve got an English font inserted, and have figured out the text compression and coded a working decompressor and recompressor. The other is Double Moon Densetsu, a NES RPG in the style of DragonQuest. I have a few translated script files but I don’t know who translated them and would really like to know who did so that I can get their permission to use them. More details and info can be found on my site. [ 0 COMMENTS ] Relevant Link

Wizardry 6 Bane of Cosmic Forge Translation 24 April 2006 8:06PM EST - Update by TiCo. Translations News
I have released a Wizardry 6 Bane of Cosmic Forge translation. It’s currently at version 0.3beta and features Castle Levels playable in English. The patch currently only works in SNES9x. Staff note: You can download the patch from our site, or the link below. [ 0 COMMENTS ] Relevant Link

KingMike Celebrates 5 Years 17 April 2006 9:09PM EST - Update by KingMike Translations News
KingMike celebrates the fifth anniversary of the launch of his site. There is an under-construction redesign, as well as the release of two patches. One is for Shell Monsters Story for the NES. This will repair an infinite text loop bug after saving a game. The next one is a completed translation of Deep Dungeon - The Heretic War, also known as Deep Dungeon 1, for the Famicom Disk System. [ 7 COMMENTS ] Relevant Link

AGTP Announces... a lot. 16 April 2006 7:08AM EST - Update by Neil Translations News
Gideon Zhi has announced a boatload of projects that he’s been working on, and has updated his page with project pages for them with screenshots and notes.
- 3×3 Eyes: Juuma Houkan: in the process of getting a properly formatted script dump
- Actraiser: script dumping phase
- Adventures of Hourai High: inherited this from satsu, who remains on as translator
- Ancient Magic: currently undergoing script translation
- Dark Law: needs ASM mods; the script needs to be redumped and reformatted
- Dark Lord: in cleanup mode to prepare for beta testing
- Dragon Squadron Danzarb: looking for a translator
- Esper Dream 2: has a smaller font and a translator
- Ganbare Goemon 2: has a translated script, still needs to be inserted
- Guardian of Paradise: currently in beta testing
- Ladystalker: currently being translated
- Lagrange Point: needs a custom inserter coded
- Majin Tensei: working on demon dialogue currently
- Majin Tensei 2: preliminary work done; will pick up more on it when Majin Tensei’s done
- Metal Max 2: another formal announcement of a Help Wanted project
- Madara 2: compression tools are ready, waiting on Madara 1’s completion
- Mystic Ark: currently being translated, nearly done
- Snoopy Concert: script has been translated, needs some reformatting
- Super Robot Wars 4: script has been translated, needs some editing
- Sutte Hakkun: script currently being translated
- Tactics Ogre: using the official script from the PlayStation release

Screenshots, details, and a tireless hacker can be found at the link below. Updated 4/16/06 11:50AM EST: This post was updated with clarification on Dark Law/Dark Lord status.
http://server.romhacking.net/?page=news&category=3&startpage=76
Scores are defined in XML documents. This article details the format of XML files that hold the definition of a score.

<ScoreDef UID="[<unique identifier>]" Name="[<name of the score>]"

The attributes of the ScoreDef element are, in order:
- UID: a universally unique identifier for the score. It is a 128-bit integer number expressed in hexadecimal format with separators; tools exist to generate them with a low collision probability. Example: f54972e7-0f9c-46ce-8931-bbe31b06f7b4.
- Name: the name of the score as shown in the Finder.
- The type of object to which the score applies. It can be either user or device.
- Whether the score should appear inside the User or Device view in the Finder. Note that the User and Device views can each display a maximum of five scores. Possible values are true or false.
- Whether the score should be calculated or not. Possible values are enabled or disabled.
- The version of the schema for the XML file. Currently fixed to 1.
- The version of the Nexthink Data Model on which the score relies for its queries. Currently fixed to 12.

Once the general elements of the score definition are laid out, add a single composite or leaf score element to the definition. This last element describes how to compute the score from the values stored in the Nexthink database. If you add a LeafScore element to your ScoreDef, the score will depend on one computation or on the value of one field only. On the other hand, if you add a CompositeScore element to the ScoreDef, a combination of several scores is used to compute the main score. As a matter of fact, a composite score is itself composed of other composite or leaf scores, forming a tree of up to five levels. Ultimately, all the nodes at the lowest level of the tree must be leaf scores. CompositeScore and LeafScore are the elements in the score definition that are used to compute individual scores. Both take the following:
- A universally unique identifier for the score (similar to that of the ScoreDef element).
- The name of the score.
- A textual description of the score.
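To make the layout concrete, here is a minimal skeleton of a score definition. This is a hedged sketch: only the UID and Name attribute names appear in this article, so the remaining ScoreDef attributes (object type, Finder visibility, enabled state, schema version, data model version) are left as a comment rather than guessed; the score.xsd schema file is the authority on their exact names.

```xml
<!-- Hypothetical minimal skeleton of a score definition file.
     Only UID and Name are named in this article; the other ScoreDef
     attributes (object type, visibility, enabled state, schema version 1,
     data model version 12) are elided and must be taken from score.xsd. -->
<ScoreDef UID="f54972e7-0f9c-46ce-8931-bbe31b06f7b4"
          Name="Example score">
  <!-- exactly one CompositeScore or LeafScore element goes here -->
</ScoreDef>
```

Validating such a file against the exported score.xsd schema will immediately reveal any attribute that is missing or misnamed.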
- Optional: whether the score should be visible in the Finder, nowhere, or only in quantity metrics in the Portal. Possible values are visible, hidden, and visible only in quantity metrics. By default, a score is visible everywhere.
- Optional: a floating-point number with up to two decimal places that acts as a multiplier of the computed score when its immediate composite parent score performs a weighted average operation (see how a composite score is computed below).
- Links to external HTTP resources (e.g. knowledge base, external documentation, etc.).
- Links to remote actions which can be manually triggered, to take appropriate action on devices that display a poor score. See how to document scores for the complete reference of the Document element.

<CompositeScore UID="[<unique identifier>]" Name="[<name of the score>]"

A composite score combines its direct child scores with one of the following operations:
- Compute the arithmetic mean of the direct child scores.
- Get the minimum direct child score.
- Get the maximum direct child score.
- Compute the addition of the direct child scores.
- Compute the arithmetic mean of the direct child scores after multiplying each child score by the quantity specified in its Weight attribute.
- Compute the multiplication of the direct child scores.

The list of child scores is limited by the maximum number of scores that you can simultaneously enable. Remember, however, that this limit includes the nested scores and that the maximum nesting of scores is 5 levels. A leaf score is the result of applying a normalization procedure to an input value coming from the Nexthink database. The input is either the value of a field that belongs to the user or device objects or the result of a computation on devices or users expressed in the NXQL language.

<LeafScore UID="[<unique identifier>]" Name="[<name of the score>]"

In its turn, the Input element encloses either a Field or a Computation element. Note the use of the keyword NULL as the default output value in the previous query.
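As an illustration of the tree structure, the sketch below combines two leaf scores under one composite score. It is hypothetical: the UIDs and names are invented, the declaration of the combining operation (for example, the weighted average) is not shown in this article and is therefore elided, and only the Weight attribute is taken from the description above.

```xml
<!-- Hypothetical sketch: a composite score with two leaf children.
     How the combining operation is declared is not covered in this
     article, so it is omitted here; check score.xsd for the real syntax. -->
<CompositeScore UID="11111111-1111-1111-1111-111111111111"
                Name="Device health">
  <LeafScore UID="22222222-2222-2222-2222-222222222222"
             Name="Boot speed" Weight="2.0">
    <!-- Input and normalization elided -->
  </LeafScore>
  <LeafScore UID="33333333-3333-3333-3333-333333333333"
             Name="Web performance" Weight="1.0">
    <!-- Input and normalization elided -->
  </LeafScore>
</CompositeScore>
```

Under a weighted average operation, the Weight attributes above would make Boot speed count twice as much as Web performance in the parent score.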
Whenever it does not make sense to return a value for the score if the underlying field or aggregate is undefined, write NULL as the default output value. In the previous example query, the ratio of HTTP requests is undefined if the device made no HTTP requests. Rather than force an artificial value for the score, it is preferable to return no value. A leaf score with no value that is part of a composite score is discarded for the computation of the composite score. If a composite score or a main score cannot be computed because of a lack of underlying values, it has no value as a result. The Finder displays a dash sign (-) for scores with an undefined value. In the case that the input to a leaf score actually gets a proper value, transform it to give it business significance by means of a normalization function. The purpose of scores is indeed to make sense out of the detailed measures in the Nexthink database. Normalize quantities, ratios, enumerations, or even strings to a numerical range, usually the range from 0 to 10, that makes it easier for you to understand the status of a user or device with respect to the measured input. For example, a simple threshold normalization might assign:
- A score of 0 for an input value between 0% and 60% (not included).
- A score of 5 for an input value between 60% and 80% (not included).
- A score of 10 for an input value equal to or greater than 80%.

A finer-grained normalization interpolates within ranges:
- For an input value between 0% and 60%, the score ranges from 0 to 2.
- For an input value between 60% and 90%, the score ranges from 2 to 9.
- For an input value between 90% and 100%, the score ranges from 9 to 10.
- For an input value equal to or greater than 100%, the score is 10.

As shown in the figure, linear interpolation is performed within each interval. For instance, an input value of 30% would receive a score of 1. Note that the Value and Score attributes of a To element must be equal to those of the From element defined in the next range for the piecewise linear function to be continuous.
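The piecewise linear normalization described above can be written as a sequence of ranges. The following is a hedged sketch: the From and To elements with their Value and Score attributes come from the text, while the names of the enclosing elements (Input and the range container, written Range here) are assumptions to validate against score.xsd.

```xml
<!-- Hypothetical sketch of a leaf score with piecewise normalization.
     The enclosing element names Input and Range are assumptions. -->
<LeafScore UID="44444444-4444-4444-4444-444444444444"
           Name="HTTP success ratio">
  <Input>
    <!-- a Field element or a Computation element (NXQL query) goes here -->
  </Input>
  <!-- 0-60% maps linearly to scores 0-2, 60-90% to 2-9, 90-100% to 9-10 -->
  <Range><From Value="0"   Score="0"/><To Value="60"  Score="2"/></Range>
  <Range><From Value="60"  Score="2"/><To Value="90"  Score="9"/></Range>
  <Range><From Value="90"  Score="9"/><To Value="100" Score="10"/></Range>
  <!-- the last range defines no To element, so the input is not capped -->
  <Range><From Value="100" Score="10"/></Range>
</LeafScore>
```

Note how each To element repeats the Value and Score of the From element in the next range, keeping the piecewise linear function continuous; with linear interpolation inside the first range, an input of 30% yields a score of 1, as in the example in the text.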
The last range does not define a To element to avoid imposing a limit on the input value (in the case that our hypothetical ratio input could be higher than 100%). When matching strings, two wildcards are available:
- *, as a placeholder for 0 or more characters.
- ?, as a placeholder for a single character.

Matching is case-sensitive: Value="Ok" is not the same as Value="ok".

To export the score schema:
1. Log in to the Finder as a user with the right to manage scores.
2. Select Scores in the left-hand side panel of the main window.
3. Right-click anywhere in the Scores space to bring up a context menu.
4. Choose Export > Score schema to file... from the menu.
5. Select the location where to store the file in the dialog and press Save.

To write your own scores, take the existing scores that you can find in the Library as examples and validate the XML files that you create against this score.xsd schema file.
https://doc.nexthink.com/Documentation/Nexthink/latest/UserManual/ScoreXMLReference
This application seeks five additional years of funding to continue the Visual Neuroscience Training Program (VNTP) at the Johns Hopkins University School of Medicine. The VNTP is a joint program between the Wilmer Eye Institute and the Neuroscience Department at Hopkins, and it also includes participation from a number of other related graduate programs. Its goal is to recruit young, talented scientists into the visual neurosciences, and to provide them with broad theoretical and methodological research training that will allow them to contribute to our understanding of the neurobiology of vision and the pathological mechanisms responsible for visual loss in the context of human disease. Hopkins is fortunate to have a large number of investigators who study vision; their approaches range from the molecular and cellular to the systems levels, and the technologies they employ include cell biology, molecular biology, biochemistry, developmental neurobiology, electrophysiology, functional imaging, and psychophysics. The diverse nature of the vision research community at Hopkins provides a wide variety of research options for VNTP trainees. The VNTP currently accepts 2 predoctoral students per year, and supports them for 2 years each, and accepts 2 postdoctoral fellows per year, and supports them for 1 year each. The VNTP also organizes and provides vision-related courses, seminars, and related activities. In this renewal application, we propose to continue the basic structure of our existing training program, but to also add some new components. In order to provide additional training in the problems of clinical ophthalmology, with an emphasis on translational problem solving, the trainees will participate in the medical student ophthalmology minicourse.
In addition, in order to expose medical students to the excitement and opportunities of vision research, and hopefully to inspire them towards careers related to the study of vision and ophthalmic disease, we propose to establish a program to support 2 medical students per year working in a vision research lab during the summer between their first and second years of medical school. Through these programs, we hope to continue and expand upon the VNTP's success in recruiting and training the next generation of vision scientists and clinician scientists. PUBLIC HEALTH RELEVANCE: The goal of this program is to recruit young, talented scientists into the visual neurosciences, and to provide them with broad theoretical and methodological research training that will allow them to contribute to our understanding of the neurobiology of vision and the pathological mechanisms responsible for visual loss in the context of human disease.
This study extends the previous paper by Shamsul Nahar and Al-Murisi (1997) by examining the interactive effects of the variables in that paper and introducing other variables associated with corporate governance and political costs. The present study postulated that the percentage of external directors on the audit committee interacted with the presence of an accountant on the audit committee and with the number of years the audit committee has been in existence, respectively, to influence audit committee effectiveness. The study also posited that the interaction of the presence of an accountant on the audit committee and the number of years the audit committee has been in existence positively and significantly influenced audit committee effectiveness.
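One way to make the hypothesized interactions concrete is as a moderated regression of the kind common in this literature; the notation below is purely illustrative and not taken from the paper itself:

$$\mathrm{ACE} = \beta_0 + \beta_1\,\mathrm{ED} + \beta_2\,\mathrm{ACC} + \beta_3\,\mathrm{TEN} + \beta_4\,(\mathrm{ED}\times\mathrm{ACC}) + \beta_5\,(\mathrm{ED}\times\mathrm{TEN}) + \beta_6\,(\mathrm{ACC}\times\mathrm{TEN}) + \varepsilon$$

where ACE is audit committee effectiveness, ED the percentage of external directors on the committee, ACC an indicator for the presence of an accountant, and TEN the number of years the committee has been in existence. The stated hypotheses would then correspond to positive and significant interaction coefficients $\beta_4$, $\beta_5$, and $\beta_6$.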
https://repo.uum.edu.my/id/eprint/509/
As the Earth’s ecological systems upon which we depend accelerate in their slouch towards Bethlehem, our society faces an existential crisis. The effects of climate change are far more dire than we initially expected. Global atmospheric carbon dioxide concentrations have risen to 415 ppm for the first time in over three million years. The recent Intergovernmental Panel on Climate Change’s Special Report on Global Warming of 1.5°C warns that we face an increase in global average temperatures of 1.5 degrees C as soon as 2040. Furthermore, the two degrees of warming that scientists widely argue is the final major threshold before permanent, large-scale climatic shifts leading to ecological collapse is no longer some far-off possibility or hyperbolic fear-mongering, but an imminent reality. The future we face on this new Earth is marked by ever more frequent and intense fire and flooding, famine and disease, droughts and storms. Similarly, compounded by decreasing habitat availability from deforestation, overfishing, and resource overuse, climatic shifts are already being accompanied by staggering and consistent losses in biodiversity. Published last month, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services’ Global Assessment Report concluded that the second-fastest mass extinction event in planetary history is underway; the current rate of extinction is 100 to 1000 times greater than historical background rates. Over one million species are at risk of extinction within the next few decades. As they perceive the horsemen beginning to saddle up their mounts, many of my friends and colleagues in the environmental community have succumbed to anxiety, if not despondency. It is all too easy to become overwhelmed and unable to decide how to proceed given the enormity of the problem and the lateness of the hour.
Indeed, given their scale, global challenges like climate change and biodiversity loss have been called hyperobjects which humans cannot comprehend despite their pervasive effects that are and will be experienced by everyone in some way. We have been taught to be hopeful in the face of adversity, as evidenced by the preponderance of media offering reasons for optimism in the face of ecological catastrophe. Some argue that we only need to make simple changes in our personal lives that collectively will suffice to halt current trends of environmental degradation; we just need to give up eating meat, stop flying, or stop using disposable cutlery. Ethical consumerism promises that we can minimize our environmental impact by buying the right product. These small lifestyle changes have become moral imperatives that are increasingly being written into law. Others tout that we are on the verge of technological breakthroughs, such as fission power, electric cars, carbon capture, or any number of geoengineering solutions, that will address the problems we face. However, while the analgesic nature of these articles may briefly buoy our hopes that we still have an exit strategy to extract ourselves from our current crisis without substantive changes to our lifestyle, they serve as a red herring. First off, placing the onus of solving this colossal mess on individuals, rather than on the economic and political actors who created it to further their own gains, actively undermines efforts aimed at achieving necessary systemic changes, which cannot be achieved through individual action alone. Our ability to deal with these problems has not been limited by a lack of personal action. These are not glitches that can be solved by the economic and political systems that created them.
This is not to dissuade anyone from changing their behavior and consumption patterns, but their collective impact would be dwarfed and rendered negligible by the negative impact of corporations and industries over which we have no control, whose responsibility is to their shareholders rather than humanity, and which often operate outside the rules and regulations we have collectively established to protect society. By propagating the fairy tale that we bear a responsibility to clean up their mess and have the capacity to do so, these very groups have delayed the necessary and systemic changes that would make a difference, precisely because such a shift would undermine their power and profit. Furthermore, this implicit deceit has been complemented by a more explicit and overt campaign to spread misinformation, foment skepticism regarding environmental research, and undermine legislation that would begin addressing environmental issues. Second, the touted solutions are often unlikely to be effective. We have reached the point where the feckless environmentalism of anodyne half-measures will not suffice. Many technological solutions are unlikely to be achieved soon enough to avert the worst of climate change and environmental degradation. Furthermore, most have their own set of socioeconomic and environmental problems that undermine their sustainability. Similarly, even if a substantive proportion of individuals were convinced to make personal lifestyle changes to minimize their environmental impact, which is highly questionable (if only given the time frame in which we need to develop a solution), feedback loops in environmental processes will result in continued climatic changes even if we were to cease all anthropogenic carbon emissions immediately. The problem with hope is that it is fickle. It can give way from beneath your feet, allowing you to fall into despair.
In fact, despair’s etymological roots can be traced to the Latin words for “down from” and “hope.” In the absence of stable footing, we are left unmoored and adrift as fear creeps in. The antidote to this fear is courage, which is not the lack of fear, but having the conviction to strive forward despite it. We may be less prone to falling into despair and inaction if we embraced the absurdity of this situation rather than depending on the hope of emerging from it victorious. This is not to capitulate to the fatalistic laziness of nihilism or denial. In fact, what we do matters very much. None of this is to say that we shouldn’t recycle, subscribe to renewable energy suppliers, or take public transport to work rather than drive. However, what matters is not the impact of the doing but the doing itself. In Greek mythology, King Sisyphus was punished by the gods for trying to cheat death, cursed to roll a boulder up a mountain for eternity, only to have it fall just before he reached the summit. Sisyphean tasks are those which are likely doomed to failure, regardless of our best efforts, and which conventional wisdom says should be avoided at all costs given their hopelessness. However, rather than give up, yielding to the inevitability of defeat, Albert Camus suggested that we imagine Sisyphus happy as he strides back down the mountain to bear his load again. This is how he can liberate himself from the oppressive shackles of his fate. If we forgo expectations of having a meaningful impact, what is left is the work itself and the satisfaction of doing a good thing well. Sisyphus’ curse is self-imposed by despising rather than accepting the situation’s intractability. Camus argues that the way victory can be achieved in this unwinnable situation is to revolt against the assumption that this failure was an outcome to be dreaded, saying, “There is no fate that cannot be surmounted with scorn.” As a result, we become resilient to the challenges and failures that we will inevitably encounter upon this path.
Of course, there are risks associated with this approach as well. The systemic changes that are necessary will be difficult and frightening. Accepting the sacrifices that must be made as a result of this systemic change will be a bitter pill to swallow. They are likely more difficult and less comfortable than making small changes that are ultimately of little consequence. We must accept that this is ultimately apostolic work for which we will likely pay a high price in the short term without any guarantee of seeing the long-term results. It can be tiresome to pursue the seemingly insurmountable, making it all the more important to care for oneself. Camus’ novel The Plague describes a deadly epidemic that decimated the population of a small town. The residents worked indefatigably to care for the sick and dying, despite the lack of any reasonable hope of success. Amidst the chaos, two of the main characters went for a leisurely, restorative swim before returning to the fray. The lesson here is to learn how to rest rather than give up. As is summarized by the novel’s narrator, the story of The Plague “could not be one of a final victory. It could be only the record of what had to be done, and what assuredly would have to be done again in the never ending fight against terror and its relentless onslaughts, despite their personal afflictions, by all who, while unable to be saints but refusing to bow down to pestilences, strive their utmost to be healers.” But let’s strip away the pretense of philosophical jargon to conclude, instead using the plain, unassuming language of the plain, unassuming folks who have always done the lifting and will need to do so again now. The challenge that lies before us seems difficult at best and insurmountable at worst; it may not be any easier or more likely to end in success if we take it together. However, though the hour may be late, it’s never too late to express the goodness that is within each of us.
Even if this world is failing, we can still plant the seeds of a new one in the shell of the old because it’s the only thing we can do and because it will be more fun that way, if nothing else. Let’s put our hands in the earth and our shoulders to the wheel. Let’s live up to the standards we set for each other and forgive one another when we fail. Let’s cultivate new relationships with one another and the land that honor the dignity of both. Let’s take it easy, but take it.
https://www.resilience.org/stories/2019-07-05/abandon-all-hope-moving-toward-an-existentialist-environmentalism/
The new adidas Torsion X arrives in the “Core Black” colorway, presenting original details and an innovative design. The back is composed of an imposing heel in an iridescent hologram finish that gives the shoe a futuristic look. The upper, composed of layers of textile and synthetic materials, features a transparent tube that runs through the lacing area and contributes to the avant-garde aspect of this version, complemented by Torsion technology in the sole to ensure stability with each step. Details:
https://www.sivasdescalzo.com/en/adidas-torsion-x-fv4551
While concrete driveways look nice when they are new, over time the concrete can chip, crack and crumble. Cracks and potholes form due to the freezing and thawing of water that has seeped under the driveway through smaller cracks, weed or grass growth in small cracks, and general wear and tear. Regular maintenance will prevent the need to replace the existing driveway. This is usually easy to do and requires up to a half day to complete depending on the condition and size of your repairs. Make sure to always wear safety goggles to protect your eyes when breaking cement. Do not try to feather the edges when patching with cement. When dry, the feathered area will crack and flake off. Use a plastic sheet to cover newly repaired concrete for a few days. This will help keep it moist and allow it to cure slowly. 2. Use a broom or a stiff brush to remove small pieces of concrete and dirt. 3. Then, using the spray attachment on your garden hose, spray the crack with water to remove any remaining debris. It is important to clean the crack well so that the patching material will adhere to it. 5. Allow all surfaces to dry before continuing with the patching process. 6. Cracks that are less than ¼" are generally filled with sealants that come in a caulk gun tube. If you are using filler from a caulk container or tube, squeeze the filler into the crack until it begins to overflow, then smooth it out using a metal or plastic scraper. It may be necessary to repeat this process to ensure that you are filling the crack completely. 7. Cracks that are between ¼" and ½" are generally filled with premixed mortar or sealants that come in a can or plastic jug. Shake or stir the product well before using. According to the directions on the product, squeeze the filler into the crack until it begins to overflow, then smooth it out using a metal or plastic scraper. It may be necessary to repeat this process to ensure that you are filling the crack completely. 8.
Cracks that are larger than 1/2" must be filled with premixed concrete which comes in bags of 60 lbs or 90 lbs. Mix your material as the manufacturer recommends, and pour this material into the crack. 9. Let the mortar set for about an hour before using a trowel to remove any excess. Allow the repaired area to dry completely, which may take several days. Make sure to keep it moist by lightly misting the area using a garden hose and keeping it covered with a plastic sheet. A slow cure will result in stronger concrete. When the concrete has cured, apply a water seal with either a roller or a sprayer.
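The width thresholds in the steps above amount to a simple decision rule. As a rough sketch only (the function name and return strings are illustrative, not from any standard):

```python
def recommended_filler(crack_width_in: float) -> str:
    """Map a crack width in inches to the repair material described above."""
    if crack_width_in < 0.25:
        # Under 1/4": sealant from a caulk-gun tube
        return "tube sealant"
    elif crack_width_in <= 0.5:
        # 1/4" to 1/2": premixed mortar or sealant from a can or jug
        return "premixed mortar or sealant"
    else:
        # Over 1/2": premixed concrete (60 or 90 lb bags)
        return "premixed concrete"

print(recommended_filler(0.125))  # a narrow hairline crack -> "tube sealant"
```

Whichever material the rule selects, the cleaning, overfilling, smoothing, and slow-cure steps from the guide still apply.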
http://w.easy2diy.com/cm/easy/diy_ht_3d_index.asp?page_id=35790167&parent1=Location&child1=Yard&parent2=Category
Program Overview: Emerge is a student-driven program dedicated to developing community and building young leaders through existing resources and collaborative partnerships. Students in Emerge will design and implement projects that have a positive impact on the community and they will attend monthly workshops to develop their own leadership capabilities. Emerge is a program for sophomores at ETHS. Students must apply in the SPRING of their FRESHMAN YEAR to be considered. Emerge has two main components: (1) Community projects are designed and implemented by groups of Emerge students. Each group will partner with a local school or community organization to receive guidance, information, and resources as they develop their projects. Past projects include The HUB, environmental initiatives, STEM programs for elementary students, the WitherBell Forum, and more. (2) Monthly workshops where sophomores learn skills to become effective leaders in their school and community as well as guidance and instruction on their projects. Workshops are led by the Community Service Office, Northwestern University students, and ETHS juniors. Throughout the program, participants will have the opportunity to: ● Meet key decision-makers in Evanston and ETHS ● Collaborate with local organizations and other passionate sophomore students ● Learn practical strategies for making change Specifics & Time Commitment: There is NO cost to participate. We provide dinner for students at each workshop and a t-shirt. Attendance is mandatory for all Emerge activities: 1. Meet & Greet — September: A one-hour event for students and parents in September. Facilitators and staff explain the details of the program, introduce program leaders, answer questions, and do an activity. 2. Field Trip/Retreat — September: Learn more about issues impacting our city and school through conversations with community leaders, school leaders, and experts in their fields. 
Students decide on the key issues they want to work on throughout the year. Past speakers include the Evanston mayor, ETHS superintendent, ETHS principal, Northwestern University president, Evanston Police Department, City of Evanston officials, activists, and student leaders. 3. Leadership Workshops — October-April: (students may miss one of the seven workshops IF approved by the Community Service Office in advance) The essence of the program is a series of leadership workshops held monthly on select Tuesday evenings from 6:15pm-9pm at ETHS. ETHS juniors and Northwestern University students facilitate interactive workshop activities and group work focusing on the following themes: ● Defining Leadership ● Communication and Advocacy ● Diversity and Inclusion ● Assets and Organization ● Project Planning and Group Dynamics ● Applying Leadership ● Presentation Skills 4. Community-Based Project Teams — October-May: (meetings scheduled during AM support, after school OR on weekends according to students’ availability) Each participant will work on a community project with 4-5 other Emerge students and with the assistance of ETHS student facilitators (juniors). The themes of the projects are chosen by the students with the goal of positively impacting the Evanston/ETHS community. Each project team partners with a community or school organization to design and implement a project. Teams will meet 2-3 times per month with their group outside of the workshops and once per month with the Community Service Office to check in. 5. Final Presentations — May: Students present their group project at Final Presentations in May. Parents, community members, school leaders, partners, former Emerge participants, and other guests are invited to the final presentations to learn about the impact of the projects. Important note: Many students with other commitments (such as student-athletes, performing artists, etc.) ARE able to manage the time commitment for Emerge.
However, if you are on a team or involved in an activity that also has strict attendance and meets on most Tuesday nights after 6:00pm, you probably can’t do both the activity and Emerge. Contact the Community Service Office with questions about how this might work for your individual situation.
https://www.eths.k12.il.us/Page/1483
A concerted effort has been undertaken by the US health care system to improve the quality of care received by American citizens. These efforts are based on the Affordable Care Act of 2010, which documents new incentives and strategies for quality health care (Ogrosky & Kracov, 2010). The government and non-governmental organizations have taken a keen interest in improving and sustaining quality health practices. In addition, these frameworks use quality concepts to align funding for public health and other grants with programs in priority areas (Doug, 2004). The Office of Inspector General supports quality care and services and assists its members to comply with the requirements found in F-520. Furthermore, these organizations have come up with frameworks that offer structures and systems in healthcare. These structures evaluate health programs and build evidence-based measures using the authority given by the Affordable Care Act for public health systems and services research.

Overview of OIG

The Office of Inspector General was established in 1978. It falls under the US Department of Defense. It is an independent monitoring body headed by an Inspector General appointed by the president. Its main responsibility is detecting abuse of office and fraud (Doug, 2004). Moreover, it provides information to Congress in a balanced, fair, non-ideological, non-partisan, objective and fact-based way (Jennifer, 2006). As an oversight arm for the US Congress, the OIG audits and evaluates the operations of the government. The Defense Criminal Investigative Service works closely with the Inspector General under the operations of the OIG. The OIG upholds values that include transparency and integrity on behalf of the US government (Lanier et al., 2003). Additionally, it ensures accountability to both the American people and the government.
In its operations, it follows strict professional standards of referencing and review of facts and analysis to check for accuracy.

How it supports quality care services

The OIG plays an important role in improving efficiency and transparency for the Department of Defense and, as a result, it supports quality care in service delivery on behalf of the federal government. Issues related to the procurement of supplies to the department are dealt with by the OIG. For instance, rigorous auditing of accounts is carried out by this office to identify any forms of fraud. Resource allocation to this office is done by the Department, which comprises qualified staff who have been vetted and proven to be efficient (Doug, 2004). The OIG also audits the operations of the defense department to determine the effectiveness and efficiency of quality care services as well as how federal funds are spent. In addition, it gives audited reports to law enforcement agencies for purposes of action (Cascardo, 2009). This is in line with the mission of the office of not only monitoring federal spending but also acting as a whistleblower. The OIG carries out investigations into improper spending practices and illegal activities carried out by defense staff.

OIG and Quality Assessment and Assurance (QAA)

Quality assessment and assurance (QAA) refers to the evaluation of structures and processes to determine whether they are achieving the expected standard of quality (Doug, 2004). The OIG emphasizes the need for the department to perform its duties according to required standards. Through quality assessment and assurance programs, the OIG encourages higher standards of care. It has set up standards in an attempt to improve spending practice. The OIG established the office of the Assistant to the Secretary of Defense. However, this office was later dissolved as part of efforts to improve quality assurance.
In addition, the Defense Audit Service also worked hand in hand with the latter, but it too was abolished to pave the way for the Office of the Inspector General (Doug, 2004). The OIG also offers assistance in terms of the facilities needed to carry out auditing services.

Conclusions

For the department to improve, a number of important issues must be addressed by the Office of the Inspector General. To begin with, there is a need for modernization of procurement policies for the federal government to foster fiscal discipline and reward efficiency and quality while maintaining quality access to services. Secondly, reforms should be made to the Medicare financial schemes by limiting payments made to governments or states that use the schemes inappropriately. Instead, the finances should be used to provide medical services. In addition, the government should create a budget that will ensure that the initiatives for restructuring the procurement system and its administration are in place. Also, programs should be put in place to ensure integrity in procurement and supplies.

References

Doug, M. (2004). Looking for loopholes. Beef, 40(11), 24.

Cascardo, D. (2009). OIG demands transparency for physicians and staff in 2010: Welcome to the modern era of compliance. The Journal of Medical Practice Management: MPM, 25(3), 156-159.

Lanier, C. et al. (2003). Doctor performance and public accountability. The Lancet, 362(9393), 1404-8.

Ogrosky, K., & Kracov, D. (2010). The impact of reform on health care fraud enforcement. The Brief, 40(1), 45-51.
https://demoessays.com/how-inspector-general-office-supports-quality-care-services/
13:21 | Lima, Jan. 16. Pope John Paul II visited Peru in 1985, when the country faced one of its worst economic, political and social crises, bringing jubilation and great hope to its population. At that time, Peruvians clung to the Holy Father's message, looking for a justification to cope with the situation they were in. A report by El Peruano official gazette revealed that 1985 was a real challenge for the country's economic authorities. The economy showed very high inflation, reduced growth, and a high public-sector deficit. In fact, Peru struggled to pay its foreign debt and to access foreign financing. Back then, governmental actions did not yield the expected results. The country achieved a GDP growth rate of only 1.6%, and its annual inflation reached 158.3%. Net international reserves increased by US$318 million, due to foreign currency purchases by Peru's Central Bank and not to a foreign trade policy. Three years later, in 1988, Pope John Paul II returned to the Andean country. His visit took place as part of the Marian Eucharistic Congress of Bolivarian Countries in Lima, Peru. The political, economic and social situation in those days was even worse. Poverty worsened due to the desperate measures taken by the then-Government. Peru's GDP contracted 8.8% in 1988. The mining, manufacturing and construction sectors saw their lowest production figures. Indeed, Peru experienced hyperinflation of 1,722.3% that year. This was explained by accumulated imbalances in the external and public sectors. The picture is completely different a few days before the arrival of Pope Francis. The current Pontiff will find a recovered and healthy nation, with indicators dissimilar from those reported during Pope John Paul II's visits. As is known, Pope Francis will visit Peru on January 18-21. His trip will include stops in the cities of Lima, Puerto Maldonado, and Trujillo.
https://andina.pe/ingles/noticia-pope-francis-in-peru-economic-changes-since-pope-john-paul-ii-1985-visit-696079.aspx
Tips for Finding a Supervisor

Having the agreement of a faculty member to review your application as a potential supervisor is a competitive advantage in submitting an application (though not a guarantee of admission) and is required for some UNBC programs. Finding a prospective supervisor starts with searching our Find a Supervisor directory and program pages to find a faculty member whose research focus aligns closely with your area of interest. Once you have identified a faculty member you are enthusiastic about potentially working with, you should reach out by email.

How to approach a potential supervisor

Faculty members receive a lot of inquiries from potential students, so you will want to make your initial inquiry relevant, concise and specific. Here are some tips on approaching a potential supervisor:
- Research the faculty member first. Review their published work and, if they have one, their profile or website. Make sure you understand their research and areas of interest.
- Be specific in your email. Choose a subject line that is clear and concise (e.g. "Prospective Applicant Inquiry - MSc NRES"). In order to stand out, instead of writing "Dear Professor", address the faculty member by their title (e.g. "Hello Dr. Smith"). Faculty members receive unsolicited emails sent to multiple people at one time, and this can make it difficult for them to prioritize a serious inquiry.
- Be brief in your introduction. Provide a quick summary of who you are and your academic qualifications. Highlight a couple of your areas of strength/features as a prospective student.
- Connect to their research. In your email approach, you should demonstrate you understand their active areas of research and briefly outline how this fits with your intended area of research.
- End on a thank you. In concluding your email, thank the faculty member for taking time to review your request.
- Be patient waiting for a reply. Start this process early.
It may take time to hear back and you don't want to leave this to the last minute. - Ask permission. After you have engaged in a dialogue, ask if they would be comfortable with you referencing them as a prospective supervisor in your application. Remember - a faculty member's agreement to consider your application is not a guarantee of admission. There are a lot of great resources on the Internet about how to approach a prospective supervisor that you may wish to consult. Here are some links to get you started: - Dear Dr. Neufeld (an article from a Professor with an annotated sample approach email). - How to Find a Supervisor for your PhD (a guide from Oxford with good general advice). - So, you want to go to grad school? Nail the inquiry email (from a STEM perspective).
https://www2.unbc.ca/admissions/graduate/tips-finding-supervisor
Blog Post: Javier A. Reyes Key words: Evidence, Expert Discovery, Damages. Evidence – Expert Discovery/Damages: In Vazquez v. Martinez, 175 So. 3d 372 (Fla. 5th DCA 2016), the court addressed the credibility of expert witnesses and the standard for obtaining future medical expenses. With respect to an expert witness's credibility, the court held that the plaintiff was entitled to fully explore the relationship between the defendant and his expert witnesses. During the trial, the trial court permitted the plaintiff to present evidence that payments totaling almost $700,000 were made “by the defense or its agents” to the defendant's expert witnesses. Here, an insurance company represented the tortfeasor defendant. The defense argued that this evidence was irrelevant because the insured did not have any direct financial relationship with any of the experts, and that instructing the jury on payments made by “representatives of the defendant” or “defendant or its agents” improperly implied the existence of insurance. The Fifth DCA acknowledged that, typically, “introducing the subject of insurance where insurance is not a proper issue constitutes prejudicial error,” citing Herrera v. Moustafa, 96 So. 3d 1020, 1021 (Fla. 4th DCA 2012). Nevertheless, the court reasoned that: A party may attack the credibility of a witness by exposing a potential bias. § 90.608(2), Fla. Stat. (2013). “A jury is entitled to know the extent of the financial connection between the party and the witness, and the cumulative amount a party has paid an expert during their relationship.” Allstate Ins. Co. v. Boecher, 733 So. 2d 993, 997 (Fla. 1999). The court added that “whether the party has a direct relationship with any of the experts does not determine whether discovery of the doctor/law firm relationship or doctor/insurer relationship is allowed.” The purpose of § 90.608(2) is to expose any bias between the expert and the party, including ties between the litigants’ agents (e.g., lawyers and the insurer).
The Fifth DCA cited the Herrera case again, which held that a party was entitled to show financial ties between an expert and a litigant by showing that the defense firm had paid the expert $330,000. Herrera, 96 So. 3d at 1021. The Fifth DCA did emphasize that the trial judge must permit evidence of possible bias without disclosing the actual existence of insurance. Second, the Fifth DCA reversed a jury award of $50,000 for future medical expenses because the evidence was insufficient. The court reaffirmed that future medical expenses may be awarded only if they are reasonably certain to be incurred in the future. As part of that analysis, there must also be an evidentiary basis upon which the jury can, with reasonable certainty, determine the amount of those expenses. A mere possibility that certain treatment might be obtained in the future cannot form the basis of an award of future medical expenses. Fasani v. Kowalski, 43 So. 3d 805, 812 (Fla. 3d DCA 2010). In Vazquez, the experts testified that the Plaintiff did not need future surgery or follow-up treatment. And while the experts recognized that she may seek medications or chiropractic or physical therapy, neither expert thought it would be helpful. Therefore, there was “no competent, substantial evidence establishing that Ms. Martinez was reasonably certain to incur expenses for future medical treatment.” Takeaways: When a Plaintiff is exploring an expert’s relationship with an insurance company, the court must be careful to permit evidence of bias (i.e., the relationship between the expert and the insurance company) without disclosing to the jury the existence of insurance. The court’s reasoning in this regard sanctions what amounts to a fig-leaf of sorts, since some jurors might surmise that the plaintiff in a run-of-the-mill automobile negligence case does not have the resources to directly pay doctors $700,000 in fees.
Nevertheless, it is fair to say that the court prioritized a party’s ability to attack an expert’s credibility over the need to guard against the disclosure of insurance and the potential prejudice arising therefrom.
https://brresq.com/2015/09/18/javier-a-reyes-key-words-evidence-expert-discovery-damages/
Thermodynamic Laws that Explain Systems

A thermodynamic system is one that interacts and exchanges energy with the area around it. The exchange and transfer need to happen in at least two ways. At least one way must be the transfer of heat. If the thermodynamic system is "in equilibrium," it can't change its state or status without interacting with its environment. Simply put, if you're in equilibrium, you're a "happy system," just minding your own business. You can't really do anything. If you do, you have to interact with the world around you.

A Zeroth Law?

The zeroth law of thermodynamics will be our starting point. We're not really sure why this law is the zeroth. We think scientists had "first" and "second" for a long time, but this new one was so important it should come before the others. And voila! Law Number Zero! Here's what it says: when two systems are each in thermal equilibrium with a third system, they are also in thermal equilibrium with each other. In English: systems "One" and "Two" are each in equilibrium with "Three." That means they each have the same temperature as "Three." But if THAT's true, then the temperatures of "One" and "Two" must ALSO match. This means that "One" and "Two" have to be in equilibrium with each other.

A First Law

The first law of thermodynamics is a little simpler. The first law states that when heat is added to a system, some of that energy stays in the system and some leaves the system. The energy that leaves does work on the area around it. The energy that stays in the system increases the internal energy of the system. In English: you have a pot of water at room temperature. You add some heat to the system. First, the temperature and energy of the water increase. Second, the system releases some energy, and that energy does work on the environment (maybe heating the air around the water, making the air rise).

A Second Law

The big finish!
The second law of thermodynamics explains that it is impossible to have a cyclic (repeating) process that converts heat completely into work. It is also impossible to have a process that transfers heat from cool objects to warm objects without using work. In English: the first part of the law says no heat engine is 100% efficient. Some amount of energy is always lost as heat; a system cannot convert all of its thermal energy into working energy. The second part of the law is more intuitive. A cold body can't heat up a warm body on its own. Heat naturally flows from warmer to cooler areas, spreading out to regions with less heat. If heat is going to move from cooler to warmer areas, it is going against what is "natural," so work must be put into the system for it to happen.
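The first law described above is just the bookkeeping identity ΔU = Q − W (heat in, minus work done on the surroundings, equals the change in internal energy). A minimal sketch in Python — the function name and all numeric values are illustrative, not taken from the article:

```python
# First law of thermodynamics: dU = Q - W
# Q: heat added to the system (joules), W: work done BY the system (joules).

def internal_energy_change(heat_added_j: float, work_done_j: float) -> float:
    """Return the change in internal energy of a system, in joules."""
    return heat_added_j - work_done_j

# Heating a pot of water: 500 J of heat added, 120 J of work done on the
# surroundings (e.g., warming and lifting the air above the pot).
delta_u = internal_energy_change(heat_added_j=500.0, work_done_j=120.0)
print(delta_u)  # 380.0 J stays in the system as internal energy

# One consequence of the second law: no cyclic engine converts heat
# completely into work, so thermal efficiency W/Q_in is always below 1.
q_in, work_out = 500.0, 120.0
print(work_out / q_in)  # 0.24 -- well short of 100% efficiency
```

The sign convention matters: here W is work done by the system, so energy leaving as work reduces the internal energy, matching the pot-of-water description above.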
http://physics4kids.com/files/thermo_laws.html
Question #173404: On a map, the distance between two towns is 2.6 cm. The scale of the map is 1 cm : 50 km. What is the actual distance between the two towns?

Expert's answer: 1 cm on the map represents 50 km, so 2 cm represents 100 km. Add the remaining 0.6 cm (0.6 × 50 = 30 km), and the total real-world distance is 100 + 30 = 130 km.
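The same conversion is a single multiplication, map distance times the scale factor. A small sketch (the helper name is mine, not from the answer):

```python
# Map-scale conversion: with a scale of 1 cm : 50 km, a map distance
# of d cm corresponds to d * 50 km on the ground.

def actual_distance_km(map_cm: float, km_per_cm: float = 50.0) -> float:
    """Convert a measured map distance (cm) to a real distance (km)."""
    return map_cm * km_per_cm

print(actual_distance_km(2.6))  # 130.0
```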
https://hwdoer.com/answer-to-question-173404-in-regional-geography-2/
Do we know the total environmental costs of our lives? How, where, and from what were the products we consume and use for our activities produced? What are the implications for nature and the environment? What are the subsequent impacts on people? Many questions surround our everyday life. The only thing we know for sure is that nature will survive. But how can humanity flourish on this planet a thousand years from now? A million years from now? What about the time when our children grow up? Is humanity able to share the planet with the other life it needs to sustain itself? Are people able to make agreements that limit their short-term well-being for the benefit of the future and their own children? I do not know the answers. What I have learned is that if you want action, you need a force. The results of my research aim to provide that force. If consumers were more aware of the global environmental and social consequences of their consumption, and of their impact on people in other parts of the globe, it would undoubtedly lead to more conscious efforts to modify their behavior and add pressure on business and governments to amend their policies. However, it is first necessary to quantify those consequences and provide evidence of them. I am just a small piece of the puzzle. See carbonfootprintofnations.com, which covers results from more researchers on the global displacement of greenhouse gas emissions, water use, land use, fossil energy use, and impacts on biodiversity. If it is still alive, visit eureapa.net and try to design your own consumption and production patterns to see how they change the environmental profile of our society. The nice thing about science is that the way to answer a question creates new questions.
https://udrzitelnost.cz/czp/index.php/en/jan-weinzettel-personal-page/jan-weizettel-research
STATE OF KANSAS, Appellee, v. WILLIAM L. MOORE, JR., Appellant. No. 47,871 Supreme Court of Kansas. Opinion filed December 13, 1975. Michael E. Foster, of Anderson & Foster, of Valley Center (court-appointed), argued the cause and was on the brief for the appellant. Stephen E. Robison, Assistant District Attorney, argued the cause, and Curt T. Schneider, Attorney General, Keith Sanborn, District Attorney, and Stephen M. Joseph, Assistant District Attorney, were with him on the brief for the appellee. The opinion of the court was delivered by KAUL, J.: Defendant-appellant (William L. Moore, Jr.) appeals from convictions by a jury of theft (K.S.A. 21-3701 [a]) and unlawful deprivation of property (K.S.A. 21-3705). Defendant was acquitted of a charge of burglary. During the early morning hours of July 25, 1973, Kenny's Eastgate Mobile Service Station in Wichita was burglarized. Entry into the station was accomplished by breaking a windowpane. The burglary was discovered by an officer (J.D. Jones) on a routine building patrol about 4:30 a.m. The lessee-operator of the station, Kenneth Johnson, was summoned to the scene and determined that $150 in cash was missing. Mr. Johnson described the missing money as one ten-dollar bill, eleven five-dollar bills; fifty one-dollar bills; a roll of quarters; and a roll of dimes. Mr. Johnson informed the police that defendant had worked at his station intermittently for the past three years; and had been employed two weeks prior to *451 the burglary, but had quit and taken a job at another local filling station. Mr. Johnson also told the police that during the evening preceding the burglary Moore had been working on his automobile at the station until about 8 p.m. About 10 p.m. Johnson closed the station and left. Johnson owned a red jeep, which was parked outside the station when he left. 
Charles Haskell, a Wichita Police Officer, called by the state, testified that while on patrol in the general vicinity of the Johnson station, during the early morning hours of July 25, 1973, he discovered an automobile in a ditch on Rock Road and a red jeep parked nearby. A man, later identified as defendant, was racing the automobile engine in an attempt to extricate it from the ditch. After a conversation with defendant about his predicament, Officer Haskell and defendant attached a rope from the jeep to defendant's automobile. Haskell got in the jeep and used it to pull defendant's automobile from the ditch. At defendant's request, Haskell then followed him as he returned the jeep to Kenny's Mobile Station. According to Haskell, defendant explained that he worked at the station and had walked down there to get the jeep in order to get his automobile out of the ditch. Haskell testified that he returned defendant to his automobile and parted company with him at approximately 4:10 a.m. As we have noted, the burglary was discovered by Officer Jones around 4:30 a.m. Police knowledge that defendant had pled guilty to charges of burglary and theft of the same station about a year and a half prior to the incident in question and his association with the station aroused suspicion of defendant. Officers Jones and Haskell, accompanied by Detectives Brown and Rhodes, went to defendant's residence about 8:30 a.m.; they knocked on the front door and received no response. The officers then proceeded to walk around the house and looking through the window saw defendant lying on a bed and a considerable amount of money on the floor next to the bed. The officers returned to the front door and knocked louder, but still got no response. The officers discovered the door was unlocked; they entered the house; awakened defendant; and put him under arrest. Detective Brown testified that he advised defendant of his Miranda rights (Miranda v. Arizona, 384 U.S. 436, 16 L. Ed. 
2d 694, 86 S.Ct. 1602) by reading a card and that defendant responded that he understood all of his rights; and further that he would talk to the officers. The money found in the bedroom generally matched *452 the description given by Mr. Johnson. A "jumper wire" fashioned out of a piece of wire with clips attached at both ends was found in defendant's automobile. It was identified by Johnson as a tool which he had made and used at his station to start and repair automobiles. Defendant testified in his own behalf. He denied the burglary and theft, but admitted that he had been at the station with a friend working on his automobile early in the evening. He said that he had used the "jumper wire" in attempting to repair the radio in his automobile and had left it on the floorboard. Defendant testified that after he left the station he went home, cleaned up, and spent the night at the "Casino" and "Lamplighter" Clubs drinking beer and whiskey; that he left the "Lamplighter" about 3 a.m. and next remembered finding himself stuck in the ditch on Rock Road. Defendant testified that he had never before seen the money which the police found on his bedroom floor, but that he had bought the roll of quarters while he was at the Casino Club playing a coin operated game. The only point raised by defendant on appeal concerns the admission into evidence of his previous conviction of burglary and theft of the Johnson station. The evidence in question was admitted through the testimony of James Hatfield, bailiff and parole officer, who produced the information and journal entry of judgment in the previous case, wherein defendant had entered a plea of guilty before the same division (No. 8) of the Sedgwick County District Court. The matter was first discussed out of the hearing of the jury. During this hearing,
after defendant's counsel had examined the information, the court inquired if there was any objection, and defendant's counsel responded: "Yes, I object to this on the grounds it's improper." The court responded: "The Court will take judicial notice of its own file. The objection is overruled. The Court does take judicial notice of the contents of the file marked Case No. CR-7825. You may go ahead." On appeal defendant asserts that the admission into evidence of the information and journal entry of the prior conviction, over his timely objection, was erroneous in that the prejudicial nature of the evidence far outweighed its materiality. The state's response is twofold. First, the defendant's general objection at trial does not meet the standard of specificity required by K.S.A. 60-404 and, *453 second, that defendant's prior conviction for burglary and theft of the same station, less than two years prior to the instant charge, was highly relevant and probative. In this connection defendant, on the one hand, contends that none of the eight elements enumerated in K.S.A. 60-455 were at issue; while the state, on the other hand, asserts that all of the 60-455 elements were at issue with the exception of opportunity. Although defendant does not specify error concerning the court's instructions, he points out in his argument that in connection with the prior conviction the court gave what has been labeled a "shotgun" limiting instruction, which included all of the elements or exceptions listed in 60-455. Defendant says that the reason the instruction was given without limiting it to the exceptions, which appeared to be applicable, is the fact that none of them were substantially at issue.
Defendant argues that evidence of the prior conviction was admitted and the instruction broadly covering all eight of the exceptions was given without any consideration as to the probative value of the prior conviction versus its prejudicial effect or its relation to any of the eight exceptions set forth in the statute. In his additional commentary pertaining to § 60-455 (Gard, Kansas Code of Civil Procedure, 1975 Cumulative Supplement) Judge Gard observes that because of the number of cases dealing with 60-455 it is reasonable to infer that trial judges have overemphasized the admissibility sanction of the statutory rule and have failed to recognize that relevancy must first be established before a former conviction may be received to prove some element of the crime charged. We agree with the inference drawn by Judge Gard that the misinterpretation of the statute by some trial judges has given rise to the many problems encountered in its application in criminal cases. In the recent case of State v. Cross, 216 Kan. 511, 532 P.2d 1357, Justice Schroeder noted that the admissibility of prior convictions for the limited purposes authorized by 60-455 has become one of the most troublesome areas in the trial of a criminal case. An indepth review of our cases dealing with the subject appears in State v. Cross, supra, and State v. Bly, 215 Kan. 168, 523 P.2d 397. Further discussion of the pronouncements of Cross and Bly would, for the most part, only be repetitious. It will suffice to say that the opinions in those cases clearly indicate that the position taken by *454 this court is that of a very conservative attitude in the admission of evidence of prior convictions for the limited purposes authorized by 60-455. First, the evidence must pass the test of relevancy and to meet the test the former conviction must be shown to have some peculiar significance other than merely its force in showing a disposition of the defendant to commit a certain type of crime. 
In other words, the facts of the prior crime should be linked in the similarity of the two offenses in order to show relevancy of a prior conviction. (State v. Johnson, 210 Kan. 288, 502 P.2d 802; State v. Cross, supra; State v. Bly, supra; and additional commentary appearing in Gard, Kansas Code of Civil Procedure, 1975 Cumulative Supplement, § 60-455.) It must also be kept in mind that a showing of relevancy alone does not conclusively establish admissibility. The next step in the exercise of discretion in determining admissibility is balancing the probative value of such evidence for the limited purpose for which it is offered against prejudicial effect thereof in keeping with the philosophy expressed in 60-455, and the fundamental rules governing the exercise of discretion. (State v. Johnson, supra; State v. Davis, 213 Kan. 54, 515 P.2d 802; and State v. Clingerman, 213 Kan. 525, 516 P.2d 1022.) In summary, the trial court first must determine relevancy on a basis of factual similarity; second, it must specifically find that one or more of the exceptions enumerated in 60-455 is at issue; and, third, balance probative value against prejudicial effect. While the determination of relevancy is a matter left to the judicial discretion of the trial judge that discretion must not be abused. It must be based upon some knowledge of the facts, circumstances or nature of the prior offense. (State v. O'Neal, 204 Kan. 226, 461 P.2d 801.) It is the better practice to conduct a hearing in the absence of the jury to determine the probative value as to one or more of the eight statutory elements to which such evidence must be relevant. (State v. Gunzelman, 210 Kan. 481, 502 P.2d 705.) 
One of the circumstances frequently giving rise to the problems in this troublesome area in a criminal trial is the failure of the defense to lodge specific objections and, likewise, the failure of the prosecution in stating the purpose for which the prior conviction is offered to specify the one or more statutory elements which is deemed to be at issue in the case. Our contemporaneous objection rule is codified in K.S.A. 60-404. The importance of the rule in the conduct of a trial is set forth in Baker v. State, 204 Kan. 607, 464 P.2d 212, wherein Justice O'Connor speaking for the court said:*455 "The contemporaneous objection rule long adhered to in this state requires timely and specific objection to the admission of evidence in order for the question of admissibility to be considered on appeal. (K.S.A. 60-404.) The rule is a salutary procedural tool serving a legitimate state purpose. (See, Mize v. State, 199 Kan. 666, 433 P.2d 397; State v. Freeman, 195 Kan. 561, 408 P.2d 612, cert. denied, 384 U.S. 1025, 16 L. Ed. 2d 1030, 86 S. Ct. 1981.) By making use of the rule, counsel gives the trial court the opportunity to conduct the trial without using the tainted evidence, and thus avoid possible reversal and a new trial. Furthermore, the rule is practically one of necessity if litigation is ever to be brought to an end." (p. 611.) In the instant case defendant made only a general objection stating that the admission of the prior conviction was "improper." He failed to make clear the specific grounds of his objection, nor did he make known to the court the action which he desired the court to take, i.e., whether the court should have rejected evidence of the prior conviction in toto, or if admitted what specific element or elements of 60-455 should jury consideration have been limited to. 
A natural consequence of such a general objection is the state's failure, in response, to finger specifically the one or more elements which it deems to be at issue under the facts developed. Lack of specificity in this regard substantially increases the burden of the trial court in resolving the question of admissibility. Adherence to the contemporaneous objection rule is essential to the orderly and effective conduct of a criminal trial. In State v. Parker, 213 Kan. 229, 516 P.2d 153, we considered the import of K.S.A. 60-404 and the related rule prescribed in K.S.A. 60-246. In the Parker case we held that where a defendant ignores the mandate of 60-404 and 60-246, by failing to make clear the specific grounds of his objection to evidence and to make known to the court the action he desired, he has failed to show prejudicial error. In the instant case, the objection lodged fails to preserve error subject to appellate review. Nevertheless, we do not hesitate to say that on the facts at bar identity is clearly shown to be the critical question at issue. Since the same filling station was the object of two burglaries, less than two years apart, the prior conviction was relevant in determining the identity of the burglar. As we have previously indicated, defendant did not object to the "shotgun" limiting instruction at trial, nor did he specify the giving thereof as error on appeal. After admitting the prior conviction evidence in the first instance, a limiting instruction was necessary. Even though we have, on many occasions, disapproved *456 the instruction in the form in which it was given here, we cannot say the submission thereof to be clearly erroneous under the circumstances attendant here. (State v. Bly, supra.) We find no error that justifies the granting of a new trial and the judgment is affirmed.
https://law.justia.com/cases/kansas/supreme-court/1975/47-871-1.html
The State, Respondent, v. Christopher Ramsey, Appellant. Appeal From Kershaw County Edward B. Cottingham, Circuit Court Judge Opinion No. 25325 Heard May 8, 2001 - Filed July 23, 2001 AFFIRMED Assistant Appellant Defender Robert M. Dudek, of South Carolina Office of Appellate Defense, of Columbia, for appellant. Attorney General Charles M. Condon, Chief Deputy Attorney General John W. McIntosh, Assistant Deputy Attorney General Donald J. Zelenka, Assistant Attorney General Jeffrey A. Jacobs, and Solicitor Warren B. Giese, all of Columbia, for respondent. William Mobley ("Mobley") was the night cashier at the Flamingo video games parlor in the Liberty Hill section of Kershaw County, South Carolina. On March 21, 1997, Richard Bowers ("Bowers"), a newspaper delivery person, discovered Mobley's dead body on Spring Rock Road as he was delivering the morning paper. Mobley had been brutally beaten and murdered. His throat was slashed from ear to ear with a serrated knife. The Flamingo is a tavern and video poker establishment located about a mile from where Bowers found Mobley's body. Mobley was the sole employee of the Flamingo from midnight until 8:00 a.m., and the doors were locked between those hours. He would only allow people he knew into the club between those hours. Police investigators learned Mobley knew Ramsey, and probably would have let him in the Flamingo that night. At the crime scene, police investigators found signs of a struggle and footprint impressions. The investigators also found a trail of blood stretching 244 feet from Mobley's body. They took plaster casts of footprints and tire impressions found at the scene. There were several pieces of evidence linking Ramsey to the crime. First, police investigators found a striped sweater with blood on it near the crime scene. (1) One of the hairs from the sweater was determined to be Mobley's. Several witnesses identified Ramsey as wearing the striped sweater on the night of the murder. 
Tony Crolley testified he saw Ramsey driving in front of the Flamingo on the night of the murder wearing the striped sweater. He also stated he thought Ramsey wore a striped shirt or sweater with overalls most of the time. Truman Payne, who operated the Beaver Creek restaurant, testified Ramsey had been in the restaurant wearing a striped sweater on the night of the murder. Melissa Payne, who was also working at the restaurant that night, corroborated her husband's testimony concerning the sweater, and testified there "is not no doubt in my mind" Ramsey was wearing a striped sweater. Furthermore, investigators showed photographs of the sweater to other police officers. Several police officers saw someone wearing the sweater at the Kershaw County Courthouse prior to the murder. The investigators prepared a photographic lineup that included a picture of Ramsey. Two officers, Deputies Patrick Boone and David Dowey, both identified Ramsey from the lineup as the person wearing the sweater several days before at the courthouse. In fact, Deputy Boone knew Ramsey and had a conversation with him outside the courthouse when he was wearing the sweater. The second piece of evidence linking Ramsey to the crime was a bloody boot found by police investigators at Ramsey's trailer. Although the pattern on the sole of the boots was similar to the cast taken from the crime scene, the boots were larger than the cast. (2) Blood from the boot as well as the blood from the sweater were sent to SLED for DNA testing. SLED agents determined the blood on the sweater matched the DNA profile of Mobley. The blood from the boot was forwarded to a lab in Tennessee for more testing. Michael Deguglielmo ("Deguglielmo") from the Tennessee lab determined, through a polymerase chain reaction ("PCR") analysis, that the blood from the boot contained a mixture of DNA from two people. Deguglielmo testified the blood was not contaminated, it was simply a mixed sample. 
He further stated that mixed samples are a reality of life in forensic testing. Deguglielmo concluded he could not exclude Mobley as one of the persons whose blood was on the boot. Based on his testing, he determined the chance the DNA on the boot did not come from Mobley was one in 4,601 - a percentage greater than 99.9. The third piece of evidence linking Ramsey to the murder was the tire impressions taken from the crime scene. Police investigators photographed the tires on Ramsey's blue station wagon in order to compare them with the casts of the tire tracks from the crime scene. The impressions and the tires appeared to be similar. The tire tracks at the scene indicated the car was moving from the site where Mobley's body was discovered toward the site where the sweater was discovered. Ramsey was seen in the blue station wagon on the night of the murder. Finally, when police officers arrested Ramsey, they advised him of his Miranda rights, and he waived them verbally and by signing a Miranda waiver form. As Captain Tomley was transporting Ramsey to the Sheriff's Department, Ramsey stated, "I guess you guys are going to be arresting Joey [Conners], because he was with me that night." Ramsey was indicted by the Kershaw grand jury for the offenses of murder, kidnaping, and armed robbery. On November 10, 1998, the jury found Ramsey guilty of murder and kidnaping, but acquitted him on the armed robbery charge. The trial judge sentenced Ramsey to life imprisonment for murder but did not sentence him on the kidnaping charge pursuant to S.C. Code Ann. § 16-3-910 (Supp. 2000). The following issues are before this Court on appeal: I. Did the trial judge err in refusing to conduct an in camera hearing regarding the suggestiveness of an out-of-court identification? II. Did the trial judge abuse his discretion by admitting expert testimony that the blood on the boot from Ramsey's trailer matched the blood of the victim? III. 
Did the trial judge properly deny Ramsey's motion for directed verdict? Relying on State v. Williams, 258 S.C. 482, 189 S.E.2d 299 (1972), Ramsey argues the trial judge erred by refusing to hold an in camera hearing to challenge the suggestiveness of the photographic lineup identification made by Deputies Boone and Dowey. We disagree. Where identification is concerned, the general rule is that a trial court must hold an in camera hearing when the State offers a witness whose testimony identifies the defendant as the person who committed the crime, and the defendant challenges the in-court identification as being tainted by a previous, illegal identification or confrontation. State v. Cash, 257 S.C. 249, 185 S.E.2d 525 (1971). For example, in State v. Simmons, 308 S.C. 80, 417 S.E.2d 92 (1992), this Court remanded the case for an in camera hearing where a witness saw a suspect at a bond hearing prior to his in-court identification of the suspect. The witness may have gotten a "fix" on the suspect at the bond hearing because the suspect's name was called and she came forward for the judge to set bond. Id. An in camera hearing was needed to determine whether the in-court identification was of independent origin or was the tainted product of the circumstances surrounding the bond hearing. Id. The cases cited by Ramsey involve situations where an in-court identification is the product of an unlawful confrontation or lineup. These cases, however, are immaterial because the issue in this case is simply whether an out-of-court photographic lineup was impermissibly suggestive, not whether the subsequent in-court identification was tainted. Defense counsel did not object to the manner in which the photographs in the lineup were presented to the officers. 
Ramsey's main concern is that after the sweater was discovered, and the police department identified him as a suspect, investigators asked the two officers whether they had seen anyone in the lineup wearing the striped sweater. Deputy Boone was able to identify Ramsey with no problem. Deputy Dowey, however, recognized the sweater, but could not remember who he saw wearing it. He remembered Ramsey wore the sweater at the police station only after he saw the lineup. Defense counsel maintains the two officers would naturally "pick the big suspect in the big case for the Sheriff's Department at that time." Even if the photographic lineup was impermissibly suggestive, any error in the trial court's refusal to conduct an in camera hearing was harmless. See Simmons, supra (noting that under certain circumstances, if the identification is corroborated by either circumstantial or direct evidence, the harmless error rule is applicable). Even without the deputies' testimony, the State had the testimony of three witnesses who testified they had "no doubt" Ramsey was wearing a striped sweater on the day of the murder. Ramsey did not challenge this testimony, which is actually more probative than the deputies' testimony. Therefore, even if the admission of the deputies' testimony was erroneous, it was harmless error.

II. DNA Evidence

Ramsey argues the trial judge erred by admitting the "contaminated" DNA evidence from his boot. Ramsey also contends the DNA evidence was unreliable pursuant to Rule 702, SCRE. We disagree. This issue contains the following two distinct subparts: (1) whether the evidence was tainted and totally unreliable pursuant to State v. Ford, 301 S.C. 485, 392 S.E.2d 781 (1990); and (2) whether Mr. Deguglielmo's expert testimony was based on unreliable scientific evidence which should be excluded pursuant to Rule 702, SCRE.

A.
Tainted Evidence According to Ramsey, the police investigators mishandled all of the evidence in this case and committed "classic violations of evidence preservation." Specifically, pieces of evidence were taken from one crime scene to another. For example, the blood evidence was taken from the murder scene, to the Flamingo, and finally to Ramsey's home. The police investigators also took the striped sweater from the murder scene to the Flamingo where they removed hairs. Finally, the bloody footprint casts were taken from the murder scene to Ramsey's home where they were washed. According to Ramsey's expert witness, Donald Girndt ("Girndt"), a former SLED agent and crime scene investigator, the blood on the boot could have been contaminated because the victim's blood from the casts may have splashed onto the boot while they were being washed. Girndt maintained the evidentiary value of the blood on the boot under these circumstances was virtually zero. DNA evidence may be admitted in judicial proceedings in this State in the same manner as other scientific evidence, such as fingerprint analysis and blood tests. Ford, supra. However, the admissibility of DNA evidence remains subject to traditional attack, such as attacks based on relevancy or prejudice. Id. According to this Court in Ford, "traditional challenges to the admissibility of [DNA] evidence such as the contamination of the sample or chain of custody questions may be presented. These issues relate to the weight of the evidence. The evidence may be found to be so tainted that it is totally unreliable and, therefore, must be excluded." Id. at 490, 392 S.E.2d at 784. We find the DNA evidence in this case is not so tainted that it is totally unreliable. Two conflicting theories were offered at trial as to how the evidence was collected and its potential for contamination. 
Ramsey maintains the blood on the boot could be contaminated, while the police officers testified they were careful and complied with procedures. We find these issues relate to the weight of the evidence.

B. Admissibility Pursuant to Rule 702, SCRE

The proper standard for the admissibility of scientific evidence is outlined in Rule 702, SCRE. Pursuant to Rule 702, SCRE, in order for the evidence to be admissible, the trial judge must find the scientific evidence will assist the trier of fact, the expert witness is qualified, and the underlying science is reliable. (3) The trial judge should determine the reliability of the underlying science by using the following factors: (1) the publication and peer review of the technique; (2) prior application of the method to the type of evidence involved in the case; (3) the quality control procedures used to ensure reliability; and (4) the consistency of the method with recognized scientific laws and procedures. State v. Council, 335 S.C. 1, 515 S.E.2d 508 (1999). Further, even if the evidence is admissible under Rule 702, SCRE, the trial judge must determine if its probative value is outweighed by its prejudicial effect under Rule 403, SCRE. Once the evidence is admitted under these standards, the jury may give it such weight as it deems appropriate. Id. Ramsey does not challenge the evidence based on any of the Council factors. Most importantly, he does not challenge the qualifications of the State's expert, Deguglielmo, or the reliability of the PCR procedure. He challenges only the manner in which the police officers handled the evidence prior to the testing conducted by Deguglielmo. The issue is whether Deguglielmo's expert testimony regarding the DNA evidence was unreliable pursuant to Rule 702, SCRE and, therefore, inadmissible. This Court reviews the admission of such testimony under an abuse of discretion standard. Payton v. Kearse, 329 S.C. 51, 495 S.E.2d 205 (1998).
The trial judge did not abuse his discretion by admitting Deguglielmo's expert testimony. First, Ramsey does not challenge the PCR procedure used by Deguglielmo to test the DNA samples. Even if the samples were mixed during collection, Ramsey does not demonstrate Deguglielmo's testing procedure was unreliable, much less so unreliable as to warrant exclusion. Second, the testimony concerning the DNA evidence complied with the requirements as set forth in Council. Any evidence concerning contamination, therefore, went to the weight of the testimony, not its admissibility. Finally, the mixture of DNA evidence is not a basis for the exclusion of PCR evidence. See Oregon v. Lyons, 863 P.2d 1303 (Or. Ct. App. 1993) (finding the potential for DNA contamination presents an "open field" for cross examination at trial, but does not indicate the PCR method of DNA testing is inappropriate for forensic use). According to Deguglielmo, mixed samples are simply a fact of life in forensic science.

III. Directed Verdict

Ramsey argues the trial judge should have directed a verdict of acquittal because the DNA evidence should have been excluded. (4) Because we find the DNA evidence was admissible, it is unnecessary for us to address Ramsey's directed verdict motion. Based on the foregoing, Ramsey's sentence and convictions are AFFIRMED.

MOORE, WALLER, BURNETT and PLEICONES, JJ., concur.

1. The sweater is a distinctive gray, brown, and beige horizontally striped sweater.

2. Evidence was presented which indicated two people murdered Mobley. Ramsey was seen on the night of the murder with his roommate Joey "Clown" Connors.

3. Rule 702, SCRE, states: If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise.

4. A defendant is entitled to a directed verdict when the State fails to produce evidence of the offense charged. State v. Brown, 103 S.C. 437, 88 S.E. 21 (1916). If there is any direct evidence or substantial circumstantial evidence reasonably tending to prove the guilt of the accused, the Court must find the case was properly submitted to the jury. State v. Pinckney, 339 S.C. 346, 529 S.E.2d 526 (2000).
http://www.judicial.state.sc.us/opinions/displayOpinion.cfm?caseNo=25325
UPDATE for 11 a.m. ET: The first official announcements for today's news have been released. See the latest story here: Dark Matter Possibly Found by $2 Billion Space Station Experiment. NASA will unveil the first discoveries from a powerful $2 billion particle physics experiment on the International Space Station in what could be a major vindication for the science tool, which almost never made it into space. The space agency will hold a press conference at 1:30 p.m. EDT (1830 GMT) today, April 3, to reveal the first science results from the experiment, called the Alpha Magnetic Spectrometer. You can watch the AMS science results live on SPACE.com, via NASA TV. The Alpha Magnetic Spectrometer is an advanced cosmic-ray detector designed to seek out signs of antimatter and elusive dark matter from its perch on the backbone-like main truss of the International Space Station. More than 200 scientists representing 16 countries and 56 institutions are part of the science team, which is led by Nobel laureate Samuel Ting, a physicist at MIT. "AMS is a state-of-the-art cosmic ray particle physics detector located on the exterior of the International Space Station," NASA officials said in an announcement Tuesday (April 2). [See photos of the Alpha Magnetic Spectrometer in space] NASA and the AMS team have not revealed exactly what the first science results from AMS will be, but Ting has assured that it will be a significant announcement. "It will not be a minor paper," Ting said on Feb. 17 during the annual meeting of the American Association for the Advancement of Science in Boston, adding that it would represent a "small step" toward understanding the true nature of dark matter, even if it is not the final answer. The spectrometer consists of a huge, 3-foot wide magnet that bends the paths of cosmic particles and steers them into special detectors designed to measure particles' charge, energy and other properties. 
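The magnet-based measurement described above can be illustrated with the standard gyroradius relation r = p/(qB): the higher a particle's momentum, the less it bends, so the curvature measured by the detectors encodes momentum, and the bend direction gives the sign of the charge. The field strength and momentum below are illustrative assumptions, not AMS specifications; a minimal sketch:

```python
# Gyroradius of a charged particle in a magnetic field: r = p / (q * B).
# A stiffer (higher-momentum) particle bends less; the bend direction
# reveals the sign of the charge.

E_CHARGE = 1.602e-19  # elementary charge, C
C_LIGHT = 3.0e8       # speed of light, m/s

def gyroradius_m(p_gev_per_c: float, charge_e: float, b_tesla: float) -> float:
    """Bending radius in metres for momentum given in GeV/c."""
    p_si = p_gev_per_c * 1e9 * E_CHARGE / C_LIGHT  # GeV/c -> kg*m/s
    return p_si / (abs(charge_e) * E_CHARGE * b_tesla)

# Illustrative numbers only: a 10 GeV/c singly charged particle in a 0.15 T field.
r = gyroradius_m(10.0, 1.0, 0.15)
print(f"bending radius ≈ {r:.0f} m")
```

The bending radius is far larger than the instrument itself, which is why the tracker measures a tiny sagitta over the particle's path rather than a full arc.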
The complicated space experiment was 16 years in the making, but despite its lofty mission, the 7-ton AMS almost never flew. In fact, NASA canceled the space shuttle mission originally slated to launch AMS to the space station in 2005. At the time, the space agency cited safety concerns following the 2003 space shuttle Columbia accident – an event that led directly to the space shuttle fleet's retirement in 2011. But NASA's decision to cancel the AMS mission did not sit well with the science community. Scientists launched a persistent campaign to resurrect the AMS launch, including an intense lobbying effort to sway lawmakers in Congress to their side. The fight back was a success. Congress approved funding for an extra space shuttle mission that would launch the AMS experiment to the space station. That mission, NASA's STS-134 flight aboard Endeavour, launched into space in May 2011. "I never had any doubt when they were going to fly. I think it was three days after the inauguration of President Obama, we were on the manifest," Ting told SPACE.com in 2011, just before the experiment arrived at the station. "We didn't change the mission, we just continued." During the fight to revive the AMS experiment, NASA and its International Space Station partners also approved a plan to extend the orbiting laboratory's operations in space through 2020. That decision prompted Ting and his science team to make a last-minute change to the AMS instrument. The team swapped out the spectrometer's original magnet, which would last only a few years, for a longer-lasting permanent magnet to allow for longer science observations. The Alpha Magnetic Spectrometer was first attached to the International Space Station on May 16, 2011. Three days later, the instrument was activated for the first time and has been performing science observations ever since. The instrument is managed by NASA's Johnson Space Center in Houston, which is home to the space station's Mission Control. 
Visit SPACE.com today for complete coverage of NASA's Alpha Magnetic Spectrometer announcements. Email Tariq Malik at [email protected] or follow him @tariqjmalik and Google+. Original article on SPACE.com.
https://www.space.com/20488-nasa-astrophysics-discovery-ams.html
Report by: Strategic Youth Network for Development (SYND). The need to address gender inequality and social exclusion in all socio-economic development agendas has gained strong attention globally. Governments have made legally binding commitments to ensuring gender responsiveness in all decision-making processes. Building on these commitments, the 2030 Agenda, including its Sustainable Development Goals (SDGs), recognizes the interlinkages between gender equality and the economic, social, and environmental dimensions of sustainable development and calls for integrated solutions with the aim of “Leaving No One Behind”. In recent times, attention to natural resource and environmental management issues has increased owing to their interrelatedness and cross-cutting nature as well as their inevitable impact on development agendas. They include Climate Change; Biological Diversity (Biodiversity); Water, Sanitation and Hygiene (WASH); Forestry; Fisheries; Oil and Gas; Food Security; Land Degradation/Desertification; Renewable Energy; International Waters; Mining and Wildlife. To this end, in a quest to demonstrate good governance in the management of the natural resources and environment sector in particular, the Government of Ghana launched the Natural Resource and Environmental Governance (NREG) program from 2008 to 2012 with support from the Agence Française de Développement (AFD), the Department for International Development (DfID) of the United Kingdom, the European Commission (EC), the Royal Netherlands Government (RNG) and the International Development Association (IDA – WB) to address governance issues related to the mining and forestry sectors and to improve environmental management, with the overall objective of ensuring economic growth, poverty alleviation, increasing revenues and improving environmental protection.
Increasingly, development partners are acknowledging the need for extensive consultation involving local communities, indigenous peoples, civil society organizations, the private sector, women, men, girls, boys and vulnerable groups, including persons with disabilities, in addressing environmental challenges. Yet in the planning, implementation and evaluation of interventions, the inclusion of young people as relevant actors has been poorly executed. Even where issues of social inclusion are specifically discussed, little attention is paid to consciously empowering young people to participate. Though some efforts have been made, there is still a lot more to be done to enhance young people's access to public participation and decision-making.
https://access-coalition.org/youth-inclusion-in-the-governance-of-ghanas-natural-resources-and-environmental-nre-sector/
They were the little cotton sprouts that could: a handful of seedlings that poked themselves up from the dirt inside a small biosphere on China's lunar lander, Chang'e-4. Yes, the plants were stunted compared with the earthbound control plants. But they had just survived a space launch and difficult journey to the moon, and were growing in the low gravity and high radiation of extraterrestrial space. They were the first plants ever to grow on the lunar surface. None of the other species that made the trip with them showed any similar signs of life. Now they're dead. And it's all the moon's fault. During a news conference today (Jan. 16), project leader Liu Hanlong explained the plants' deaths in their little, faraway can, the Hong Kong publication GB Times reported. As night fell on the region of the far side of the moon where Chang'e-4 sits, temperatures plunged in the 5.7-lb. (2.6 kilograms) mini biosphere. Liu reportedly said that the temperature inside the chamber had fallen to minus 62 degrees Fahrenheit (minus 52 degrees Celsius), and could continue to plunge to minus 292 degrees F (minus 180 degrees C). The experiment is effectively over, as the lander has no onboard mechanism for keeping the experiment warm without sunlight. So what, precisely, would have happened to the extraterrestrial growth as temperatures plunged? Some plants are better at dealing with cold than others, as the Food and Agriculture Organization of the United Nations (FAO) explained in a post. As days shorten and temperatures drop, the plants flood their cells with sugar and other chemicals to lower the freezing point of the water inside. This process is important because it keeps intracellular water from turning to ice crystals that expand and shred cells from the inside. Other plants toughen their cell membranes, and in extreme environments, some survive freezes by dehydrating themselves, literally pumping water out of their cells.
However, according to the FAO, all of these "hardening" techniques require that, for several days, the environment send signals that winter is coming. This is why sudden frosts can kill even cold-weather plants on Earth. And cotton, native to warm regions on Earth, is not particularly well adapted to the cold in the first place. The lunar nighttime chill would have been nothing like the gradual seasonal shift to which plants are adapted. During the two-week daylight period, temperatures on the lunar surface can be as high as 212 degrees F (100 degrees C). But when night falls, they can rapidly plunge to minus 279 degrees F (minus 173 degrees C). So the cold shock to the cotton was likely brutal and sudden. Water in newly formed cells would have turned quickly to ice, flaying them open from the inside. Any buds and leaves would have gone first, according to research published in 2001 in the journal Annals of Botany. A close look at them under microscopes would reveal cell membranes wrinkled and folded on themselves like burst water balloons. The hardier stems would have frozen shortly afterward. At the same time as the cells froze, that study found, water between the cells would have frozen as well. That process would have sucked more water out of the cells before it could freeze, killing the cotton by dehydration as much as physical destruction. Though no earthly plant is known to survive at temperatures colder than even the middle of Antarctica, the cotton likely wouldn't have put up a fight against its death, since there were no autumnal light shifts to signal the temperature change. The end of those cotton sprouts was probably nasty, then. But at least it was quick. We salute the botanical explorers, now frozen in their lunar graves. Originally published on Live Science.
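The "hardening" mechanism the FAO describes, flooding cells with sugars to lower the freezing point, is a case of colligative freezing-point depression, ΔT = Kf · m. The concentrations below are illustrative assumptions, not measurements from the Chang'e-4 biosphere; a rough sketch of the scale involved:

```python
# Freezing-point depression of water by a dissolved solute:
#   delta_T = Kf * m, with Kf(water) = 1.86 K*kg/mol and m the molality.

KF_WATER = 1.86  # cryoscopic constant of water, K*kg/mol

def freezing_point_c(molality_mol_per_kg: float) -> float:
    """New freezing point (deg C) of a dilute aqueous solution."""
    return 0.0 - KF_WATER * molality_mol_per_kg

# Illustrative: ~1 mol/kg of dissolved sugars lowers the freezing point by
# only about 1.9 deg C, useful against a light frost, hopeless against a
# -52 deg C lunar night.
print(freezing_point_c(1.0))
print(freezing_point_c(3.0))
```

Even an implausibly syrupy 3 mol/kg solution only buys a few degrees, which is why no amount of biochemical hardening could have saved the sprouts.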
https://www.space.com/43033-how-china-moon-plant-died.html?
WP29 GDPR Guidelines: The Right to Data Portability

Article 29 Working Party provides new guidelines with respect to data portability under the GDPR. The Article 29 Working Party (WP29), an advisory body made up of representatives of national data protection authorities from across European Union (EU) Member States, has published three sets of guidelines on how the incoming General Data Protection Regulation (GDPR) will work in practice (Guidelines). The Guidelines focus on three separate issues: data protection officers; data portability; and lead supervisory authorities. These three issues will all place new obligations on some organisations under the GDPR, which will come into force in May 2018. You can see our previous blog posts on GDPR developments here. This blog post will focus on the Guidelines on data portability. The GDPR will introduce a new right to data portability that will enable individuals to receive information which they have provided to a data controller in a structured, commonly used and machine-readable format, in order to transmit the data to another service provider, usually for no fee and without undue delay.

Data portability and the existing right to access

The new right is intended to give individuals more control over the processing of their data and the opportunity to move to new service providers more freely, thereby encouraging competition. Subject access requests will continue to exist under the GDPR and it is hoped that the right to data portability will complement the existing access rights.

Scope of the data to be provided

The new right applies both to data knowingly provided by individuals and to personal data generated by an individual’s activity.
However, data inferred or derived by the data controller on the basis of the personal data provided by the individual would not fall within the scope. This may be a grey area for some organisations which might find it difficult to determine exactly how the personal data they process is generated. One of the more complex aspects of the right to data portability is providing data which adversely affects the rights and freedoms of a third party. This data should not be provided, unless the receiving data controller is pursuing a legitimate interest in relation to the data. Data controllers will also have to consider applicable intellectual property rights, for example database rights, and decide to what extent these rights may restrict the provision of data.

How should the data be provided?

The Guidelines state that data controllers should provide a range of tools for individuals to receive their data, including a direct download option and an option to automatically transmit data to another data controller. WP29 recommends and encourages industry stakeholders to work together to develop a common set of standards and formats for delivering information related to a data portability request to simplify the process for the individual. It is suggested in the Guidelines that the data is provided by an application programming interface (API) which enables users to access their data via an application or web service to which other service providers can link their systems, to enable a data controller to automatically pass personal data to the individual’s chosen new service provider. The Guidelines set minimum standards that organisations must comply with when delivering data, including:

- to provide for a high level of abstraction to allow the data controller to remove information which is outside the scope of portability, e.g. passwords;
- to provide as much metadata as possible in order to preserve the precise meaning of the exchanged information; and
- to securely deliver information to the correct individual and ensure that the information is transmitted and stored as securely as possible.

WAB Comment

Even with the help of the Guidelines, the new right to data portability remains complex in a number of respects, including third party data and the scope of data that must be provided. Organisations which expect to receive numerous data portability requests will be keen to receive further clarification from the Information Commissioner’s Office (ICO) as to the extent of their obligations under the GDPR. Unfortunately, however, data portability is not currently on the ICO’s list of upcoming GDPR guidance (see below). It remains to be seen if national data protection authorities will favour an industry-led approach to data portability compliance and allow industry stakeholders to develop common practices within their own sector. You can access the data portability Guidelines here and the frequently asked questions here. This blog post was written by Amelia Day, trainee solicitor at White & Black. Disclaimer: This article is produced for and on behalf of White & Black Limited, which is a limited liability company registered in England and Wales with registered number 06436665. It is authorised and regulated by the Solicitors Regulation Authority. The contents of this article should be viewed as opinion and general guidance, and should not be treated as legal advice.
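The minimum standards described above (a machine-readable structure, accompanying metadata, and stripping out-of-scope fields such as passwords) can be sketched as a simple export routine. The field names, format label and exclusion list below are hypothetical illustrations, not a format mandated by WP29 or the GDPR:

```python
import json

# Hypothetical list of fields outside the scope of portability: credentials
# and data *derived* by the controller are excluded; data the individual
# provided, or generated through their activity, is included.
OUT_OF_SCOPE = {"password_hash", "internal_risk_score"}

def export_portable_data(record: dict) -> str:
    """Return a structured, machine-readable export of one user's record."""
    portable = {k: v for k, v in record.items() if k not in OUT_OF_SCOPE}
    # Metadata helps preserve the precise meaning of the exchanged information.
    envelope = {
        "format": "example-portability/1.0",  # assumed label, not a standard
        "metadata": {"field_count": len(portable)},
        "data": portable,
    }
    return json.dumps(envelope, indent=2)

record = {
    "email": "user@example.com",
    "search_history": ["boots", "tents"],
    "password_hash": "x",          # excluded: outside portability scope
    "internal_risk_score": 0.7,    # excluded: derived by the controller
}
print(export_portable_data(record))
```

A real implementation would also need to authenticate the requester and transmit the file securely, per the third minimum standard, and would typically expose the same envelope over an API for controller-to-controller transfers.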
https://www.wablegal.com/wp29-gdpr-guidelines-right-data-portability/
The philosophy of Ullens Kindergarten has its basis in constructivism, a theory that emphasizes the child’s active role in constructing knowledge through exploration and play. By reflecting on our experiences, we construct knowledge and understanding of the world. In a constructivist approach, teachers focus on making connections between facts and fostering new understanding as the children explore and play. Teachers tailor their teaching strategies to student responses and encourage students to analyze, interpret, and predict information. Teachers also rely heavily on open-ended questions and promote extensive dialogue among students.

- Learning is child-centered: new learning is connected to and built upon students' prior knowledge.
- Learning is active: students and teachers are engaged with real-life experiences and hands-on activities that help them construct their own understanding.
- Learning engages children in critical and creative thinking: students deepen their understanding through scientific inquiry, problem solving, making connections, drawing logical conclusions, and articulating their own ideas.
- Learning is a creative and dynamic process: content reaches across multiple disciplines, giving children authentic and relevant opportunities to engage with the world around them.

Drawing inspiration from best practices worldwide, Ullens Kindergarten offers an emergent curriculum of both planned and spontaneous activities, encouraging independence while presenting limits that arise from being part of a group. Teachers provide a variety of choices for play designed to encourage physical skills, cognitive learning, problem solving and independent thought.
http://ullenseducation.edu.np/kindergarten/pages/mission--philosophy
Astronomers watched as gas approached 30 percent the speed of light during the most detailed observation ever made of material orbiting so close to a black hole. There is something huge lurking at the center of the Milky Way. For years, astronomers have presumed that this object, known as Sagittarius A*, was a supermassive black hole, the kind of object known to sit at the center of most spiral and elliptical galaxies. Now they have strong confirmation that this assumption was correct. An international team of astronomers used a special instrument on the European Southern Observatory’s Very Large Telescope to observe infrared flares coming from the gas ring orbiting the black hole at the center of our galaxy. It’s the most detailed observation ever made of material orbiting so close to a black hole. These flares are produced by matter orbiting extremely close to the black hole’s event horizon, the point at which no matter or light can escape the black hole’s gravity. Just beyond the event horizon is the black hole’s accretion disc, a belt of gas that is rapidly orbiting the black hole. Only about 1 percent of this material is estimated to cross the event horizon and get pulled into the black hole. Most of the material orbiting Sagittarius A* is ejected and produces the flares seen by the ESO astronomers. GRAVITY is an instrument added to the Very Large Telescope in 2015, which is able to look at galactic centers in unprecedented detail. The instrument is an interferometer, which combines the light measurements from four different telescopes to achieve image resolutions far higher than is possible with a single telescope. GRAVITY also has a novel stabilization mechanism that allows the long exposures that reveal the faint objects around our galactic center. In the case of Sagittarius A*, a black hole with a mass 4 million times greater than the sun’s, the 26,000 light-years between its center and Earth are filled with gas that makes observations difficult.
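The resolution gain from combining the light of four telescopes follows the usual interferometry rule of thumb: angular resolution scales as wavelength over baseline (θ ≈ λ/B) rather than wavelength over a single mirror's diameter. The numbers below are illustrative (an 8 m class mirror versus a roughly 100 m class baseline), not GRAVITY's published specifications:

```python
import math

def resolution_mas(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution theta ~ lambda/D, in milliarcseconds."""
    theta_rad = wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0

LAMBDA_K_BAND = 2.2e-6  # infrared K band, m

single = resolution_mas(LAMBDA_K_BAND, 8.0)      # one 8 m unit telescope
combined = resolution_mas(LAMBDA_K_BAND, 100.0)  # ~100 m baseline (assumed)
print(f"single dish ≈ {single:.1f} mas, interferometer ≈ {combined:.2f} mas")
```

Stretching the effective aperture from 8 m to a ~100 m baseline improves the resolution by the same factor of ~12, which is what lets the instrument resolve motion within a few milliarcseconds of the galactic center.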
GRAVITY’s extreme sensitivity, however, allows astronomers to watch as stars, gas, and planets orbit the black hole. If astronomers were using a stethoscope to observe the Milky Way’s heart before, GRAVITY is like opening the patient up for surgery. Earlier this year, GRAVITY allowed researchers to confirm a critical feature of Einstein’s theory of relativity when a star passed within 12 billion miles of the black hole and was accelerated to approximately 3 percent the speed of light. During those observations, researchers observed the infrared flares that were used to confirm the existence of a supermassive black hole in the center of the Milky Way.
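As a rough consistency check, the quoted 30 percent of light speed and the 4-million-solar-mass estimate can be combined via the Newtonian circular-orbit speed v = √(GM/r). This is only an approximation so close to the event horizon, where general relativity matters, but it places the flaring material at a plausible few Schwarzschild radii:

```python
G = 6.674e-11        # gravitational constant, SI units
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s

M = 4.0e6 * M_SUN               # Sagittarius A* mass estimate
r_s = 2.0 * G * M / C**2        # Schwarzschild radius of the black hole

# Radius at which a Newtonian circular orbit reaches 0.3 c: r = GM / v^2.
v = 0.3 * C
r = G * M / v**2
print(f"flare orbit ≈ {r / r_s:.1f} Schwarzschild radii from the center")
```

The answer comes out at a handful of Schwarzschild radii, consistent with flares produced just outside the event horizon.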
https://astronaut.com/astronomers-find-strong-evidence-there-is-a-supermassive-black-hole-at-the-center-of-our-galaxy/
Governments across Europe should keep clean energy top of mind as they consider measures to protect their economies against a likely recession caused by the coronavirus, the International Energy Agency (IEA) has said. The European Commission is paying close attention to the economic impact of the coronavirus, saying border shutdowns risk disrupting entire product value chains – ranging from automotive to agriculture and food. This applies to wind, solar and energy-saving technologies as well. “On renewables, this is definitely one dimension in a very complex situation that we are facing at the moment,” EU Commission spokesperson Eric Mamer told a regular press briefing on Monday (16 March). But he said it was too early to make recommendations to EU member states at this stage about potential measures to prop up investments in clean technologies. “We will have to see as events unfold what analysis we can make of the impact on investments in various areas and how we can react,” Mamer said. Renewables and energy savings are cornerstones of the EU’s climate change strategy and a central element of the bloc’s Green Deal agenda of reaching climate neutrality by 2050. Over the weekend, the IEA warned that falling oil prices risked delaying investments in clean tech such as wind and solar. “The sharp decline in the oil market may well undermine clean energy transitions by reducing the impetus for energy efficiency policies,” the IEA’s executive director Fatih Birol said in a blog post published over the weekend (14 March). As governments draw up stimulus plans to counter the economic damage from the coronavirus, they should ensure that clean energy investment “doesn’t get lost amid the flurry of immediate priorities,” Birol said.
“Governments can use the current situation to step up their climate ambitions and launch sustainable stimulus packages focused on clean energy technologies,” he argued, saying solar, wind, hydrogen, batteries and carbon capture (CCUS) “should be a central part of governments’ plans because it will bring the twin benefits of stimulating economies and accelerating clean energy transitions”. The IEA’s warning isn’t isolated. On Thursday (12 March), BloombergNEF published a report on the likely effects of COVID-19 on renewable power, energy storage, electric vehicles, heating, cooling and the circular economy. “We are currently more concerned about demand, as policymakers may divert attention away from clean energy to more pressing concerns,” said BloombergNEF in the introduction to the study.

2020 could be the first down year for solar since the 1980s

The solar PV sector is expected to be hardest hit by the economic slowdown caused by the coronavirus, as solar PV modules are chiefly manufactured in China, which has temporarily shut down factories to contain the spread of the infection. BNEF cut its global solar demand forecast for 2020 by 8% – from 121-152GW to 108-143GW, saying “this could make 2020 the first down year for solar capacity addition since at least the 1980s”. There was also a potential silver lining to the crisis, BloombergNEF pointed out, saying the short-term interruption of production in China has highlighted the need for diversified supply chains and strengthened the case for localised manufacturing in places like Europe and the US. However, it said this would be the case “especially for batteries”, not for solar PV. “I’m not bullish about new solar manufacturing here, simply because Asia is so far ahead,” said Jenny Chase, head of solar analysis at BloombergNEF. Of course, she said, there are a few solar PV manufacturers left in Europe, the biggest being Recom, headquartered in France. “But they are the exception” – not the rule, Chase told EURACTIV.
Besides, Chase said solar PV manufacturing “is a horrible business to be in,” with low margins, and factories becoming out of date before firms pay off their debt on them. “Battery manufacturing in Europe is more promising, partly because batteries are harder to ship than solar panels, which are basically inert,” she said. Another exception is wind turbine equipment. “We see some downside risk to our global wind forecast of 75.4GW, but thus far we still expect 2020 to be a record year for wind build,” BloombergNEF said. SolarPower Europe, an industry association, said it expects big projects to go ahead “mostly as planned”. But things could be different in the smaller-scale market. “If consumers spend less money in general, it could potentially have an impact on the solar sector, in particular rooftop solar,” said Walburga Hemetsberger, CEO of SolarPower Europe. “In such a scenario, we may need to consider asking for measures to stimulate solar technology and its associated jobs that are the backbone of the energy transition,” she told EURACTIV in emailed comments. On the manufacturing side, Hemetsberger pointed to news from China suggesting that most solar PV factories were producing at a high output level again. According to her, the disruption caused by the coronavirus “has highlighted the need to have local manufacturing facilities along the value chain in Europe”. This, she said, would “bolster security of supply” in Europe at a time when solar is projected to become one of the main future power generation sources on the continent.
https://www.euractiv.com/section/energy/news/europe-warned-about-virus-impact-on-clean-tech/
Wnt2b inhibits differentiation of retinal progenitor cells in the absence of Notch activity by downregulating the expression of proneural genes. During the development of the central nervous system, cell proliferation and differentiation are precisely regulated. In the vertebrate eye, progenitor cells located in the marginal-most region of the neural retina continue to proliferate for a much longer period compared to the ones in the central retina, thus showing stem-cell-like properties. Wnt2b is expressed in the anterior rim of the optic vesicles, and has been shown to control differentiation of the progenitor cells in the marginal retina. In this paper, we show that stable overexpression of Wnt2b in retinal explants inhibited cellular differentiation and induced continuous growth of the tissue. Notably, Wnt2b maintained the undifferentiated progenitor cells in the explants even under the conditions where Notch signaling was blocked. Wnt2b downregulated the expression of multiple proneural bHLH genes as well as Notch. In addition, expression of Cath5 under the control of an exogenous promoter suppressed the negative effect of Wnt2b on neuronal differentiation. Importantly, Wnt2b inhibited neuronal differentiation independently of cell cycle progression. We propose that Wnt2b maintains the naive state of marginal progenitor cells by attenuating the expression of both proneural and neurogenic genes, thus preventing those cells from launching out into the differentiation cascade regulated by proneural genes and Notch.
Security in the cloud is a top concern for the modern enterprise. Fortunately, provided that organizations do their due diligence when evaluating security tools, storing data in the cloud can be even more secure than storing data on premises. However, this does require deploying a variety of solutions for securing data at rest, securing data at access, securing mobile and unmanaged devices, defending against malware, detecting unsanctioned cloud apps (shadow IT), and more. Amidst this rampant adoption of security tools, organizations often forget to bolster the weakest link in their security chain: their users. In ASEAN countries, enterprises’ rapidly expanding cloud footprints make them a prime target for cyberattacks. While Malaysia is ranked third globally in commitment to addressing cybersecurity issues, it is also ranked sixth in the region and thirty-third globally in vulnerability to cyberattacks. Unfortunately, the country’s current circumstances do not match its admirable intentions. Nevertheless, countries like Malaysia are striving to enhance their cybersecurity efforts. A report from AT Kearney states that ASEAN countries spend 0.06% of their combined GDP (or 1.9 billion USD) on cybersecurity on average. In 2017, Malaysia invested 0.08%, double the 0.04% of its neighbors in the region. Additionally, while Malaysia currently employs 6,000 cybersecurity professionals, the nation is seeking to reach 10,000 by 2020. According to another survey, 96% of Malaysian enterprises are only in the early stages of security preparedness. While these companies recognise the importance of cybersecurity, most have only deployed basic tools like firewalls and antivirus protections for on-premises and managed devices. Nearly half lack security intelligence and event management systems for monitoring and responding to various threats.
Finally, despite the fact that the weakest link in enterprise security is the non-IT employee, only 31% of Malaysian companies want their workers to take part in IT security training. Cybercriminals are constantly growing in sophistication; they leverage an ever-growing number of advanced strategies and tools in order to steal data. As such, it is critical for enterprises to employ proactive cybersecurity that prevents breaches from happening in the first place. While great steps are typically taken to secure data, relatively little thought is given to the behaviors of the users who handle it. This is likely due to an ingrained reliance upon static security tools that fail to adapt to situations in real time. Regardless, users make numerous decisions that place data at risk – some less obvious than others. In the search for total data protection, this dynamic human element cannot be ignored. External sharing is one example of a risky user behavior. Organizations need visibility and control over where their data goes in order to keep it safe. When users send files and information outside of the company, protecting it becomes very challenging. While employees may do this either maliciously or just carelessly, the result is the same – data is exposed to unauthorized parties. Somewhat similarly, this can occur through shadow IT when users store company data in unsanctioned cloud applications over which the enterprise has no visibility or control. Next, many employees use unsecured public WiFi networks to perform their work remotely. While this may seem like a convenient method of accessing employers’ cloud applications, it is actually incredibly dangerous for the enterprise. A malicious party can monitor traffic on these networks in order to steal users’ credentials. The fact that many people reuse passwords across multiple personal and corporate accounts only serves to exacerbate the problem. Users place data at risk through a variety of other ill-advised behaviors, as well.
Unfortunately, traditional, static security solutions have a difficult time adapting to users’ actions and offering appropriate protections in real time. In the modern cloud, automated security solutions are a must. Reactive tools that rely upon humans to analyze threats and initiate a response are incapable of protecting data in real time. The only way to ensure true automation is by using machine learning. When tools are powered by machine learning, they can protect data in a comprehensive fashion in the rapidly evolving, cloud-first world. This next-gen approach can be particularly helpful when addressing threats that stem from compromised credentials and malicious or careless employees. User and entity behavior analytics (UEBA) baseline users’ behaviors and perform real-time analyses to detect suspicious activities. Whether credentials are used by thieving outsiders or employees engaging in illicit behaviors, UEBA can detect threats and respond by enforcing step-up, multi-factor authentication before allowing data access. Machine learning is helpful for defending against other threats, as well. For example, advanced anti-malware solutions can leverage machine learning to analyze the behaviors of files. In this way, they can detect and block unknown, zero-day malware; something beyond the scope of traditional, signature-based solutions that can only check for documented, known malware. Even less conventional tools like shadow IT discovery are beginning to be endowed with machine learning. Historically, these solutions have relied upon lists generated by massive human teams that constantly categorize and evaluate the risks of new cloud applications. However, this approach fails to keep pace with the perpetually growing number of new and updated apps. Because of this, leading cloud access security brokers (CASBs) are using machine learning to rank and categorize new applications automatically, enabling immediate detection of new cloud apps in use. 
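The baselining idea behind UEBA can be illustrated with a toy sketch (my own illustration under simplified assumptions, not any vendor's implementation): record a user's habitual login hours, then flag events that deviate sharply from that baseline so the system can demand step-up, multi-factor authentication before granting access.

```python
import statistics

def build_baseline(login_hours):
    """Baseline a user's typical login hour from historical events."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_suspicious(hour, baseline, threshold=3.0):
    """Flag an event whose z-score against the baseline exceeds the
    threshold; a real UEBA product would then step up authentication."""
    mean, std = baseline
    z = abs(hour - mean) / std if std > 0 else 0.0
    return z > threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]   # habitual office-hours logins
baseline = build_baseline(history)
is_suspicious(9, baseline)   # normal working-hours login -> False
is_suspicious(3, baseline)   # 3 a.m. login -> True
```

Production systems model many more signals (location, device, data volume) and learn the thresholds rather than hard-coding them, but the baseline-then-score pattern is the same.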
In other words, organizations can uncover all of the locations that careless and conniving employees store corporate data. To reduce the likelihood of data leakage and cyberattacks, organisations must identify everything that they need to protect, as well as the strategies that they can implement to do so. While training employees in best security practices is necessary, it is not sufficient for defending data in our high-speed business world. Education must be paired with context-aware, automated security solutions (like CASBs) in order to reinforce the weak links in the enterprise’s security chain.
http://www.enterpriseitnews.com.my/reinforcing-the-security-chain/
*NEW* Lead Audio Programmer - Edinburgh
Immersive ambient soundscapes and satisfyingly interactive sound design are essential in bringing our expansive open world to life, and audio programming is critical to achieve that. Working closely with both sound designers and the code team, you will help design and implement audio systems, integrate and manage our audio middleware, and be the key link between sound designers and the inner workings of the game. A large part of the role will involve high-level game code, working across the entire codebase, but there is also scope for writing custom DSP effects. Along with fulfilling a technical role, you will be a core member of the audio team with huge scope to shape the overall sound and feel of the game. You will be joining the studio at an exciting time, as we scale up to develop an ambitious and ground-breaking open-world game. The role brings with it great scope for career growth and the satisfaction of making a significant personal contribution to a high-profile project. We believe in iterative development, and you will need to be comfortable working in a fluid environment with competing demands. We also believe the best games are made by diverse teams, and we welcome and encourage applicants of all backgrounds.
**Responsibilities:**
- Design audio systems that meet technical and production requirements.
- Work with sound designers and other programmers to integrate SFX, dialogue and music into the game.
- Integrate and manage our audio middleware.
- Create custom audio effects.
**Requirements:**
- Strong C++ skills.
- Expert knowledge of a modern audio middleware solution such as Wwise.
- Expert understanding of the technical limitations of videogame audio, for example memory budgets and channel counts.
- Strong communication skills, and comfortable working across multiple departments.
- 3 years minimum industry experience, including having shipped at least one AAA PC or console title.
- Experience creating DSP effects is desirable but not essential.
- Experience with open world game environments, procedural audio and acoustic simulation.
http://www.everywhere.game/work/lead-audio-programmer/
In these days and nights around midsummer in the northern hemisphere, many of us celebrate in various ways. One of them is music – World Music Day! I am glad that in the town where I live, music is loved and often performed on the streets, which was certainly the case on the summer solstice. One of the groups that performed was a choir of women and one man singing famous international revolutionary songs of freedom and resistance. It was good fun, but I also got the chills because of the determination to get freedom at any cost expressed in their singing. How important was the role of various genders in the past in relation to freedom? And what is happening today? Are we freer, or has the pressure just become more subtle? It is not difficult to relate freedom and health, the right to make your own decisions as to how you organize your life, partnerships, families and so on… Such decisions affect employment, good mental and physical health as well as levels of wellbeing. These relations also have a bearing on people’s access to and uptake of health services and on the health outcomes they experience throughout the course of life. Demands for gender equality for girls and women, and for gender norms that promote health and wellbeing for all, including gender minorities, are now highly visible. Grassroots movements, fuelled and democratised by social media, have heightened the prominence of these issues globally. Examples include ending sexual harassment in the workplace (#MeToo, #TimesUp); shining a spotlight on violence against women (#Nirbhaya in India and #NiUnaMenos in South America) and gender-related pay gaps (#EqualPay); advocating against toxic masculinities that underlie male violence (#MenEngage); and promoting lesbian, gay, bisexual, transgender, and queer (LGBTQ) justice (#hrc, #WhereLoveIsIllegal). Simultaneously, a backlash is growing against this progressive agenda.
Conservative voices continue to use arguments, often couched in cultural, economic or religious terms, to justify discrimination against women and gender minorities, while upholding the traditional foundations of male privilege. Due to the historical legacy of gender-based injustice, the health consequences of gender inequality fall most heavily on women, especially poor women, but restrictive gender norms undermine the health and wellbeing of all: women, men, and gender minorities. Overall, evidence suggests that greater gender equality has a mostly positive effect on the health of males and females. We can conclude that, unless there is encouragement and support for men to assume more non-traditional roles, further health gains will be hindered.
https://wellbe-ims.com/lifestyle/gender-equality-creates-better-health-for-all/
Ancient Greek physician Hippocrates advises a woman and child while other patients wait nearby. Artwork by H.M. Herget.
Greek leaders and thinkers were influential in their own time, but some of their ideas and work stand the test of time and still have an impact on modern life. When you read the word “ancient,” you likely think of something old and outdated. But you may be surprised to hear that many of the ideas and institutions that came from ancient Greece still exist today. We have the ancient Greeks to thank for things like present-day democracy, libraries, the modern alphabet, and even zoology. Here are some notable Greek figures—from philosophers to mathematicians and scientists—and how they have shaped the world we know today. Socrates was one of the most prominent ancient Greek philosophers. Socrates spent the majority of his life asking questions, always in search of the truth. He is responsible for developing what is known as the Socratic method, a technique still used by professors in law schools today. Instead of lecturing the students, professors will ask them a series of thought-provoking questions. These questions help the students think critically, and they are meant to elicit underlying presumptions and ideas that could be influencing the way a student views a case. Socrates engaged his students in this same fashion. He did not leave any written record of his life or ideas, so most of what we know about Socrates was written by one of his students, Plato.
Plato
Thanks to Plato, we know a lot about Socrates. Nevertheless, Plato made his own important contributions. Born around 427 B.C.E., Plato influenced Western philosophy by developing several of its many branches: epistemology, metaphysics, ethics, and aesthetics. Plato was also a prominent writer. One of his most famous writings is the Republic.
In the Republic, Plato examines justice, its role in our world, and its relationship to happiness, themes familiar to the founding fathers of the United States. Plato is also famous for being the teacher of another important philosopher, Aristotle.
Aristotle
Aristotle is still considered one of the greatest thinkers in the areas of politics, psychology, and ethics. Like Plato, Aristotle was a prolific writer. He wrote an estimated 200 works during his lifetime; 31 of them are still admired and studied today. Aristotle thought a lot about the meaning of life and about living a moral life. Immensely curious, he also studied animals and sought to classify them into different groups, laying the foundation for zoology today. Through his writing about the soul and its properties, Aristotle laid the foundation for modern psychology. He was also called on to tutor King Philip II of Macedon’s son, Alexander, who would later come to be known as Alexander “the Great.” While the great philosophers are well known, there were many other great Greek political and military leaders who had an impact on the world. Born to notable military leader King Philip II, Alexander III of Macedon proved early on that he was destined for greatness. At a young age, Alexander learned to fight and ride, famously taming the wild horse Bucephalus at age 12. Only a few years later, at age 18, Alexander got his first chance to fight in a war and helped defeat the Sacred Band of Thebes during the Battle of Chaeronea. Soon he took over the throne his father once held and continued to prove himself a strong and able military mind. Alexander eventually created an empire stretching from Macedon across the entire Middle East to the frontiers of India. By 323 B.C.E., Alexander ruled over an enormous amount of land, a feat that caused historians to give him the nickname Alexander “the Great.”
Pericles
At the other end of ancient Greece was another strong leader working to grow the city of Athens.
His name was Pericles. Pericles was born over 100 years before Alexander the Great, but he had a similar background. He came from a prominent family in Athens and had a war hero for a father. Pericles did much to help the culture of Athens flourish. Consistently surrounded by the arts, one of the first things he did was to sponsor the playwright Aeschylus. He also helped fund the building of the Parthenon, a temple dedicated to the goddess Athena that still stands today. Soon Pericles made his way into politics and was eventually elected as one of Athens’ leading generals. Like Alexander, Pericles was military minded and led many successful military campaigns. As a statesman, he contributed in many ways to what is considered the golden age of the city of Athens. These philosophers and the Greek military and political leaders left their mark on both ancient Greece and the present-day Western world, but there were also famous mathematicians and scientists whose work and ideas are still popular today.
Pythagoras
If you’ve ever worked with the side lengths of a right triangle, you’ve likely had to use something called the Pythagorean theorem, which is named after the mathematician Pythagoras. This theorem is one of the biggest contributions that Pythagoras made to mathematics. Pythagoras used numbers and mathematics to seek meaning in life. He even created a religious order in which the members focused on philosophy and math in order to find personal salvation.
Hippocrates
Modern medicine has been heavily influenced by the work of Hippocrates, an ancient Greek physician. The methods attributed to Hippocrates are compiled in 60 medical books known as the Hippocratic corpus. It is from these books that we have learned what was done in Hippocratic medicine. This practice of medicine included adopting a healthy diet and engaging in physical exercise—ideas still espoused to the public today.
The corpus also included information about the importance of recording case histories and treatments, another practice essential to modern medicine. Hippocrates is best known for the wisdom contained in the Hippocratic oath, modern versions of which still govern the ethical principles that new doctors promise to observe when practicing medicine. Though these prominent Greeks lived centuries before us, they have left a brilliant legacy. By building on their hard work and great ideas, we’ve been able to establish the thriving world we live in today.
https://www.nationalgeographic.org/article/lasting-legacy-ancient-greek-leaders-and-philosophers/
A family of RNA viruses, mainly arboviruses, consisting of two genera: ALPHAVIRUS (group A arboviruses), and RUBIVIRUS. Virions are spherical, 60-70 nm in diameter, with a lipoprotein envelope tightly applied to the icosahedral nucleocapsid.
http://www.dictionary.net/togaviridae
Researchers Point to Bats as Source for Measles & Mumps An international team of researchers studying viruses in Germany have discovered that bats act as a natural host for paramyxoviruses, the family of viruses responsible for measles, mumps, pneumonias and colds. Using modelling, the scientists working at the University of Bonn searched for the origin of paramyxoviruses in wild animals. The results of their research pointed to bats as having “the highest likelihood” of being the original hosts. Furthermore, the researchers more than doubled the number of known paramyxoviruses, discovering a further 66 new species in the family. Although the discovery is concerning, it is not yet known whether any of the newly discovered viruses are harmful to humans or even able to be transmitted to them. As the source for paramyxoviruses, bats will play an important role in tracking future outbreaks of disease and planning vaccination campaigns. The research is published in the journal Nature Communications.
https://cavingnews.com/20120427-researchers-point-to-bats-as-source-for-measles-mumps-pneumonias-colds-paramyxovirus-reservoir
The normal or Gauss distribution (after Carl Friedrich Gauss) is an important type of continuous probability distribution in stochastics. Its probability density function is also called the Gaussian function, Gaussian normal distribution, Gaussian distribution curve, Gaussian curve, Gaussian bell curve, Gaussian bell function, Gaussian bell or simply bell curve. The normal or Gauss distribution is defined by the density f(x) = 1/(σ·√(2π)) · exp(−(x − μ)² / (2σ²)). The graph of this density function has a "bell-shaped" form and is symmetrical around parameter μ as centre of symmetry, which also represents the expected value, the median and the mode of the distribution. Using the sliders in the lower part of the graph, the parameters of the Gauss distribution can be varied. The adjustable parameter range can be specified in the numeric fields. The red points on the bell curve can be moved. The integral of the bell curve is calculated for the range between the points. As the total area of the Gauss distribution is normalized to one, the integral corresponds to the area fraction. This means, for example, if the points are set to ±σ, the area is 0.68 or 68% of the total area. μ and σ are the parameters of the normal distribution. μ is the center of the distribution, where the bell curve takes its maximum. The inflection points of the function are located at a distance ±σ from the center of symmetry. For random variables that are normally distributed, the following applies: about 68% of the values lie within ±σ of the mean, about 95% within ±2σ, and about 99.7% within ±3σ. An alternative input is possible with load data from file. The values may be separated by comma, space, or semicolon. The values must be given pairwise x1,y1,x2,y2... Load from file: The curve fitting of the Gaussian distribution to the measured values is done by calculating the weighted average of the measured values. The weighted average corresponds to the μ in the Gaussian distribution. The standard deviation of the measured values from the mean μ is the σ in the normal distribution formula.
The displayed bell curve is the fitted Gaussian distribution multiplied by the area A of the measured values. The area A is calculated by the trapezoidal rule. Print or save the image via right mouse click.
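The fitting procedure described above can be sketched in a few lines (a minimal NumPy illustration of the moment method the page describes, not the tool's actual code): μ is the weighted average of the x values with the measured y values as weights, σ is the corresponding weighted standard deviation, and the scaling area A comes from the trapezoidal rule.

```python
import numpy as np

def gaussian(x, mu, sigma, area=1.0):
    """Normal density scaled so that its integral equals `area`."""
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def fit_gaussian(x, y):
    """Moment fit: weighted mean -> mu, weighted standard deviation -> sigma,
    trapezoidal rule over the samples -> area A."""
    mu = np.sum(x * y) / np.sum(y)
    sigma = np.sqrt(np.sum(y * (x - mu) ** 2) / np.sum(y))
    area = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))  # trapezoidal rule
    return mu, sigma, area

# Synthetic "measured values" drawn from a known bell curve
x = np.linspace(-5, 5, 201)
y = gaussian(x, mu=1.0, sigma=0.8, area=3.0)
mu, sigma, area = fit_gaussian(x, y)
# Recovers mu ~ 1.0, sigma ~ 0.8, area ~ 3.0
```

The displayed curve then corresponds to `gaussian(x, mu, sigma, area)`, the fitted density multiplied by A, exactly as the page states.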
https://elsenaju.eu/Functions/Gaussian-Plotter.htm
After suffering from depression for years, Kelly Martin finally decided enough was enough. He dragged himself to his doctor’s office, and 10 minutes later the Calgary business manager walked away with a prescription for antidepressants. But from there things went from bad to worse: Paxil caused terrible side effects, so after seven weeks Martin stopped taking the drug. Little did he know that quitting abruptly can exacerbate depressive symptoms. “I felt worse than I have ever felt,” Martin tells alive of that emotional crash he endured in 1996. He went on to search for alternatives. He stumbled upon information about St. John’s wort–which at the time wasn’t nearly as well known as it is today–and tracked down a store that carried it. “It started to work very quickly,” says Martin, 37. “After a few weeks, there was a huge difference … I felt normal for the first time in a long time.” New research into St. John’s wort validates Martin’s experience. As Effective as Antidepressants According to a study published in the Cochrane Database of Systematic Reviews in October 2008, St. John’s wort is just as effective as common antidepressants and has fewer side effects. Headed by Klaus Linde, a doctor at the Centre for Complementary Medicine Research in Munich, Germany, the review examined 29 randomized, double-blind studies involving 5,489 patients with mild to moderately severe major depression. The studies all lasted at least four weeks and compared St. John’s wort to tri- and tetracyclic antidepressants as well as selective serotonin reuptake inhibitors (SSRIs). “Overall, the St.
John’s wort extracts tested in the trials were superior to placebo, similarly effective as standard antidepressants, and had fewer side effects than standard antidepressants,” the authors wrote. Researchers also found that people taking St. John’s wort, or Hypericum perforatum, were less likely than patients taking antidepressants to stop treatment because of adverse effects. Comparing Side Effects Side effects of St. John’s wort are “usually minor and uncommon,” the Cochrane review notes. They include dry mouth, dizziness, diarrhea, fatigue, nausea, and sensitivity to sunlight, according to the Bethesda, Maryland-based National Center for Complementary and Alternative Medicine, a division of the US National Institutes of Health. Potential health risks of antidepressants, by contrast, are many. Among the side effects of Paxil (paroxetine hydrochloride) and Prozac (fluoxetine hydrochloride), for instance, are nervousness, anxiety, abnormal ejaculation, abnormal vision, constipation, decreased libido, diarrhea, dizziness, female genital disorders, nausea, sleepiness, sweating, decreased appetite, dry mouth, impotence, tremors, weakness, and infection. Furthermore, all antidepressants carry the increased risk of suicidal thinking and behaviour in children, adolescents, and young adults. Among the signs of suicidality are new or worsening depression, anxiety, or irritability; feelings of agitation, aggression, or restlessness; panic attacks; being angry or violent, or acting on dangerous impulses; and mania, which is marked by an extreme increase in activity and speech. Check with Health Practitioner Linde’s findings on St. John’s wort aren’t without caveats. The authors urge anyone considering taking the herb to consult a health professional first. “Using a St. John’s wort extract might be justified, but important issues should be taken into account,” they wrote.
For one, products vary considerably in terms of potency and quality from brand to brand and even from batch to batch. St. John’s wort is available in capsule and liquid extract forms, as well as in teas. In their review, researchers used preparations ranging in dosages from 500 mg to 1,200 mg. For another, the herb can interact with prescription medications and significantly compromise their effect. Health Canada warned people not to use St. John’s wort with any retroviral drugs after a 2000 study found that it greatly reduced the presence of indinavir, a protease inhibitor used to treat HIV infections, in the bloodstream. At the time Health Canada cautioned that St. John’s wort might also negatively interact with anti-epilepsy drugs, oral contraceptives, and immunosuppressant and anticoagulant medications. People taking St. John’s wort in conjunction with conventional antidepressants have experienced “serotonin syndrome,” which is marked by headaches, tremors, and restlessness. Geographical Differences The Cochrane review found that study findings were more favourable in Germany, Austria, and Switzerland, which all have a long history of using the herb, than in other nations. “This difference could be due to the inclusion of patients with slightly different types of depression, but it cannot be ruled out that some smaller studies from German-speaking countries were flawed and reported overoptimistic results,” the researchers noted. Calgary’s Kelly Martin, meanwhile, had such a positive experience with the herb that in 1997 he started a website to tell the world all about it: sjwinfo.org. He still updates the site regularly and says millions of people have visited. “Part of the healing process is wanting to help people,” Martin says, emphasizing that everyone should do their own research before taking anything for depression. 
“I found something that works for me, and if it helps one person, then it will have all been worth it.” Martin says findings like those from Linde’s research boost his confidence in St. John’s wort. “No drug or herb is perfect,” Martin says. “Everything has some kind of side effect … Without information, a lot of people say, ‘Oh, it’s just that herbal stuff.’ But reputable organizations have done studies that say, ‘Yes, it’s really effective.’” “Studies like this one only support what I and thousands of other people already know.”
Depression: The Stats
About 8 percent of Canadian adults will experience major depression at some time in their lives, according to the Public Health Agency of Canada. Major depression is also the fourth leading cause of disability and premature death in the world. Symptoms include:
- Persistent sad, hopeless, or empty feelings
- Feelings of hopelessness, worthlessness, helplessness, pessimism, or guilt
- Loss of interest in activities
- Difficulty concentrating or making decisions
- Restlessness
- Irritability
- Sleep problems
- Overeating or loss of appetite
With a major depression, symptoms last at least two weeks and prevent people from working, studying, sleeping, or eating.
https://www.jinpei.net/2015/04/24/st-johns-wort/
“Attention” for Detecting Unreliable News in the Information Age
Last modified: 2018-06-20
Abstract
Unreliable news is any piece of information which is false or misleading, deliberately spread to promote political, ideological and financial agendas. Recently the problem of unreliable news has received a lot of attention, as the number of instances of using news and social media outlets for propaganda has increased rapidly. This poses a serious threat to society, which calls for technology to automatically and reliably identify unreliable news sources. This paper is an effort in this direction to build systems for detecting unreliable news articles. In this paper, various NLP algorithms were built and evaluated on the Unreliable News Data 2017 dataset. Variants of hierarchical attention networks (HAN) are presented for encoding and classifying news articles, which achieve the best result of 0.944 AUROC. Finally, attention layer weights are visualized to understand and give insight into the decisions made by HANs. The results obtained are very promising and encouraging for deploying these systems in the real world to mitigate the problem of unreliable news.
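The attention mechanism at the heart of a HAN can be sketched as follows (a generic additive-attention illustration, not the authors' code; the weight matrix `W` and context vector `v` are stand-ins for learned parameters): each encoded word receives a scalar score, the scores are softmax-normalized into weights, and the weighted sum forms the next-level representation. The weights themselves are what such papers visualize to interpret the model's decisions.

```python
import numpy as np

def additive_attention(H, W, v):
    """Generic word-level additive attention: score each hidden state,
    softmax the scores into weights, and return the weighted sum
    (context vector) together with the weights for visualization."""
    scores = np.tanh(H @ W) @ v              # one scalar score per word
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ H                    # weighted sum of hidden states
    return context, weights

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))  # 6 word encodings, hidden size 8
W = rng.normal(size=(8, 8))  # learned projection (random stand-in here)
v = rng.normal(size=8)       # learned context vector (random stand-in)
context, weights = additive_attention(H, W, v)
# `weights` is non-negative and sums to 1; larger entries mark the words
# the model attended to most
```

In a hierarchical network the same operation is applied twice: once over words to build sentence vectors, and once over sentence vectors to build the document representation.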
https://www.aaai.org/ocs/index.php/WS/AAAIW18/paper/viewPaper/17071
--- abstract: 'Let $k,l\geq2$ be fixed integers. In this paper, firstly, we prove that all solutions of the equation $(x+1)^{k}+(x+2)^{k}+...+(lx)^{k}=y^{n}$ in integers $x,y,n$ with $x,y\geq1, n\geq2$ satisfy $n<C_{1}$ where $C_{1}=C_{1}(l,k)$ is an effectively computable constant. Secondly, we prove that all solutions of this equation in integers $x,y,n$ with $x,y\geq1, n\geq2, k\neq3$ and $l\equiv0 \pmod 2$ satisfy $\max\{x,y,n\}<C_{2}$ where $C_{2}$ is an effectively computable constant depending only on $k$ and $l$.' address: 'Department of Mathematics, Uludağ University, 16059 Bursa, Turkey' author: - Gökhan Soydan title: 'On the Diophantine equation $(x+1)^{k}+(x+2)^{k}+...+(lx)^{k}=y^{n}$' --- [^1] Introduction ============ In 1956, J.J. Schäffer [@Sch] considered the equation $$\begin{aligned} \label{eq.1.1} 1^{k}+2^{k}+...+x^{k}=y^{n}.\end{aligned}$$ He proved that for fixed $k\geq1$ and $n\geq2$, (\[eq.1.1\]) has at most finitely many solutions in positive integers $x$ and $y$, unless $$\begin{aligned} (k,n)\in\{(1,2),(3,2),(3,4),(5,2)\},\end{aligned}$$ where, in each case, there are infinitely many such solutions. Schäffer’s proof used an ineffective method due to Thue and Siegel so his result is also ineffective. This means that the proof does not provide any algorithm to find all solutions. Applying Baker’s method, K. Győry, R. Tijdeman and M. Voorhoeve [@GTV] proved a more general and effective result in which the exponent $n$ is also unknown. Let $k\geq2$ and $r$ be fixed integers with $k\notin\{3,5\}$ if $r=0$, and let $s$ be a square-free odd integer. In [@GTV], they proved that the equation $$\begin{aligned} s(1^{k}+2^{k}+...+x^{k})+r=y^{n}\end{aligned}$$ in positive integers $x,y\geq2$, $n\geq2$ has only finitely many solutions and all these can be effectively determined. Of particular importance is the special case when $s=1$ and $r=0$.
They also showed that for given $k\geq2$ with $k\notin\{3,5\}$, equation (\[eq.1.1\]) has only finitely many solutions in integers $x,y\geq1$, $n\geq2$, and all these can be effectively determined. The following striking result is due to Voorhoeve, Győry and Tijdeman [@VGT]. Let $R(x)$ be a fixed polynomial with integer coefficients and let $k\geq2$ be a fixed integer such that $k\notin\{3,5\}$. In [@VGT], the same authors proved that the equation $$\begin{aligned} 1^{k}+2^{k}+...+x^{k}+R(x)=by^{n}\end{aligned}$$ in integers $x,y\geq2$, $n\geq2$ has only finitely many solutions, and an effective upper bound can be given for $n$. Later, various generalizations and analogues of the results of Győry, Tijdeman and Voorhoeve have been established by several authors [@Br1], [@Br2], [@BP1], [@BP2], [@Di], [@Ka], [@Pi], [@Ur]. For a survey of these results we refer to [@GP] and the references given there. Here we present the result of B. Brindza [@Br2]. For brevity let us set $S_{k}(x)=1^{k}+2^{k}+...+x^{k}$, $A=\mathbb{Z}[x]$, $\kappa=(k+1)\displaystyle{\prod_{(p-1)|(k+1)!}}p$ ($p$ prime). Let $$\begin{aligned} F(y)=Q_{m}y^{m}+...+Q_{1}y+Q_{0}\in A[y].\end{aligned}$$ Consider the equation $$\begin{aligned} \label{eq.1.2} F(S_{k}(x))=y^{n}\end{aligned}$$ in integers $x,y\geq2$, $n\geq2$. Let $Q_{i}(x)=\kappa^{i}K_{i}(x)$ where $K_{i}(x)\in \mathbb{Z}[x]$ for $i=2,3,...,m$. In [@Br2], Brindza proved that if $Q_{i}(x)\equiv0\pmod{\kappa^{i}}$, for $i=2,3,...,m$; $Q_{1}(x)\equiv\pm1\pmod{4}$ and $k\notin\{1,2,3,5\}$, then all solutions of (\[eq.1.2\]) satisfy $\max\{x,y,n\}<C_{1}$, where $C_{1}$ is an effectively computable constant depending only on $F$ and $k$. Recently C. Rakaczki [@Ra] gave a generalization of the results of Győry, Tijdeman and Voorhoeve and an extension of the result of Brindza to the case when the polynomials $Q_{i}(x)$ are arbitrary constant polynomials. Let $F(x)$ be a polynomial with rational coefficients and $d\neq0$ be an integer.
Suppose that $F(x)$ is not an $n$-th power. In [@Ra], Rakaczki showed that the equation $$\begin{aligned} F(S_{k}(x))=dy^{n}\end{aligned}$$ has only finitely many integer solutions $x,y\geq2$, $n\geq2$, which can be effectively determined provided that $k\geq6$. Let $k>1$, $r,s\neq0$ be fixed integers. Then apart from the cases when $(i)$ $k=3$ and either $r=0$ or $s+64r=0$, and $(ii)$ $k=5$ and either $r=0$ or $s-324r=0$, Rakaczki proved that the equation $$\begin{aligned} s(1^{k}+2^{k}+...+x^{k})+r=y^{n}\end{aligned}$$ in integers $x>0$, $y$ with $|y|\geq2$, and $n\geq2$ has only finitely many solutions which can be effectively determined. Recently, Z. Zhang [@Zh] studied the Diophantine equation $$\begin{aligned} (x-1)^{k}+x^{k}+(x+1)^{k}=y^{n}, n>1,\end{aligned}$$ and completely solved it for $k=2,3,4$. Now we consider a more general equation. Let $$\begin{aligned} G(x)=(x+1)^{k}+(x+2)^{k}+...+(lx)^{k}.\end{aligned}$$ In this paper, we are interested in the solutions of the equation $$\begin{aligned} \label{eq.1.3} G(x)=y^{n}\end{aligned}$$ in integers $x,y\geq1$ and $n\geq2$. \[theo.1\] Let $k,l\geq2$ be fixed integers. Then all solutions of the equation (\[eq.1.3\]) in integers $x,y\geq1$ and $n\geq2$ satisfy $n<C_{1}$ where $C_{1}$ is an effectively computable constant depending only on $l$ and $k$. \[theo.2\] Let $k,l\geq2$ be fixed integers such that $k\neq3$. Then all solutions of the equation (\[eq.1.3\]) in integers $x,y,n$ with $x,y\geq1$, $n\geq2$, and $l\equiv0 \pmod 2$ satisfy $\max\{x,y,n\}<C_{2}$ where $C_{2}$ is an effectively computable constant depending only on $l$ and $k$. We organize this paper as follows. In Section 2, firstly, we recall the general results that we will need. Secondly, we give two new lemmas and prove that these lemmas imply our theorems. In Section 3, we discuss the number of solutions in integers $x,y\geq 1$ of (\[eq.1.3\]) where $n>1$ is fixed, $k\in\{1,3\}$, and $l\equiv0 \pmod 2$, and reformulate this case.
In the last section, we give the proofs of Theorems \[theo.1\] and \[theo.2\]. Auxiliary Results ================= \[lem.1\] $(x+1)^{k}+(x+2)^{k}+...+(lx)^{k}=\dfrac{B_{k+1}(lx+1)-B_{k+1}(x+1)}{k+1}$ where $$\begin{aligned} B_{q}(x)=x^{q}-\frac{1}{2}qx^{q-1}+\dfrac{1}{6}\binom {q} {2}x^{q-2}+...=\sum\limits_{i=0}^q\binom {q} {i}B_{i}x^{q-i}\end{aligned}$$ is the $q$-th Bernoulli polynomial with $q=k+1$. It is an application of the equality $$\begin{aligned} \sum\limits_{n=M}^{N-1} n^{k}=\frac{1}{k+1} (B_{k+1}(N)-B_{k+1}(M))\end{aligned}$$ which is given by Rademacher in [@Rd], pp.3-4. Now we give an important result of Brindza which is an effective version of Leveque’s theorem [@Le]. \[lem.2\] Let $H(x)\in\mathbb{Q}[x]$, $$\begin{aligned} H(x)=a_{0}x^{N}+...+a_{N}=a_{0}\prod_{i=1}^n (x-\alpha_{i})^{r_{i}},\end{aligned}$$ with $a_{0}\neq0$ and $\alpha_{i}\neq\alpha_{j}$ for $i\neq j$. Let $0\neq b\in\mathbb{Z}$, $2\leq m\in\mathbb{Z}$ and define $t_{i}=\frac{m}{(m,r_{i})}$. Suppose that $\{t_{1},...,t_{n}\}$ is not a permutation of either of the $n$-tuples $(a)$ $\{t,1,...,1\}$, $t\geq1$; $(b)$ $\{2,2,1,...,1\}.$ Then all solutions $(x,y)\in\mathbb{Z}^{2}$ of the equation $$\begin{aligned} H(x)=by^{m}\end{aligned}$$ satisfy $\max\{|x|,|y|\}<C$, where $C$ is an effectively computable constant depending only on $H$, $b$ and $m$. See B. Brindza [@Br2]. \[lem.3\] Let $0\neq b \in\mathbb{Z}$ and let $P(x) \in\mathbb{Q}[x]$ be a polynomial with at least two distinct zeroes. Then the equation $$\begin{aligned} P(x)=by^{n}\end{aligned}$$ in integers $x,y>1$, $n$ implies that $n<C$ where $C=C(P,b)$ is an effectively computable constant. See A. Schinzel and R. Tijdeman [@ST]. \[lem.4\] For $k \in\mathbb{Z}^{+}$ let $B_{k}(x)$ be the $k$-th Bernoulli polynomial. Then the polynomial $$\begin{aligned} G(x)=\frac{B_{k+1}(lx+1)-B_{k+1}(x+1)}{k+1}\end{aligned}$$ has at least two distinct zeroes.
By Lemma \[lem.1\], we have $G(x)=(\frac{l^{k+1}-1}{k+1})x^{k+1}+(\frac{l^{k}-1}{2})x^{k}+...+cx$ where $c$ is a rational number. Now one can observe that the coefficient of $x^{k}$ is nonzero and that $x=0$ is a zero of $G(x)$. Assume now, for contradiction, that there is no other zero of $G(x)$. Then we would have $$\begin{aligned} G(x)=\bigg(\frac{l^{k+1}-1}{k+1}\bigg)x^{k+1}\end{aligned}$$ which contradicts the fact that the coefficient of $x^{k}$ is nonzero. \[lem.5\] Let $q\geq2$, $R^{*}(x)\in\mathbb{Z}[x]$ and set $$\begin{aligned} Q(x)=B_{q}(x)-B_{q}+qR^{*}(x).\end{aligned}$$ Then $(i)$ $Q(x)$ has at least three zeros of odd multiplicity, unless $q\in\{2,4,6\}$. $(ii)$ For any odd prime $p$, at least two zeros of $Q(x)$ have multiplicities relatively prime to $p$. See M. Voorhoeve, K. Győry and R. Tijdeman [@VGT]. \[lem.6\] For $q\geq2$ let $B_{q}(x)$ be the $q$-th Bernoulli polynomial. Let $$\begin{aligned} \label{eq.1.4} P(x)=B_{q}(lx+1)-B_{q}(x+1)\end{aligned}$$ where $l$ is even. Then $(i)$ $P(x)$ has at least three zeros of odd multiplicity unless $q\in\{2,4\}$. $(ii)$ For any odd prime $p$, at least two zeros of $P(x)$ have multiplicities relatively prime to $p$. We shall follow the proof of Lemma \[lem.5\] of [@VGT]. By the Staudt-Clausen theorem (see Rademacher [@Rd], p. 10), the denominators of the nonzero Bernoulli numbers $B_{2k}$ $(k=1,2,...)$ are even but not divisible by $4$. Choose the minimal $d\in\mathbb{N}$ such that both the polynomials $d(B_{q}(lx+1)-B_{q}(x+1))$ and $dB_{q}(x)$ are in $\mathbb{Z}[x]$. Using the equality $B_{q}(x+1)=B_{q}(x)+qx^{q-1}$ (see [@Rd], pp.4-5), we have $$\begin{aligned} \label{eq.1.5} dP(x)=d\left( \sum\limits_{i=0}^q\binom {q} {i}\left[ (lx+1)^{q-i}-x^{q-i}\right] B_{i}-qx^{q-1}\right).\end{aligned}$$ Hence by the choice of $d$ and by the Staudt-Clausen theorem, we have $d\binom {q} {i}B_{i}\in\mathbb{Z}$ and $\binom {q} {2k}dB_{2k}\in\mathbb{Z}$ for $k=1,2,...,\frac{q-1}{2}$.
If $d$ is odd, then necessarily $\binom {q} {i}$ and $\binom {q} {2k}$ must be even for $k=1,2,...,\frac{q-1}{2}$. Write $q=2^{\mu}r$ where $\mu\geq1$ and $r$ is odd. Then $\binom {q} {2^{\mu}}$ is odd, giving a contradiction unless $r=1$. So $$\begin{aligned} \textrm{$d$ is odd} \Longleftrightarrow q=2^{\mu} \textrm{ for some } \mu\geq1.\end{aligned}$$ If $q\neq2^{\mu}$ for any $\mu\geq1$ then $$\begin{aligned} \label{eq.1.6} d\equiv2\pmod{4}.\end{aligned}$$ We distinguish three cases: <span style="font-variant:small-caps;">I.</span> Suppose $q=2^{\mu}$ for some $\mu\geq1$, so that $d$ is odd. We first prove $(i)$, so we may assume that $\mu\geq3$. Considering modulo 4, we have $$\begin{aligned} \label{eq.1.7} dP(x)&\equiv d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}-d \sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}x^{q-2i}\pmod{4}.\end{aligned}$$ Firstly, let $l\equiv 0 \pmod{4}$. Then we obtain $$\begin{aligned} \label{eq.1.8} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv d\sum\limits_{i=0}^{q-2}\binom {q} {i}B_{i} \equiv d\sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}\pmod{4}.\end{aligned}$$ It is easy to see that $\sum\limits_{i=1}^{q}\binom {q} {q-i}B_{q-i}=0$. Hence we get $$\begin{aligned} \label{eq.1.9} \sum\limits_{i=1}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}=-B_{0}-qB_{1}.\end{aligned}$$ By using (\[eq.1.8\]) and (\[eq.1.9\]), one gets $$\begin{aligned} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv d \bigg(\binom {q} {0}B_{0}+\sum\limits_{i=1}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}\bigg) \equiv0\pmod{4}.\end{aligned}$$ Then we deduce the following: $$\begin{aligned} \label{eq.1.10} dP(x)\equiv-d\sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}x^{q-2i}\pmod{4}.\end{aligned}$$ Secondly, let $l\equiv 2 \pmod{4}$.
Then we obtain $$\begin{aligned} \label{eq.1.11} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv d\sum\limits_{i=0}^{q-2}\binom {q} {i}(2x+1)^{q-i}B_{i}\pmod{4}.\end{aligned}$$ Then the RHS of (\[eq.1.11\]) becomes $$\label{eq.1.12} \begin{aligned} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(2x+1)^{q-i}B_{i}=d\big(B_{0}(2x+1)^{q}+qB_{1}(2x+1)^{q-1}\\+\sum\limits_{i=1}^{\frac{q-2}{2}}\binom {q} {2i}(2x+1)^{q-2i}B_{2i}\big). \end{aligned}$$ Since $2x+1$ is odd and $q=2^{\mu}, \mu\geq3,$ is even, considering modulo 4 and using (\[eq.1.9\]), (\[eq.1.12\]) becomes $$\begin{aligned} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv0\pmod{4}.\end{aligned}$$ So in all cases (\[eq.1.7\]) reduces to (\[eq.1.10\]). Note that $\binom {q} {2i}$ is divisible by $8$ unless $2i$ is divisible by $2^{\mu-2}$. We have therefore for some odd $d'$, writing $t=\frac{1}{4}q$ $$\begin{aligned} \label{eq.1.13} dP(x)\equiv d'x^{4t}+2x^{3t}+dx^{2t}+2x^{t} \pmod{4}.\end{aligned}$$ Write $dP(x)=R^{2}(x)S(x)$ where $R(x),S(x)\in\mathbb{Z}[x]$ and $S$ contains each factor of odd multiplicity of $P$ in $\mathbb{Z}[x]$ exactly once. Assume that deg$S(x)\leq2$. Since $$\begin{aligned} R^{2}(x)S(x)\equiv x^{4t}+x^{2t} \equiv x^{2t}(x^{2t}+1)\pmod{2}, \end{aligned}$$ $R^{2}(x)$ must be divisible by $x^{2t-2}\pmod{2}$. So $$\begin{aligned} R(x)=x^{t-1}R_{1}(x)+2R_{2}(x),\end{aligned}$$ $$\begin{aligned} R^{2}(x)=x^{2t-2}R_{1}^{2}(x)+4R_{3}(x),\end{aligned}$$ for certain $R_{1},R_{2},R_{3}\in\mathbb{Z}[x]$. If $q>8$, then $t>2$ so the last identity is incompatible with (\[eq.1.13\]) because of the term $2x^{t}$. Hence deg$S(x)\geq3$, which proves $(i)$. If $q=8$, then by (\[eq.1.13\]) $$\begin{aligned} dP(x)\equiv 3x^{8}+2x^{6}+x^{4}+2x^{2}\pmod{4}.\end{aligned}$$ From here, we follow the proof in the corrigendum paper [@VGT]. This fact can also be deduced from (\[eq.1.13\]). So, the proof of $(i)$ is completed in the case $q=2^{\mu}$, $\mu\geq3$.
To prove $(ii)$, let $p$ be an odd prime and write $P(x)=(R(x))^{p}S(x)$ where $R,S\in\mathbb{Z}[x]$ and all the roots of multiplicity divisible by $p$ are incorporated in $(R(x))^{p}$. We have, writing $\delta=\frac{1}{2}q$, by (\[eq.1.13\]) $$\begin{aligned} dP(x)\equiv (R(x))^{p}S(x)\equiv x^{\delta}(x^{\delta}+1)\equiv x^{\delta}(x+1)^{\delta} \pmod{2}.\end{aligned}$$ Since $\delta$ is prime to $p$, $S$ has at least two different zeros, proving $(ii)$ in case <span style="font-variant:small-caps;">I</span>. <span style="font-variant:small-caps;">II.</span> Suppose $q$ is even and $q\neq2^{\mu}$ for any $\mu$. Then $d\equiv 2\pmod{4}$ and hence, considering (\[eq.1.5\]) modulo $2$, we get $$\begin{aligned} dP(x)\equiv d\sum\limits_{i=0}^q\binom {q} {i}(1-x^{q-i})B_{i}\pmod{2}.\end{aligned}$$ Since $B_{i}d\binom {q} {i}\equiv\binom {q} {i}\pmod{2}$ for $i=1,2,3,...,q$, we have $$\begin{aligned} dP(x)\equiv \sum\limits_{k=1}^{\frac{q-2}{2}}\binom {q} {2k}x^{2k}\equiv\sum\limits_{t=1}^{q-1}\binom {q} {t}x^{t}\equiv (x+1)^{q}-x^{q}-1 \pmod{2}.\end{aligned}$$ Write $q=2^{\mu}r$, where $r>1$ is odd. Then $$\begin{aligned} dP(x)\equiv (x+1)^{q}-x^{q}-1\equiv ((x+1)^{r}-x^{r}-1)^{{2}^{\mu}}\pmod{2}.\end{aligned}$$ Since $r>1$ is odd, $(x+1)^{r}-x^{r}-1$ has $x$ and $x+1$ as simple factors $\pmod{2}$. Thus $$\begin{aligned} dP(x)\equiv x^{2^{\mu}}(x+1)^{{2}^{\mu}}K(x) \pmod{2}\end{aligned}$$ where $K(x)$ is neither divisible by $x$ nor by $(x+1)$ $\pmod{2}$. As in the preceding case, $P(x)$ must have two roots of multiplicity prime to $p$. This proves part $(ii)$ of the lemma. In order to prove part $(i)$, first we consider the case $q=6$. In this case $$\begin{aligned} dP(x)\equiv (2l^{6}+2)x^{6}+(2l^{5}+2)x^{5}+(l^{4}+3)x^{4}+(3l^{2}+1)x^{2}\pmod{4}.\end{aligned}$$ Since $l$ is even, we can write $$\begin{aligned} dP(x)\equiv 2x^{6}+2x^{5}+3x^{4}+x^{2}\pmod{4}.\end{aligned}$$ So, $P(x)$ has at least three simple roots.
To prove our claim, suppose $dP$ can be written as $$\begin{aligned} \label{eq.1.14} dP(x)\equiv S(x)R^{2}(x) \pmod{4}\end{aligned}$$ with deg$S\leq2$. If deg$S=0$, then clearly $S$ is an odd constant, so $R^{2}(x)\equiv x^{4}+x^{2}\pmod{2}$. Hence $R(x)\equiv x^{2}+x\pmod{2}$ and $R^{2}(x)\equiv x^{4}+2x^{3}+x^{2}\pmod{4}$, which is a contradiction. If deg$S=1$, then either $S(x)\equiv x$ or $S(x)\equiv x+1\pmod{2}.$ In both cases, the quotient of $P$ and $S$ cannot be written as a square $\pmod{2}$. If deg$S=2$, then either $S(x)\equiv x^{2}$ or $S(x)\equiv x^{2}+x$ or $S(x)\equiv x^{2}+1\pmod{2}$, since $x^{2}+x+1$ does not divide $P \pmod{2}$. In the first case $R(x)\equiv x+1\pmod{2}$, hence $R^{2}(x)\equiv x^{2}+2x+1\pmod{4}$ which is a contradiction. In the second case, the quotient of $P$ and $S$ is not even a square $\pmod{2}$. In the third case $R(x)\equiv x\pmod{2}$, hence $R^{2}(x)\equiv x^{2}\pmod{4}$ which is a contradiction. We conclude that $dP$ cannot be written in the form (\[eq.1.14\]) with deg$S<3$, proving our claim. Secondly, since $q=2$ and $q=4$ are the exceptional cases, the case $q=6$ has just been treated, and the case $q=8$ was treated in Case I, we may assume that $q\geq10$. Considering modulo $4$ where $d\equiv2\pmod{4}$, we have $$\begin{aligned} \label{eq.1.15} dP(x)&\equiv d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}-dqB_{1}x^{q-1}-d \sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}x^{q-2i}\pmod{4}.\end{aligned}$$ Firstly, let $l\equiv 0 \pmod{4}$. Then we obtain $$\begin{aligned} \label{eq.1.16} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv dqB_{1}+ d\sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}\pmod{4}.\end{aligned}$$ We know that $\sum\limits_{i=1}^{q}\binom {q} {q-i}B_{q-i}=0$.
By (\[eq.1.16\]) and the identity above, one gets $$\begin{aligned} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv d \bigg(q B_{1}+\sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}\bigg) \equiv0\pmod{4}.\end{aligned}$$ Then we deduce the following: $$\begin{aligned} \label{eq.1.17} dP(x)\equiv-dqB_{1}x^{q-1}-d\sum\limits_{i=0}^{\frac{q-2}{2}}\binom {q} {2i}B_{2i}x^{q-2i}\pmod{4}.\end{aligned}$$ Secondly, let $l\equiv 2 \pmod{4}$. Then we have (\[eq.1.11\]), and its RHS becomes $$\label{eq.1.18} \begin{aligned} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(2x+1)^{q-i}B_{i}=d\big(B_{0}(2x+1)^{q}+qB_{1}(2x+1)^{q-1}\\+\sum\limits_{i=1}^{\frac{q-2}{2}}\binom {q} {2i}(2x+1)^{q-2i}B_{2i}\big). \end{aligned}$$ Since $2x+1$ is odd and $q\neq2^{\mu}$ is even $(q\geq10)$ and $dq\equiv0\pmod{4}$, considering modulo 4 and using (\[eq.1.9\]), (\[eq.1.18\]) becomes $$\begin{aligned} d\sum\limits_{i=0}^{q-2}\binom {q} {i}(lx+1)^{q-i}B_{i}\equiv0\pmod{4}.\end{aligned}$$ So in all cases (\[eq.1.15\]) reduces to (\[eq.1.17\]). Then by (\[eq.1.17\]) we have $$\begin{aligned} \label{eq.1.19} dP(x)\equiv 2x^{q}-qx^{q-1}+\frac{1}{6}d\binom {q} {2}x^{q-2}+...+dB_{q-2}\binom {q} {2}x^{2}\pmod{4}.\end{aligned}$$ Write $dP(x)\equiv R^{2}(x)S(x)$, where $R,S\in\mathbb{Z}[x]$ and $S(x)$ only contains each factor of odd multiplicity of $P$ once. Then deg$S(x)\geq3$. The assertion easily follows by repeating the corresponding part of the proof of Lemma \[lem.5\]. Thus, the proof is completed for the case <span style="font-variant:small-caps;">II</span>. <span style="font-variant:small-caps;">III.</span> Let $q\geq3$ be odd.
Then $d\equiv2\pmod{4}$ and for $i=1,2,4,...,q-1,$ $$\begin{aligned} d\binom {q} {i}B_{i}\equiv \binom {q} {i} \pmod{2}.\end{aligned}$$ Now considering modulo $2$, we have $$\begin{aligned} dP(x)\equiv d\sum\limits_{i=0}^{q}\binom {q} {i}(1-x^{q-i})B_{i}\pmod{2}.\end{aligned}$$ Since $\sum\limits_{\lambda=1}^{\frac{q-1}{2}}\binom {q} {2\lambda}=2^{q-1}-1\equiv 1 \pmod{2}$, we have $$\begin{aligned} \label{eq.1.20} dP(x)\equiv x^{q-1}+\sum\limits_{\lambda=1}^{\frac{q-1}{2}}\binom {q} {2\lambda}x^{q-2\lambda}\pmod{2}.\end{aligned}$$ From (\[eq.1.5\]), we get $$\begin{aligned} \label{eq.1.21} dP'(x)=d(\sum\limits_{i=0}^{q}\binom {q} {i}[(lx+1)^{q-i}-x^{q-i}]B_{i})'-dq(q-1)x^{q-2}\end{aligned}$$ and then $$\begin{aligned} \label{eq.1.22} xdP'(x)\equiv \sum\limits_{\lambda=1}^{\frac{q-1}{2}}\binom {q} {2\lambda}(q-2\lambda)x^{q-2\lambda}\pmod{2}.\end{aligned}$$ Hence by using (\[eq.1.20\]) and (\[eq.1.22\]) $$\begin{aligned} d(P(x)+xP'(x))\equiv x^{q-1}\pmod{2}.\end{aligned}$$ Any common factor of $dP(x)$ and $dP'(x)$ must therefore be congruent to a power of $x \pmod{2}$. Considering modulo $2$, $dP'(0)\equiv \binom {q} {q-1}=q \equiv 1 \pmod{2}$. Since $dP'(0)\equiv 1 \pmod{2}$, we find that $dP(x)$ and $dP'(x)$ are relatively prime $\pmod{2}$. So any common divisor of $dP(x)$ and $dP'(x)$ in $\mathbb{Z}[x]$ is of the shape $2R(x)+1$. Write $dP(x)=Q(x)S(x)$ where $Q(x)=\displaystyle{\prod_{i}}Q_{i}(x)^{k_{i}}\in\mathbb{Z}[x]$ contains the multiple factors of $dP$ and $S\in\mathbb{Z}[x]$ contains its simple factors, where $k_{i}$ denotes the multiplicity of the polynomial factor $Q_{i}(x).$ Then $Q(x)$ is of the shape $2R(x)+1$ with $R\in\mathbb{Z}[x]$, so $$\begin{aligned} S(x)\equiv dP(x) \equiv x^{q-1}+...\pmod{2}.\end{aligned}$$ Thus the degree of $S(x)$ is at least $q-1$, proving case III when $q>3$.
If $q=3$, then $$\begin{aligned} \label{eq.1.23} dP(x)=(l-1)x(2(l^{2}+l+1)x^{2}+3(l+1)x+1).\end{aligned}$$ Considering (\[eq.1.23\]) where $l\equiv 2\pmod{4}$, it follows that $$\begin{aligned} dP(x)\equiv x(2x+1)(3x+1)\pmod{4}.\end{aligned}$$ So, $P(x)$ has three simple roots if $l \equiv 2 \pmod{4}$. Now, we consider the case $l \equiv 0 \pmod{4}$ in (\[eq.1.23\]). Then we have $$\begin{aligned} 2P(x)\equiv 2x^{3}+x^{2}+3x \pmod{4}. \end{aligned}$$ $P(x)$ also has three simple roots if $l \equiv 0 \pmod{4}$. To prove this, suppose $$\begin{aligned} \label{eq.1.24} 2P(x)\equiv Q(x)T^{2}(x)\pmod{4} \end{aligned}$$ with deg$Q\leq2$. If deg$Q=0$, then $Q$ is an odd constant. So the quotient of $2P$ and $Q$ cannot be written as a square $\pmod{2}$. If deg$Q=1$, then either $Q(x)\equiv x$ or $Q(x)\equiv x+1 \pmod{2}$. In both cases, the quotient of $2P$ and $Q$ cannot be written as a square $\pmod{2}$. If deg$Q=2$ then either $Q(x)\equiv x^{2}$ or $Q(x)\equiv x^{2}+x$ or $Q(x)\equiv x^{2}+1$ or $Q(x)\equiv x^{2}+x+1$. None of these $Q(x)$ divides $P \pmod{2}$. We conclude that $2P$ cannot be written in the form (\[eq.1.24\]) with deg$Q<3$, proving our claim. So, the proof of the lemma is completed. Exceptional values for $k$ ========================== Consider the equation (\[eq.1.3\]) for fixed $k\in\{1,3\}$ and fixed $n=m>1$. Then the equation is equivalent to the equation $$\begin{aligned} \label{eq.1.25} (k+1)y^{m}=P(x)\end{aligned}$$ where $P(x)=B_{q}(lx+1)-B_{q}(x+1)$, $q\in\{2,4\}$, $q=k+1.$ If $q=2$, then the equation becomes $$\begin{aligned} \label{eq.1.28} 2y^{m}=(l-1)x((l+1)x+1).\end{aligned}$$ By using Lemma \[lem.2\], we have $r_{1}=r_{2}=1$ and so $t_{1}=t_{2}=m$. From here we get $m=2$. In the case $m=2$, the equation becomes $$\begin{aligned} \label{eq.1.26} u^{2}-2(l-1)v^{2}=1\end{aligned}$$ where $u=2x(l+1)+1$, $v=2(l+1)y$, $l \equiv 0 \pmod 2$. By the theory of Pell’s equations (see e.g. [@Mord Ch.8]), for infinitely many choices of $l$, (\[eq.1.26\]) has infinitely many solutions.
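The factorization behind the $q=2$ reduction can be checked directly from $B_{2}(t)=t^{2}-t+\frac{1}{6}$, since the constant terms cancel in the difference: $$\begin{aligned} P(x)=B_{2}(lx+1)-B_{2}(x+1)&=\left[(lx+1)^{2}-(lx+1)\right]-\left[(x+1)^{2}-(x+1)\right]\\ &=(l^{2}-1)x^{2}+(l-1)x=(l-1)x\big((l+1)x+1\big),\end{aligned}$$ which, with $k+1=q=2$, gives exactly $2y^{m}=(l-1)x((l+1)x+1)$.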
If $q=4$, then the equation becomes $$\begin{aligned} \label{eq.1.29} 4y^{m}=x^{2}(l-1)((l^{2}+1)x+l+1)((l+1)x+1).\end{aligned}$$ Similarly to the former case, by Lemma \[lem.2\] we get $m=2$. In this case, the equation becomes $$\begin{aligned} \label{eq.1.27} u^{2}-(l^{4}-1)v^{2}=-l^{2}(l+1)(l^{2}+1)(l-1)^{3}\end{aligned}$$ where $u=(l^{4}-1)t$ ($t\in\mathbb{Z}$), $v=(l^{4}-1)x+(l^{3}-1)$, $l \equiv 0 \pmod 2$. So, (\[eq.1.27\]) has infinitely many solutions. Even if $l$ is odd, the equations (\[eq.1.26\]) and (\[eq.1.27\]) are Pell’s equations. But in this work, we consider the title equation where $l$ is even. Proof of the theorems ===================== <span style="font-variant:small-caps;">Proof of Theorem \[theo.1\].</span> Let $x,y\geq1$ and $n\geq2$ be an arbitrary solution of (\[eq.1.3\]) in integers. We know from Lemma \[lem.4\] that $G(x)$ has at least two distinct zeroes. Hence, applying Lemma \[lem.3\] to the equation (\[eq.1.3\]), we get an effective bound for $n$. <span style="font-variant:small-caps;">Proof of Theorem \[theo.2\].</span> We know from Theorem \[theo.1\] that $n$ is bounded, i.e. $n<C_{1}$ with an effectively computable $C_{1}$. So we may assume that $n$ is fixed. Then we get the following equation in integers $x,y\geq1$ $$\begin{aligned} P(x)=y^{n}\end{aligned}$$ where $P$ is given by (\[eq.1.4\]) with $q=k+1$. Write $$\begin{aligned} P(x)=a_{0}\displaystyle\prod_{i=1}^{n}(x-x_{i})^{r_{i}}\end{aligned}$$ where $a_{0}\neq 0$, $x_{i}\neq x_{j}$ if $i\neq j$ and, for a fixed $n$, let $t_{i}=\frac{n}{(n,r_{i})}.$ If $n$ is even, then by Lemma \[lem.6\] at least three zeroes have odd multiplicity, say $r_{1},r_{2},r_{3}$. Hence $t_{1}$, $t_{2}$ and $t_{3}$ are even. Consequently the exceptional cases in Lemma \[lem.2\] cannot occur. If $n$ is odd and $p|n$ for an odd prime $p$, then by Lemma \[lem.6\] at least two zeroes of $P(x)$ have multiplicities prime to $p$. We may assume that $(r_{1},p)=(r_{2},p)=1$, so $p|t_{1}$ and $p|t_{2}$.
Using Lemma \[lem.2\], we have $\max\{x,y\}<C_{2}(n)$ with an effectively computable $C_{2}(n)$. Finally $n<C_{1}$ implies the required assertion. This proves the theorem. Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank Professor Ákos Pintér for his useful remarks and guidance, and I would also like to cordially thank all the people in the Institute of Mathematics, University of Debrecen for their hospitality. Finally, I would like to thank the referees for their valuable comments. This work was supported by the Scientific and Technical Research Council of Turkey (TÜBİTAK) under the 2219-International Postdoctoral Research Scholarship. , [*On some generalizations of the Diophantine equation $1^{k}+2^{k}+...+x^{k}=y^{z}$,*]{} Acta Arith. [**44**]{} (1984), 99-107. , [*On S-integral solutions of the equation $y^{m}=f(x)$,*]{} Acta Math. Hung. [**44**]{} (1984), 133-139. , [*On equal values of power sums,*]{} Acta Arith. [**77**]{} (1996), 97-101. , [*On the number of solutions of the equation $1^{k}+2^{k}+...+(x-1)^{k}=y^{z}$,*]{} Publ. Math. Debrecen [**56**]{} (2000), 271-277. , [*On a Diophantine equation involving quadratic characters,*]{} Compositio Math. [**57**]{} (1986), 383-403. , [*On the equation $1^{k}+2^{k}+...+x^{k}=y^{z}$,*]{} Acta Arith. [**37**]{} (1980), 234-240. , [*On the equation $1^{k}+2^{k}+...+x^{k}=y^{z}$,*]{} Publ. Math. Debrecen [**62**]{} (2003), 403-414. , [*On the equation $s(1^{k}+2^{k}+...+x^{k})+r=by^{z}$,*]{} Tokyo J. Math. [**13**]{} (1990), 441-448. , [*On the equation $y^{m}=f(x)$,*]{} Acta Arith. [**9**]{} (1964), 209-219. , [*Diophantine equations,*]{} Academic Press, London, 1969. , [*A note on the equation $1^{k}+2^{k}+...+(x-1)^{k}=y^{m}$,*]{} Indag. Math. (N.S.) [**8**]{} (1997), 119-123. , [*On some generalizations of the Diophantine equation $s(1^{k}+2^{k}+...+x^{k})+r=dy^{n}$,*]{} Acta Arith. [**151**]{} (2012), 201-216. , [*Topics in Analytic Number Theory,*]{} Springer, Berlin, 1973.
, [*The equation $1^{p}+2^{p}+...+n^{p}=m^{q}$,*]{} Acta Math. [**95**]{} (1956), 155-189. , [*On the equation $y^{m}=P(x)$,*]{} Acta Arith [**31**]{} (1976), 199-204. , [*On the equation $f(1)\cdot1^{k}+f(2)\cdot2^{k}+...+f(x)\cdot x^{k}+R(x)=by^{z}$,*]{} Acta Arith. [**51**]{} (1988), 349-368. , [*On the Diophantine equation $1^{k}+2^{k}+...+x^{k}+R(x)=y^{z}$,*]{} Acta Math. [**143**]{} (1979), 1-8; Corrigendum Acta Math. [**159**]{} (1987), 151-152. , [*On the Diophantine equation $(x-1)^{k}+x^{k}+(x+1)^{k}=y^{n}$,*]{} Publ. Math. Debrecen [**85**]{} (2014), 93-100. [^1]:
TBB BLOG- WHAT EXACTLY ARE ESSENTIAL OILS? Essential oils are potent natural extracts derived from the leaves, flowers, roots, or stems of plants. They have been found to have a variety of medicinal and therapeutic properties, including: - Reducing stress and anxiety. - Relieving headaches and migraines. - Aiding sleep and easing insomnia. - Reducing inflammation when used in aromatherapy. Some of the uses of common essential oils are listed below: - Bergamot can be used to alleviate stress and improve skin conditions such as eczema. - Chamomile, a flower extract, is used to improve mood and promote relaxation. - Jasmine has been used to ease depression, support childbirth, and boost libido. - Lavender is used for stress relief. - Lemon is used to help with digestion, mood, headaches, and other issues. - Peppermint, one of the most common essential oils, is used to boost energy and aid digestion. - Rose is used to boost mood and alleviate anxiety. - Sandalwood is used to calm nerves and aid concentration. - Tea tree oil is used to combat infections and boost immunity. - Ylang-ylang can be used in the treatment of headaches, nausea, and skin conditions.
https://thebetterbutters.com/blogs/news/what-exactly-are-essential-oils
Metropolia Krakowska (Kraków Metropolis) in Poland was created in 2014 as a cooperation platform for Kraków (800,000 inhabitants, 324 km²) and its 14 surrounding municipalities (a combined population of 200,000 inhabitants and an area of 951 km²). Metropolia Krakowska coordinates a number of metropolitan tasks, including those relating to environmental and ecological issues. New housing areas and the development of transport infrastructure are fragmenting and degrading local ecosystems, threatening the continuity of ecological corridors and limiting residents’ access to areas of recreation. Metropolia Krakowska serves as a platform to integrate activities for sustainable urban planning and the protection of urban green space and ecosystems, while preserving the strong role of local communities and governments. To this end, Metropolia Krakowska applies the concept of “the Metropolis of Standards”, a tool to develop the strategic framework for governing the region. This approach incorporates the development of strategies in key thematic areas and links these to strategies and actions at the municipal level. Metropolia Krakowska collaborates with universities and NGOs to develop and implement best practice principles in strategic documents and municipal policies. It has already developed concepts to apply nature-based solutions for sustainable stormwater management in new transport infrastructure investments. The INTERLACE project activities in Metropolia Krakowska will focus on the 14 smaller municipalities surrounding Kraków, rather than the metropolitan city itself. Through its participation in INTERLACE, Metropolia Krakowska is keen to gather experience and good practice for protecting, restoring and managing urban ecosystems.
The association wants to verify and develop the existing tools and instruments in its practice, and to include citizens, local authorities, and all other stakeholders in participatory processes of urban ecological restoration and management.
https://interlace-project.eu/node/35
Over the last five years there has been increased scientific interest in the role carbon dioxide removal (CDR), or ‘negative carbon dioxide emissions’, might play in addressing anthropogenic climate change. CDR is typically understood to include approaches such as large scale afforestation and reforestation, biomass energy based carbon capture and storage, direct air capture, ocean fertilization, and enhanced weathering. Each of these could remove emissions from the atmosphere, slowing (or perhaps ultimately reversing) the accumulation of carbon dioxide contributing to an enhanced greenhouse effect. Along with solar radiation management (SRM), CDR has been presented as a prospective avenue for ‘geoengineering’—the deliberate attempt to modify the global environment, in this case to counteract harm associated with human induced climate change (Royal Society 2009). This article engages with these issues, considering the significance of CDR approaches for climate policy. It is organized in three sections: the first provides a brief introduction to CDR; the second explores its possible place in long term climate policy; the third considers nearer term policy issues. Approaches to CDR While there are now rapidly developing technical literatures on CDR approaches (for example, Azar et al. 2010; Lackner 2009; Lenton and Vaughan 2009; Ranjan and Herzog 2011) the following brief descriptions are adequate to ground the discussion here. Afforestation and reforestation remove CO2 from the atmosphere, and result in a net accumulation of carbon in living biomass. However, if the forest is subsequently destroyed the carbon will be released, so this option depends on addressing issues of long term forest management. For large scale applications there is a potential tension with competing land uses (for example, food production, commercial forestry, bio-energy crops, settlements). Costs are assumed to be initially relatively low, at least until land use dilemmas become more acute. 
Bioenergy carbon capture and storage (BECCS) applies CCS to biomass feedstocks, for example in electricity generation or the production of liquid biofuels. Since the biomass fuel cycle is assumed to be approximately carbon neutral (the next crop will absorb the CO2 emissions released by exploiting this year’s harvest), sequestering CO2 emissions results in net atmospheric removal. BECCS is likely to be somewhat more costly than fossil fuel based CCS because of scale issues (more expensive biomass feedstock and smaller facilities). BECCS potential is related to the place of biomass/biofuels in the energy system. The greater the societal reliance on bio-energy, the greater the volume of CO2 emissions that could be sequestered by this route. BECCS also requires suitable geological storage sites. Near-surface sequestration stores carbon from material of biological origin in soils or the near sub-surface. One alternative is biochar (charcoal) which can be mixed into soil, trapping substantial amounts of carbon with potential benefits for agricultural productivity. Another option would be to bury biomass in conditions that would prevent normal decay. While energy could be recovered from creating biochar, large scale burial of biomass would provide no direct co-benefits. Direct air capture (DAC) involves the direct draw down of CO2 from the atmosphere and its subsequent sequestration, typically in a geological formation. Air capture facilities could be sited near suitable storage sites so removing the transport requirement of industrial-based CCS. The difficulty is that atmospheric concentrations of CO2 are much lower than those found in combustion flue gases. Air capture remains at a relatively early stage of development. As with conventional CCS and BECCS, it requires suitable storage sites.
Ocean fertilization relies on mineral seeding of the ocean to encourage the uptake of carbon dioxide by biological organisms which will eventually die and transport CO2 into the deep ocean. Iron fertilization appears to be the current favorite although other candidates have been proposed including nitrogen and phosphorus. Important uncertainties remain about the effectiveness of the process and about potential side effects including the impact on ocean ecology. Enhanced weathering would exploit natural chemical processes whereby exposed minerals are slowly transformed by the absorption of CO2. A variety of chemical pathways could be used, but most would involve mining and crushing rock to accelerate CO2 uptake, with deposition to land or ocean. For the purposes of this discussion we do not deal with CO2 removal that is part of a closed loop fuel cycle—for example air capture coupled with gasoline production (green gasoline) or stand-alone biofuels—because the captured CO2 is released with combustion, and there is no net withdrawal. Nor do we consider approaches that use industrial CO2 sources such as reacting limestone and power-plant CO2 in sea water and releasing the resultant calcium bicarbonate to the ocean—even if they are similar to CDR pathways—because there is no net atmospheric draw down. On the basis of the short descriptions given above, several initial observations can be made. First, CDR approaches vary widely. As a group they share a capacity to remove CO2 from the atmosphere, but not a lot more. The CO2 is captured and stored by varied mechanisms, involving different natural processes and forms of human activity. The approaches present varied profiles of costs and benefits, potential side effects and risks, and limiting factors (Bipartisan Policy Center 2011). Preliminary efforts have been made to systematize some of these issues.
For example, the Royal Society's report on geoengineering (2009) scored CDR approaches with respect to cost, maximum potential CO2 reduction (in ppm) over this century, ultimate constraints, significance of anticipated environmental effects, and risk of unanticipated environmental effects. One important difference relates to the destination of the stored carbon. For afforestation and reforestation this is the terrestrial biosphere. For near-surface sequestration it is soil or the shallow underground. BECCS and air capture are today primarily considered in relation to geologic storage (although in principle the CO2 could be deposited in the oceans). With ocean fertilization the destination is by definition the ocean. For enhanced weathering the carbon could go to either surface or ocean deposition. Each storage option has advantages and disadvantages. Carbon sequestered in forests, for example, remains highly vulnerable to natural or human disturbance. Direct intervention in ocean ecosystems stands out as particularly problematic. The ocean constitutes an open ecosystem, and changes potentially affect all ocean waters and the life they contain. Moreover, scientific knowledge of the oceans lags significantly behind our understanding of the terrestrial ecosphere. Of course, atmosphere and oceans are in long term equilibrium, so a large fraction of the CO2 emitted by humans to air ultimately finds its way into the ocean.

From a policy perspective another important distinction is that, unlike other CDR approaches, afforestation and reforestation are already practical options that can be carried out at reasonable cost and are recognized under existing international climate agreements. Long before they were re-conceptualized as forms of 'CDR' they had been integrated (as land use changes) into national GHG inventory reporting under the UNFCCC, and recognized as carbon offset strategies under the Clean Development Mechanism (CDM) of the Kyoto Protocol.
Measures to support developing countries in reducing emissions from deforestation and forest degradation (REDD) are a focus of continuing international negotiation, with current estimates of gross carbon releases from the destruction of tropical forests ranging from about 0.8 PgC per year to 2.8 PgC per year (Pan et al. 2011; Harris et al. 2012; Baccini et al. 2012).

The diversity of potential CDR approaches can be understood as an advantage: it provides a number of avenues that might prove fruitful. But it also suggests that it makes only limited sense to talk about CDR in the abstract. The key lies in the specific approach (with its particular operative mechanisms, limits, risks, and costs), and in determining the particular conditions under which it could be deployed in a socially beneficial manner.

Second, scale obviously matters. In the first instance scale relates to the quantity of carbon that could ultimately be drawn down, the rate at which this might be accomplished, and the span of time it would remain isolated from the atmosphere. Estimates of these magnitudes are sensitive to initial assumptions (availability of the pathway, limiting factors, costs and so on). One recent study of terrestrial biological CDR, for example, suggested an afforestation/reforestation potential by 2050 of 1.5 PgC per year (with an ultimate potential of 300 PgC); a biochar potential of up to 0.87 PgC per year (with an ultimate potential of 500 PgC); and a BECCS potential of up to 4 PgC per year (limited ultimately by the availability of geological storage at 500–3000 PgC) (Lenton 2010). These are very large annual flows that would make a noticeable impact on the approximately 8 PgC currently released each year by fossil fuel combustion. Similarly large numbers have been discussed for other CDR approaches such as direct air capture at a rate of 1 PgC per year after fifty years of effort (Socolow et al. 2011), or enhanced weathering (Kohler et al. 2010).
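Taking the Lenton (2010) figures quoted above at face value, a back-of-envelope comparison against the roughly 8 PgC released annually by fossil fuel combustion might look as follows. The values are the upper-end mid-century potentials from the text, not projections of what will actually be deployed:

```python
# Upper-end annual CDR flows by ~2050 from Lenton (2010), in PgC per year
potentials = {
    "afforestation/reforestation": 1.5,
    "biochar": 0.87,
    "BECCS": 4.0,
}
fossil_emissions = 8.0  # approx. PgC/yr from fossil fuel combustion, as in the text

total = sum(potentials.values())
for name, flow in potentials.items():
    share = 100 * flow / fossil_emissions
    print(f"{name}: {flow:.2f} PgC/yr ({share:.0f}% of fossil emissions)")
print(f"combined: {total:.2f} PgC/yr ({100 * total / fossil_emissions:.0f}%)")
```

Even on these optimistic assumptions the combined flow offsets only a large fraction, not all, of current fossil emissions, which underlines the point that CDR complements rather than replaces abatement.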
But in each case the reductions would require an immense societal effort: planting and managing millions of square kilometers of forest; processing biological materials to generate energy and capture emissions and/or incorporate carbon into soils; deploying direct air capture and injecting CO2 underground; or extracting and processing minerals at 'the same order of magnitude' as 'the energy system that produces CO2' (Royal Society 2009, 13). So, scale must also be thought about in terms of the human activity required to realize the approach, the scope of existing societal practices that must be transformed, the land-use footprint and potential environmental impacts. And this suggests that ambitious estimates of CDR potentials—especially by mid-century—must be viewed with caution.

Third, all these approaches confront uncertainties on multiple dimensions. Although human societies have long practical experience with forestry and forest management, we are still learning about forest ecosystems, and knowledge spanning the centuries-long life cycle of forest biomes is limited. Research on biochar is in its infancy (even if practices using biochar for soil enhancement go back millennia). There is no research on burying biomass. Knowledge of ocean ecosystems is limited and ocean/atmospheric linkages are only partly understood. Even knowledge of the subsurface is partial—with quite a bit understood about the geology of fossil fuel bearing formations, but much less about the rest, including the connections between biological processes at depth and the surface biosphere (Lovley and Chapelle 1995).
CDR and long term climate management

To explore the policy significance of CDR it is helpful initially to distinguish two ways of understanding its potential contribution:

- CDR as an emission offset strategy: On this vision CDR approaches are considered as an emission reduction option: the focus is on offsetting[1] current emissions to slow atmospheric accumulation of greenhouse gases and to avoid overshoot of a desired atmospheric stabilization target for CO2. Thus CDR approaches join the vast array of other mitigation options, and they can be compared with these alternatives when designing an appropriate portfolio of climate responses. However, since CDR options (with the exception of afforestation and reforestation) are expensive, they would be unlikely to be deployed at significant scale in the near term.
- CDR as a climate recovery strategy: On this vision CDR could play a critical role in the societal response to a serious emissions overshoot. If societies prove unable to bring down greenhouse gas releases in a timely manner, and the resultant atmospheric CO2 concentrations are judged to be too high, CDR provides an avenue to roll back the accumulation to more comfortable levels (Keith et al. 2006). Here CDR appears as a basket of restorative technologies that can gradually undo the change in the composition of the atmosphere that has been driven by fossil fuel combustion since 1800. Such an effort could require large scale deployment of several CDR approaches, over many generations.

The two patterns of deployment are logically distinct, and can be considered as alternatives or as complements. It would be possible (a) to employ CDR as an offset strategy, but then to accept living with the atmospheric CO2 levels that resulted from the historic emissions trajectory.
Alternatively (b), large scale CDR deployment might take place (as a climate recovery strategy) only after the ongoing emissions problem had been cracked by conventional mitigation approaches. Or (c) the second pattern could flow from the first, with CDR employed initially to aid emissions control (offset) and then subsequently to secure net negative global emissions (recovery).

Considered as an emissions offset strategy CDR offers several potential advantages. First, it could be applied to 'buy time' for the development of technological alternatives and the adjustment of societal practices. For a given stabilization target, successful CDR would enable emissions to continue longer, either because some sources were counterbalanced by contemporaneous CDR, or because immediate emissions reductions could be delayed since larger reductions could be secured in the future when CDR was added to the mitigation portfolio. Second, CDR could reduce the costs of meeting a given abatement target. Although CDR will be expensive, at some point cheaper mitigation options will be exhausted and CDR could help contain the overall cost of the abatement effort. Third, CDR could make feasible a more aggressive emissions reduction effort. By driving net emissions down more quickly, atmospheric accumulation could peak at a lower level, and over a given time period lower stabilization levels could be achieved. Fourth, CDR could enable fossil fuels to remain a significant part of the energy mix for a longer period. Fossil fuels have powered industrial growth for several centuries and moving away from such an energy trajectory is a major challenge. The direct application of CCS to large fossil fuel facilities (power plants, refineries, and so on) would dramatically reduce emissions from these sources. But CDR could provide additional 'head room' for continued fossil fuel emissions.
This could be understood in relation to dispersed or mobile sources where CCS is not practical, or to that fraction of the life-cycle emissions from large point sources which CCS could not trap except at prohibitive cost.

Each of these potential advantages would be accompanied by costs and risks. To 'buy time' suggests time will be spent in developing alternatives and preparing a greater emissions reduction effort in the future; but it might simply be an excuse for deferring inconvenient societal adjustment. Reduced costs or the possibility of attaining a lower initial stabilization target are attractive, but one must be confident that side effects and unintended consequences will not cause commensurate difficulties. While extending the use of fossil fuels has advantages, it also has disadvantages. Fossil fuels are associated with a wide range of health and environmental impacts. Continuing reliance on fossil fuels, involving a further investment in infrastructure (including that required for CCS), is likely to intensify technological and social 'lock in' (Unruh 2000; Vergragt et al. 2011), and slow the transfer of investment towards alternative energy technologies. It is also important to note that some of the potential CDR advantages are mutually contradictory: one cannot use CDR to 'buy time' and simultaneously use it to pursue a more aggressive climate stabilization target.

Considered as a climate recovery strategy CDR has one central advantage: drawing CO2 down from the atmosphere can reduce the risks attendant upon maintaining concentrations substantially above pre-industrial levels for a protracted period of time. Return from an overshoot trajectory is far from the ideal response to the risks posed by anthropogenic climate change. In the first place, CDR could not reverse the damage incurred during the period where CO2 concentrations are artificially elevated.
Second, CDR does not offer a quick fix: with a heroic effort it might achieve reductions of perhaps 1–2 ppm a year, so it would be a long haul if atmospheric concentrations of CO2 were to be brought down to close to pre-industrial levels. Third, there is an unquantifiable risk of exposure to climate ‘tipping points’ which, once crossed, might shift elements of the climate system into a configuration that was not readily reversible (Schellnhuber 2006). Presumably, the further the climate moves into overshoot territory, and the longer the time spent there, the more serious these risks become. Whether or not CDR was deployed earlier as an offset strategy, its use as a recovery strategy would raise the issue of how far back to roll CO2 concentrations, the pace at which this should be accomplished, and the scale of the associated effort. It seems likely that this would prove contentious. Uncertainties about the risks of maintaining high CO2 levels are likely to remain for some time. A large scale CDR effort requires resources and is not without risks. Impacts of a changing climate will be unevenly distributed, and some regions or groups are likely to gain from a warmer world. Perhaps most importantly, adaptation (migration, changing crops, rebuilding infrastructure) can be effected on a more rapid timescale than the one on which CDR operates. And to the extent that some societies have come to terms with the new climate, will they be so eager to see it disappear? In any case, it is hard to imagine remedial CDR operating outside the context of international agreement (Virgoe 2009; Horton 2011). After all, international emissions controls would almost certainly be required to secure atmospheric stabilization, and CDR could easily be undone by renewed fossil fuel combustion. 
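The arithmetic behind the 'long haul' is simple. Using the standard conversion of roughly 2.13 PgC of carbon per ppm of atmospheric CO2, and the 1–2 ppm per year drawdown rate mentioned above, rolling concentrations back by an illustrative 100 ppm takes half a century to a century. The 100 ppm figure is an assumption for illustration, and the calculation deliberately ignores ocean and biosphere outgassing, which would lengthen the effort:

```python
PGC_PER_PPM = 2.13  # ~2.13 PgC of carbon per 1 ppm of atmospheric CO2

def years_to_roll_back(delta_ppm, rate_ppm_per_year):
    """Years needed to draw concentrations down by delta_ppm at a fixed
    removal rate, ignoring ocean/biosphere rebound (which would add more)."""
    return delta_ppm / rate_ppm_per_year

delta = 100  # illustrative rollback, e.g. ~450 ppm back toward ~350 ppm
print(f"{delta} ppm corresponds to ~{delta * PGC_PER_PPM:.0f} PgC removed")
for rate in (1.0, 2.0):
    print(f"at {rate} ppm/yr: {years_to_roll_back(delta, rate):.0f} years")
```

Even at the heroic 2 ppm per year, the effort spans multiple political generations, which is the point of the text: recovery-scale CDR operates on timescales far beyond normal planning horizons.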
Finally, it is worth noting that to the extent that overall CDR potential is limited (by land use competition, shortage of available geological storage sites, growth of side effects of sequestration activities, and so on), the more that CDR is used as an offset strategy the less the CDR potential that would remain available for subsequent climate recovery (and the higher its eventual cost). The upshot of this discussion is that as a climate recovery strategy CDR is not of great practical relevance to climate change policy today. Of course, additional research that establishes more clearly the ultimate potential—but also the risks—of distinct CDR approaches is useful. But at the moment we are having difficulty achieving even modest (and relatively inexpensive) emissions cuts in the developed countries, to say nothing of capping global GHG emissions or stabilizing atmospheric concentrations. Deployment of CDR as a climate recovery strategy lies beyond the planning horizons of public and private sector organizations, and almost certainly beyond the lifetimes of current decision-makers. The one substantive choice that could be made in the short term in relation to CDR as a recovery strategy would be to defer GHG mitigation efforts today because large scale CDR might be possible in the future. In light of current projections of where ‘business as usual’ will take us over the course of this century, the substantial risks associated with warming above a few degrees, and the long time frames over which CDR would need to operate, it would seem imprudent to make such a case. So we must simply acknowledge that future generations will make the choices over CDR as a recovery strategy—although of course, by our current action or inaction on climate policy, we can influence the environment in which they will ultimately make these choices. 
CDR and current policy concerns

Where CDR approaches do enter current policy space is as potential offsets to ongoing emissions over coming decades. Here the first issue is to distinguish among CDR pathways. As we have seen, CDR is not a single approach or technology. So what is needed is not a 'policy perspective on CDR', but rather a suite of perspectives appropriate to the variety of techniques and contexts. Such a differentiated policy stance is important because CDR approaches have different fundamental characteristics as well as distinct implications for established human practices relating to agriculture, energy systems, land use, biodiversity protection, and so on (see Table 1). Moreover, societal actors can be expected to respond to these approaches (embracing or condemning them) on the basis of specific development proposals—for example, to build new infrastructure, to change land use, to geologically sequester CO2 nearby, to initiate afforestation, and so on—and not as manifestations of some abstract process of CDR. Thus the policy and regulatory frameworks governing CDR activities must be tailored to each particular approach. Moreover, policy differentiation is not just about selecting preferred techniques, but above all about ensuring that techniques that are deployed are implemented appropriately. For example, some studies suggest that poorly implemented BECCS strategies could actually increase GHG emissions through indirect land use changes (for example, energy crops displacing food production and encouraging additional forest clearances) (Creutzig et al. 2012). One could imagine governments directly financing activities to draw down CO2 as a public good, but the thrust of climate policy to this point has been to make emitting entities responsible for abatement, and to enlist the price mechanism (through a carbon tax or cap and trade system) to encourage adjustment across the economy.
On this model, CDR activities would result in carbon dioxide reduction credits. Whichever CDR approaches are favored, their integration into mitigation frameworks requires mechanisms to: (a) establish that atmospheric withdrawals are actually taking place at the intended levels, (b) ensure the long term security of the sequestered carbon, and (c) minimize collateral damage. Quantitative verification of CO2 flows is required for national greenhouse gas inventories and compliance with international accords; but it is also critical for businesses. While reasonably accurate estimates of emissions and emissions reductions can be made from data on fossil fuel consumption, things are more complex with removals. Consider forests: actual CO2 uptake varies according to forest types, species mix, maturity, and climatic conditions. Carbon uptake will be spread over decades and historic data on forest growth may prove misleading, especially as the climate changes. While quantifying CO2 sent for geological storage from BECCS or air capture may be reasonably straightforward, the same is far from true for biochar, enhanced weathering or ocean fertilization. This implies strict protocols for the operation of CDR projects, and appropriate measurement and verification regimes. For CDR to be effective, sequestration must be for the long term. What happens if a newly planted forest is eroded by agriculture, settlement, fire, or insect attack; if geologic storage for air capture or BECCS proves insecure; or if fraud exaggerates long term sequestration figures? In many cases measurement and monitoring will have to continue for the long term. It also implies the preparation of appropriate remediation and compensation plans (if storage breaks down) and associated liability regimes. With respect to collateral damage, the standard assumption is that CDR would be pursued because it could be secured at favorable cost as compared to emissions abatement options.
But this is only to the overall benefit of society if the full social cost (as opposed to the cost for the specific actors realizing a project) is lower for the CDR pathway. This means other 'externalities' accompanying CDR projects must be taken into consideration. Ultimately, the only way to deal with these is by the regulation of authorized technologies/approaches by public authorities: on the one hand, through international rules relating to each CDR class (with project compliance a pre-condition for international recognition of carbon removal credits); and on the other, through national and local rules relating to land use planning, environmental and safety issues, and so on. But the difficulty which policy systems have in managing issues of such complexity, with multiple cross-cutting interconnections and uncertainties, cannot be overstated (Meadowcroft 2007). A recurrent theme in the political science literature is the incremental, contingent and fragmented character of policy making in modern democracies, and the difficulty in pursuing 'rational-comprehensive' approaches to problem solving (Lindblom 1979; Kingdon 1984). It is impossible to anticipate in advance all the consequences of hypothetical CDR projects. Only with time will some impacts be appreciated, and will the benefits and costs of each CDR option be fully understood. Appropriate policy frameworks must therefore include opportunities for regular review and iterative policy learning (Bennett and Howlett 1992). Above all, this suggests that it is prudent to introduce these approaches gradually and at a modest scale that allows careful assessment of difficulties, adjustment to regulatory frameworks, and time for societal debate about the implications of different choices to mature. Presently, a certain 'unreality' continues to cloud discussion of CDR.
This is related to the relative immaturity of many of the proposed techniques as well as to the continuing impasse in the international climate negotiations. The recent framing of CDR, which coupled it to SRM as a core technique for 'geoengineering', has not been entirely helpful. The approaches share one essential feature: they could be deployed to address climate risks even if significant abatement of emissions from fossil fuel combustion is postponed. So, in a context where many in the climate science community were frustrated with the inability of political leaders to effect serious GHG emission reductions, despite several decades of international talks and increasing evidence of the climate threat, it seemed to make sense to contemplate a 'Plan B'—to explore a menu of technical options for large scale management of the climate system should humanity fail to abandon promptly its emission-dependent development trajectory. Yet the same political impasse that prompted the call for an expanded research agenda on 'geoengineering' made the scope of its ambition and associated techniques appear far removed from immediate concerns. Moreover, there are important differences among the technologies listed under these umbrella headings[2]: while it has been argued that SRM options could be deployed at low cost to relatively quickly attenuate a temperature rise, CDR approaches operate comparatively slowly and (generally) at significant cost. Conceptually, CDR can be understood as a pollution clean-up approach (drawing down offending emissions), while SRM intervenes at another point in the climate system to reduce harm (reflecting incident solar radiation to reduce warming). The concern is that such upstream intervention (a) may generate broader system impacts (for example, negative regional effects) and (b) fail to deal with all salient features of the original disturbance (for example, ocean acidification).
Of course, pollution clean-up can also generate harm and some of the potential concerns with specific CDR pathways have been discussed above. In the absence of real world experience with most CDR technologies, their potential is being probed by models that remain at a fairly high level of abstraction and which suggest these approaches may be rolled out as the carbon price rises over the course of the century. Although this modeling is already generating important insights (see the contributions in this issue), it is important not to lose sight of its limitations. The time frame over which developments are being explored is one concern: while an extended horizon makes sense from the perspective of the overall character of the climate problem, it goes far beyond our capacity realistically to anticipate societal development. In looking forward a century we face challenges similar to those that would have confronted researchers trying to foresee the present world from a vantage point in 1913. Over such a span geopolitical realities, technological capacities, societal practices and cultural mores can shift dramatically. Even a forty year horizon is ambitious. Think how just one change—the advent of shale gas over the last five years—has dramatically altered the energy picture in North America and beyond (IEA 2012). Moreover, when we contemplate a particular modeled mix of mitigation options—generated by interactions among a high carbon price, and the relative cost of diverse emission abatement/carbon removal technologies—it is worth remembering that our ability to anticipate technology learning curves and societal attitudes towards the deployment of particular technologies is severely constrained (Torvanger and Meadowcroft 2011). 
The relative cost profiles which will confront future societies will be determined not just by inherent properties of these technologies but also by their actual historical development, which will be influenced by public and private investment decisions, policy choices and regulatory frameworks, operational experiences (reliability, accidents, etc.), and public attitudes and political struggles. In an area as politically charged as energy policy, where governments intervene continuously to alter the landscape within which technological development occurs (through R&D expenditure, deployment subsidies, tax policy, environmental and land use policies), it is not just the carbon price that is ‘politically constructed’, but to a certain extent the relative prices of all energy and climate technologies. So it is not clear what we can infer today about relative costs of mitigation options confronting decision makers fifty or more years from now. In any case, with respect to actual choices over technology deployment, comparative cost is not the only factor which governments (or even private actors) take into consideration. In deciding to turn their back on nuclear power German governments have not been driven principally by a subtle calculus of relative costs, but by persistent public opposition which is deeply rooted in national political culture. In real world energy policy governments routinely rule out options that may have a cost advantage to pursue other ends, and often expend public resources to protect the economic position of powerful producer groups. In the most straightforward sense this matters for CDR because government action at many levels (local, regional, national, and international) and in many forms (R&D support, establishing regulatory frameworks, permitting of facilities, authorizing land use changes, establishment of liability regimes, and so on) is necessary if these technologies are ever to be introduced at scale. 
There is now a substantial literature (Dosi 1982; Freeman 1996; Sanden 2004) on long term socio-technical transitions that emphasizes (Geels 2005; Geels and Schot 2007): the long lead times required for basic discoveries to achieve their full societal potential; the contrast between normal processes of incremental change within a dominant technological paradigm and more radical innovations that lead to regime change; the critical importance of societal practices (finance, training, maintenance, industry standards, regulation and property rights, investor and consumer expectations, and so on) in 'locking in' 'dominant designs'; and the significance of interactions among 'niche', 'regime' and 'landscape' factors in facilitating transformation (Smith et al. 2010). This is of relevance not just to the future technical development of CDR, but to the evolution of all the other mitigation options against which it will eventually compete. It suggests that despite the enormous capacity of established interests to achieve incremental efficiency gains with existing technologies and to deploy political lobbying to frustrate change, over time alternatives can acquire increasing traction. In particular, the potential for non carbon-based energy technologies to achieve major advances that would alter their appeal (in terms both of functionality and cost) in relation to fossil energy systems and eventual CDR mitigation approaches should not be ignored.

There is another way in which current consideration of CDR remains abstract: to date we have a limited appreciation of the range of societal impacts that will be associated with CDR deployment at scale. Typically it is only as a technology is rolled out into society that one can get a firm grip on the timing and strength of side effects, the operation of countervailing forces, and the mobilization of direct opposition. Biofuels and wind provide recent examples.
For several decades technical arguments about first generation biofuels continued in the scientific literature, but only when biofuels emerged into international markets did impacts become concrete and critiques dealing with land tenure, water use, food prices, and so on, gain wider purchase. Similarly, in many countries wind deployment increasingly has been met with organized opposition. In both cases land use conflicts have been central; and we can anticipate that this will ultimately present serious challenges to bio-based terrestrial CDR. Similarly, the issue of underground storage may be a problem for BECCS and DAC, for pilot CCS projects have already encountered public opposition (for example, in the Netherlands and Germany). Although such objections ultimately may be overridden, it is clear that over time different sorts of environmental, social and economic consequences of CDR deployment will become more concrete and will inspire societal responses. Above all, it is important to remember that while these reactions may be articulated by particular groups and interests, they reflect real perceptions about societal impacts and underlying conflicts over the distribution of social resources.

The decision to deploy a particular CDR approach to cancel ongoing emissions will always be about relative costs and benefits. Every mitigation option has advantages and disadvantages, and only by comparing these can reasonable abatement strategies be designed. In making energy and climate mitigation related choices, governments weigh many considerations including energy security, economic prosperity, regional development, non-GHG environmental effects, and so on. In this sense, it is appropriate to assess CDR approaches not just from the perspective of their mitigation potential (tons removed over time), but also by asking what sort of societal development trajectory they imply.
A civilization that employed large scale afforestation and reforestation, for example, would look very different from one that declined this option; widespread BECCS implies an extensive bio-energy economy, and so on. Choices about CDR approaches, and the scale at which they are to be deployed, cannot be isolated from broader decisions relating to societal practices driving emissions growth. Lower levels of population growth or material consumption (for example, the proportion of meat in diets) would moderate land use conflicts, so facilitating some CDR approaches. On the other hand, easing of these drivers might make large scale CDR less pressing. Although there are major political and cultural barriers to putting issues of population and consumption on the table, there is no doubt they can be influenced significantly by policy levers, especially when looking forward over many decades. And discussion of these sorts of options will form part of the political context within which choices about CDR will ultimately be made.

Coming to grips with climate change is about learning to negotiate environmental limits. If global trends continue, these limits will be increasingly evident in problems such as the provision of fresh water, the health of the oceans, the loss of biodiversity, disruption of the nitrogen cycle, constraints on food supply and chemical loadings (MEA 2006; Rockström et al. 2009). Although climate change can be approached as a technical issue of managing positive and negative emissions and the radiative balance of the atmosphere, it can be seen more broadly as the result of a collision between the established societal development path and the material limits of the biosphere.
Twenty-five years ago the World Commission on Environment and Development suggested the concept of 'sustainable development' to capture the idea of a modified development trajectory which could satisfy human aspirations for a better life without tipping the planet towards ecological disaster (WCED 1987). Although in political argument 'sustainable development' has been subject to countless interpretations and much misuse (sharing the fate of other normative concepts such as 'democracy', 'freedom' or 'justice'), it has the virtue of highlighting two major realities (Lafferty 1996). First, that in the modern world environmental issues such as climate change cannot be understood or managed successfully without addressing the development pressures which lie at their source. And this implies movement away from a pattern of economic activity based on crude material growth (the endless expansion in the numbers of people, in the absolute consumption of renewable and non-renewable resources, and in the generation of wastes) (OECD 2011). Second, that the problems of the rich and the poor countries are entangled: so attempts to address climate change ultimately will require some accommodation of the perspectives of each.

The societal response to human induced climate change will unfold over many decades, and we can anticipate false starts and reverses, alternating periods of innovation and stagnation, and dramatic reversals of direction in light of new knowledge and continuing experience. This essay has argued that in coming years the CDR-related policy challenge is to develop a nuanced approach, which differentiates among options and the specific ways they are to be governed, and which trials them at modest scale to allow learning from experience, the operation of social feed-back mechanisms, and the careful adjustment of regulatory frameworks.
While some CDR approaches may offer useful additions to the mitigation ‘tool kit’, issues of cost, environmental risk, physical limits and tension with other societal practices mean they can represent only part of any solution. Above all, they do not alter the basic fact (and the most urgent climate-related policy challenge) that human societies need to curtail releases of greenhouse gases associated with fossil fuel usage as quickly as possible.

Notes

1. ‘Offset’ is used here in the generic sense of ‘cancelling’, ‘balancing’, or ‘neutralizing’ (The New Shorter Oxford English Dictionary 1993). Thus CDR can be understood to counteract CO2 emissions just as positive and negative entries cancel out in financial accounts. There is a more specific technical understanding of ‘carbon offset’ where reductions are secured outside the formal boundaries of a regulatory regime. Such reductions may be recognized as providing credits within that system (for example, CDM offsets in the EU ETS), or not (as in the case of the voluntary carbon offset market).
2. For a good discussion of the conceptual ambiguity of ‘geoengineering’ and the diversity of technologies/approaches with which it can be linked see Keith 2000.

References

Azar C et al (2010) The feasibility of low CO2 concentration targets and the role of bio energy with carbon capture and storage (BECCS). Climatic Change 100:195–202
Baccini A et al (2012) Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps. Nat Clim Change 2:182–185
Bennett C, Howlett M (1992) The lessons of learning: reconciling theories of policy learning and policy change. Pol Sci 25:275–292
Bipartisan Policy Center (2011) Report of the task force on climate remediation research
Creutzig F et al (2012) Reconciling top-down and bottom-up modeling on future bioenergy deployment. Nat Clim Change 2:320–327
Dosi G (1982) Technological paradigms and technological trajectories: a suggested interpretation of the determinants and directions of technical change. Res Policy 11:147–162
Freeman C (1996) The greening of technology and models of innovation. Technol Forecast Soc Chang 53(1):27–39
Geels F (2005) Technological transitions and system innovations: a co-evolutionary and socio-technical analysis. Edward Elgar
Geels F, Schot J (2007) Typology of sociotechnical transition pathways. Res Policy 36:399–417
Harris N et al (2012) Baseline map of carbon emissions from deforestation in tropical regions. Science 336(6088):1573–1576
Horton J (2011) Geoengineering and the myth of unilateralism. Stanf J Law Sci Pol 4:56–69
IEA (2012) World energy outlook 2012. International Energy Agency
Keith D (2000) Geoengineering the climate: history and prospect. Ann Rev Energy Env 25:245–284
Keith D, Ha-Duong M, Stolaroff J (2006) Climate strategy with CO2 capture from air. Climatic Change 74:17–45
Kingdon J (1984) Agendas, alternatives and public policies. Little, Brown and Company
Kohler P, Hartmann J, Wolf-Gladrow D (2010) Geoengineering potential of artificially enhanced silicate weathering of olivine. Proc Natl Acad Sci 107(47):20228–20233
Lackner K (2009) Capture of carbon dioxide from ambient air. Eur Phys J Spec Top 176:93–106
Lafferty W (1996) The politics of sustainable development: global norms for national implementation. Environ Polit 5:185–208
Lenton T (2010) The potential for land-based biological CO2 removal to lower future atmospheric CO2 concentration. Carbon Manag 1:145–160
Lenton T, Vaughan N (2009) The radiative forcing potential of different climate geoengineering options. Atmos Chem Phys Discuss 9:2559–2608
Lindblom C (1979) Still muddling, not yet through. Publ Admin Rev 39:517–526
Lovley D, Chapelle F (1995) Deep subsurface microbial processes. Rev Geophys 33:365–381
MEA (2006) Ecosystems and human well-being: synthesis report. Millennium Ecosystem Assessment, Earthscan
Meadowcroft J (2007) Who is in charge here? Governance for sustainable development in a complex world. J Environ Pol Plan 9:299–314
OECD (2011) Towards green growth. Organization for Economic Co-operation and Development, Paris
Pan Y et al (2011) A large and persistent carbon sink in the world’s forests. Science 333(6045):988–993
Ranjan, Herzog (2011) Feasibility of air capture. Energy Procedia 4:2869–2876
Rockström J, Steffen J, Noone K, Persson Å, Chapin F III, Lambin E, Lenton T et al (2009) A safe operating space for humanity. Nature 461:472–475
Royal Society (2009) Geoengineering the climate. The Royal Society, London
Sanden B (2004) Technology path assessment for sustainable technology development. Innov: Manag Pol Pract 6:316–330
Schellnhuber H (2006) Avoiding dangerous climate change. Cambridge University Press
Smith A, Voß J, Grin J (2010) Innovation studies and sustainability transitions: the allure of the Multi-Level Perspective and its challenges. Res Policy 39:435–448
Socolow R et al (2011) Direct air capture of CO2 with chemicals. APS Phys 1:1–119
Torvanger A, Meadowcroft J (2011) The political economy of technology support: making decisions about CCS and low carbon emission energy technologies. Glob Environ Chang 21:303–312
Unruh G (2000) Understanding carbon lock-in. Energy Pol 28:817–830
Vergragt P, Markusson N, Karlsson H (2011) Carbon capture and storage, bio-energy with carbon capture and storage, and the escape from the fossil fuel lock-in. Glob Environ Chang 21:282–292
Virgoe J (2009) International governance of a possible geoengineering intervention to combat climate change. Clim Chang 95:103–119
WCED (1987) Our common future. World Commission on Environment and Development, Oxford University Press

Acknowledgement

The author acknowledges the support of the Canada Research Chairs program.
Additional information: This article is part of a Special Issue on "Carbon Dioxide Removal from the Atmosphere: Complementary Insights from Science and Modeling" edited by Massimo Tavoni, Robert Socolow, and Carlo Carraro. Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Cite this article: Meadowcroft, J. Exploring negative territory: Carbon dioxide removal and climate policy initiatives. Climatic Change 118, 137–149 (2013). https://doi.org/10.1007/s10584-012-0684-1
https://link.springer.com/article/10.1007/s10584-012-0684-1
Purpose of review: To discuss recent applications of artificial intelligence within the field of neuro-oncology and highlight emerging challenges in integrating artificial intelligence within clinical practice. Recent findings: In the field of image analysis, artificial intelligence has shown promise in aiding clinicians with incorporating an increasing amount of data in genomics, detection, diagnosis, classification, risk stratification, prognosis, and treatment response. Artificial intelligence has also been applied in epigenetics, pathology, and natural language processing. Summary: Although nascent, applications of artificial intelligence within neuro-oncology show significant promise. Artificial intelligence algorithms will likely improve our understanding of brain tumors and help drive future innovations in neuro-oncology.
https://pubmed.ncbi.nlm.nih.gov/31609739/
PROBLEM TO BE SOLVED: To provide a multi-dimensional data display method suited to analyzing multi-dimensional data. SOLUTION: One or more dimensional elements are selected from the target multi-dimensional data. When three dimensional elements are selected, two of them are arranged on crossing X and Y axes, and each axis is divided into regions by the members of its dimensional element. The scale of each region is assigned in proportion to the number of data records belonging to that member, so that the regions for the displayed members together span the display size. In each cell formed by the intersection of the X and Y regions, a pie chart is displayed in which the one remaining element is distinguished by pattern. In this way, four dimensions in total (the three dimensional elements selected from the multi-dimensional data, plus the number of records) can be expressed in a visually appealing form through areas and patterns. COPYRIGHT: (C)2006, JPO&NCIPI
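The layout arithmetic the abstract describes can be sketched in code. The following is a minimal Python sketch, not the patented implementation; the function names (`axis_spans`, `pie_fractions`) and the record-of-dicts data model are assumptions for illustration. It covers the two computations: dividing each axis into regions proportional to record counts, and computing the pie-chart fractions for the remaining element in each X/Y cell:

```python
from collections import Counter

# Hypothetical helper names; the patent abstract names no functions.

def axis_spans(records, dim, total_size=1.0):
    """Divide one display axis into regions, one per member of `dim`,
    with each region's width proportional to the number of records in
    that member (the abstract's proportional scale assignment)."""
    counts = Counter(r[dim] for r in records)
    total = sum(counts.values())
    spans, start = {}, 0.0
    for member in sorted(counts):
        width = total_size * counts[member] / total
        spans[member] = (start, start + width)
        start += width
    return spans

def pie_fractions(records, x_dim, y_dim, pie_dim):
    """For each (x, y) cell formed by the two axes, compute the share of
    each member of the remaining dimension, to be drawn as a pie chart
    whose wedges are distinguished by pattern in that cell."""
    cells = {}
    for r in records:
        cells.setdefault((r[x_dim], r[y_dim]), Counter())[r[pie_dim]] += 1
    return {cell: {m: n / sum(c.values()) for m, n in c.items()}
            for cell, c in cells.items()}
```

Actually rendering the pie charts into the resulting cells (for example with matplotlib's `Axes.pie`, placed at each cell's center) is left out; the sketch covers only the layout computation.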
The basic elements of Nature are FIRE, EARTH, AIR, WATER. These elements are addressed in some manner in all Earth-based Spiritual practices. The four elements are recognized and addressed in magickal and divination systems such as Astrology and Tarot. The Chinese recognize 5 elements, by adding the properties of Metal to the mix. In the holistic healing arts the element of Ether is included. Elemental energies and forces are not to be confused with DIRECTIONAL energies and forces, even though they often work in tandem and are employed together in magickal practices. In Ceremonial Magick the element of AEther is added as Spirit to the mix, and the powers of the Elements are assigned directional energies and associations with Elemental beings, Guardians or Angelic beings/forces. On this page you will find a series of articles addressing the Elements from different Earth-based and Magickal traditions and practices.

THE FIVE ELEMENTS OF MAGICK Within many spiritual and magickal traditions, the elements - Earth, Air, Water and Fire - play a foundational role. Understanding how these energies express themselves and what correspondences are associated with these energies helps us to focus and release the energy 'flavors' with intention during meditation and ritual. EARTH is the element of secrecy, deep wisdom, possessions, creation, perseverance, birth and death. It is considered passive and receptive. The direction and quarter of the Circle which corresponds with Earth is the North, the place of endings. Earth corresponds to the bone structure of the human body. The season is Winter and the concept within magick which corresponds to Earth is Secrecy. Hidden beneath the Earth are many treasures – jewels, minerals, oil – which must be sought out with forethought and labor. So it is with the magickal secrets that the element Earth hides; only through dedication and patience can these secrets be brought to light, yet they must always be protected and guarded.
The tool of the Earth is the Pentacle which is used both for drawing in and for projecting energy. The Pentacle is also used for defense as a shield and the image can be used as a magnet to draw manifestation of a goal. The magickal Cords are also of Earth and are used to bind energy. The Sabbat which corresponds to Earth is Beltane, the festival of growth and fertility. Beltane falls in the fixed earth sign of Taurus, which is the sign associated with sensuality, acquisition, resources and practicality. The archangel of Earth is Uriel (pronounced Urh-ree-el), also called the Lord of Awe, who presides over protection and strength. This angel is visualized in robes of olive and russet, bearing a Pentacle. Gnomes are attributed to Earth and are seen in very small human form – your basic elf. Their king is Gheb (also known as Geb or Gob, as in goblin). Earth elementals are the most mischievous and love a good practical joke. They prefer the forest, the crags, the heaths and the caverns in which to live, although with the human population cutting down on gnomes’ habitats, the more sociable enjoy being around sensitive and sympathetic human beings and will share living quarters. Gnomes love jewels, gold, interesting rocks, moss and living plants of all kinds. Their favorite scents are resins and woodland smells such as patchouli and vetivert. AIR is the element of the intellect and communication. It is considered an Activating element. The direction and quarter of the Circle that corresponds to Air is the East, where the day begins. Air is the springtime, beginnings and, in the foundational concepts of magick, it is Faith. Faith (confidence) in magick comes from knowledge (intellect) and from understanding the processes of ritual and applying them with sure result. The tool of Air is the Wand, which is used to direct and channel energy in magick. 
In its application for healing, where it is used to absorb the energy of the universe and channel that energy into the subject of the healing, it corresponds to the caduceus of Mercury, a god of Air – you will notice that this same symbol is used by physicians today. The Sabbat which corresponds to Air is Candlemas (also known as Oimelc or Imbolc). This Sabbat falls in the fixed air sign of Aquarius, the astrological sign noted for intellectualization, eccentricity, originality and genius. The archangel of Air is Raphael, the archangel of healing, teaching and travel. He is visualized on a mountaintop, in robes of yellow and purple (the colors associated with the element Air) which blow gently in the wind. In magick, angels represent invisible forces, powers ascending and descending between the Source of all things and the world. An Archangel, then, is the angelic entity in its highest and most pure force. Sylphs are the elementals attributed to Air and are similar to human in form, although they are transparent and have lovely, delicate wings. They travel on the wind and you can hear them talking and laughing as they drift in and out of the trees. Paralda is the name of their king. As you might imagine, they are very articulate and logical. Sylphs prefer the mountaintops where the air is the thinnest. They are related to the nervous system in the human body. Their favorite scents are the mints and light flowery essences such as tulip. They love the sound of bells and windchimes. FIRE is the element of Will and Passion and, like Air, is considered an Activating element. The direction and quarter of the Circle that corresponds to Fire is the South, which we associate with heat. The season is, of course, summer – the time of growth and culmination of that growth. In the conceptual foundations of magick Fire is the Magickal Will. The force of the magickal will enables the magickian to carry through with goals, plans and dreams.
Will serves as the impetus for the magickal energy which is sent forth to act on the physical plane to manifest the desire. Fire is the application of the ideas of Air into physical reality as we perceive it. The tool of Fire is the Athame, the ritual dagger, which is used to inscribe the magickal sigils in the air during ritual; to describe the circumference of the magickal Circle; to banish phantasms and defend against them; and to heal via the act of removing holes in the aura through cauterization on the astral plane. The Sabbat which corresponds to Fire is Lammas, the Sabbat dedicated to Lugh the Sun God. Lammas falls in the fixed fire sign of Leo, which is characterized by dignity, authority, creativity and flamboyance. The Archangel of Fire is Michael (pronounced Mee-kee-al), the Archangel of authority, victory, initiative and splendor. Michael is visualized in robes of scarlet and green, bearing before him a flaming sword. Salamanders are attributed to the element of Fire. They are not considered a part of the physical flame as such, but the essence which enables the flame to burn. Naturally, they are most active in the summer months and geographically prefer the hotter regions. When they live in cold places, salamanders reside in the hearth. They are full of passion and enthusiasm and for this reason are sometimes considered dangerous as their unpredictability can be disconcerting. However, they are actually very generous and warmhearted, if treated with the respect due them. Salamanders relate to the heart in the human body as well as the circulatory system. Their king is called Djinn. Salamanders love the smell of burning wood and spicy odors such as cinnamon and nutmeg. Candles, lanterns and mirrors are attractive to them. WATER is the element of love, intuition, emotion, fertility, compassion, understanding and imagination. It is considered a passive and receptive element. The direction and quarter of the Circle that corresponds to Water is the West.
The season is Autumn and in the conceptual foundations of magick, Water is imagination. Imagination begins in our dreams, the language of our subconscious minds, and without it our rituals would be dry and emotionless. Imagination allows us to see what might be and is therefore creativity, fertility and change. The tool of Water is the Chalice which is used to contain the water of purification and exorcism. It can hold the ritual wine and be used for seeing past, present and future in the practice of scrying. The chalice represents wisdom, transformation and receptivity. In the legends of ancient times, the Grail of Immortality was sought by the valiant for its life-giving and regenerative powers as well as for the knowledge it brings. The ritual Cauldron is also of the element of Water and the stories that surround it (as the cauldron of Dagda, the cauldron of Cerridwen and the cauldron of Baba Yaga) reaffirm the theme of wisdom, life, transformation and regeneration. The Sabbat which corresponds to Water is Samhain/Hallowmas, the festival of death, change and regeneration. Samhain falls in the fixed water sign of Scorpio, known for occult ability, psychic power, death and regeneration. The Archangel of Water is Gabriel, known as the Prince of Change and Alteration. This angel can be seen on the Judgment card of the Tarot, blowing the horn of fertility and authority. Gabriel is visualized in robes of clear blue and orange, holding a chalice from which torrents of water spill. Undines are the elementals of Water. They are extremely graceful and seductive. Undines are similar to humans in form and the majority of them are female. They will impart psychic knowledge and ability. If you work with them in this area, be sure to give them extravagant and appealing gifts in return, for they have feelings that can be easily bruised. They correspond to the human digestive system and Necksa is their King. Undines live in the oceans, the rivers, springs, creeks and raindrops.
Their most beloved scents are cool ones - camphor, cucumber and citrus fruits such as lime. They delight in beautiful shells, silver jewelry, boxes for their treasures and flutes. SPIRIT is the Fifth Element within magick. It is the ether, the divine miasma that gives life force to earth, air, fire and water. Spirit imparts the spark that enlivens each element and allows it to be expressed throughout the universe. It balances the energies of each element while giving each its own 'intelligence' of how and where to act within our world. When a Circle is cast and each of the four elements is called upon to impart their energies within a working, Spirit is called forth to fill the space and rise into the world bringing manifestation.
http://www.wsla-co.org/elements.html
Young ice kitsunes are used as messengers, scrying through ice crystals and delivering short missives via frost patterns. They thrive on the constant challenges and intellectual stimulation, and even their play is a form of practice. These creatures are intense investments, for a Magi who would bond with one must not only constantly engage with their charge, but must also provide for their magical growth. The hatchlings sleep in small igloos, and require a highly magical environment to thrive. If either of these needs is not met, an adult ice kitsune will use its powers to whisk away the hatchling in a flurry of snow, leaving behind only an icy paw print. Bred over thousands of years from their wild kitsune cousins, these highly sought-after creatures have exchanged some of their innate stealth for strong elemental powers and increased intelligence. Several breeds were developed, each specializing in a separate field, with fire and ice the most famous and popular. Although their abilities allow them to manipulate energies of their element with ease and grace, they do not generate magic themselves. Instead, they act as a channel for magic energy. They can imperceptibly siphon energy from their environs and even other creatures to fuel spells. Likewise, their magi can channel spells through them, increasing their power and precision. These creatures are an invaluable aid to any magi studying the forces they specialize in.
https://magistream.com/creature/13822410
Biliary cirrhosis is a health condition that happens when the bile ducts are slowly destroyed. When the bile ducts are damaged, the body has trouble with digestion and harmful substances can build up inside the liver. Tufts Center for Liver Disease specializes in treating liver diseases such as cirrhosis, fatty liver, viral hepatitis, primary biliary cirrhosis and primary sclerosing cholangitis.
https://www.tuftsmedicalcenter.org/patient-care-services/Conditions-We-Treat/B/Biliary-Cirrhosis
The Jugurthine War was a key war in the final century of the Roman Republic. Like the Americans in Iraq, Rome assumed that their war against Jugurtha, King of Numidia (a nation in north Africa), would be a cakewalk. They believed that Numidia was a nation of savages with a bizarre religion. They assumed that their own “shock and awe” attacks by the superior legions would decapitate and destroy the “evil doer” Jugurtha. They believed that in order to liberate the Numidians of their primitive ways, they had to impose the civilized will of the Roman state on this backward nation. Rome never expected that the Numidians would wage an insurgent war against their Roman occupiers. This war ended up dragging on for almost a decade. And in the end, it showed the depravity of the ruling party (the ultra-conservative republican Optimate party), which was sending the Roman Republic on its way to tyranny, empire and ruin. In 148 BC, the King of Numidia, Masinissa, died. The Roman proconsul, Publius Cornelius Scipio Aemilianus, had been given authority by Masinissa to divide Masinissa’s estate. He divided it between Masinissa’s three sons, Micipsa, Gulussa, and Mastanabal. Soon after, Gulussa and Mastanabal died, leaving Micipsa as the sole King of Numidia. Around the year 134 BC, Micipsa sent Jugurtha (who was Masinissa’s grandson, but the son of another Numidian) to Spain with Scipio Aemilianus. Scipio was fighting the Celtiberians, who lived in a part of what is now Spain. Jugurtha was able to raise an army to help Scipio. Because of the valor of Jugurtha and his army at the Siege of Numantia, Scipio was able to win his war against the Celtiberians. While fighting for Rome, Jugurtha worked alongside his future enemy, Gaius Marius. Jugurtha not only learned the superior Roman style of fighting, but he also learned of Rome’s weakness for money and thus bribery.
Jugurtha described Rome as “urbem venalem et mature perituram, si emptorem invenerit” (“a city for sale and doomed to quick destruction, if it should ever find a buyer”). When Jugurtha returned to Numidia, Micipsa adopted Jugurtha, and decided to include Jugurtha in his will. After the fall of Numantia, Jugurtha returned home with a letter from Scipio addressed to his uncle; in it, the commander praised Jugurtha’s exploits and congratulated Micipsa for having “a kinsman worthy of yourself, and of his grandfather Masinissa” (Sallust Iug. 9). On this recommendation the king formally adopted Jugurtha and made him co-heir with his own children. In 118 BC, Micipsa died. He left his kingdom to Jugurtha and his two natural sons, Hiempsal and Adherbal. Shortly after Micipsa’s death, Jugurtha had Hiempsal killed. Adherbal fled to Rome. The Roman Senate sent a commission to Numidia to make peace. Jugurtha bribed the Romans on the commission, and thus the commission gave the better regions of the kingdom to Jugurtha. In 113 BC, Jugurtha took his army and cornered Adherbal in his capital city of Cirta. According to Sallust, Adherbal had the support of the people, but Jugurtha had the support of the best soldiers. A Roman commission was sent to Numidia to forge a new peace. Jugurtha then bribed the Romans on this commission. The Romans thus allowed Jugurtha to storm Cirta, and slaughter Adherbal and his supporters. Because Jugurtha slaughtered a number of Italian business people (including Roman Equites, or “Knights“), the Roman senate declared war on Jugurtha. The Roman Senate sent an army under the command of the consul Lucius Calpurnius Bestia to fight Jugurtha. Bestia decisively defeated Jugurtha. But Jugurtha bribed Bestia, and thus was given unusually favorable terms. The Roman Senate viewed the favorable terms with suspicion, so it summoned Jugurtha to Rome. When Jugurtha arrived in Rome, he bribed two Tribunes, who thus prevented him from testifying.
While in Rome, Jugurtha attempted to have his cousin and rival Massiva assassinated. Because of this, he was expelled from the city and returned to Numidia. In 110 BC, the Roman Senate sent the praetor Aulus Postumus Albinus (who was the cousin of a consul for that year) to defeat Jugurtha. Because Jugurtha bribed key Romans involved in Albinus’ army (who then betrayed Albinus), Albinus was defeated. The Roman Senate then sent the consul Quintus Caecilius Metellus to fight Jugurtha. At the Battle of the Muthul, a young Roman officer named Gaius Marius helped to reorganize Metellus’ legions, which then defeated Jugurtha. But Jugurtha escaped decisive defeat by forcing his army to retreat before it could suffer heavy losses, while the Romans did suffer their own heavy losses. Jugurtha disbanded his army, and had his soldiers mount an insurgency to fight the Roman occupiers. Marius returned to Rome. Dissatisfied with the slow pace of the war under Metellus, the Roman Military Assembly (one of the two Roman legislative assemblies, similar to the US Senate) appointed Marius consul (the Military Assembly, not the senate, appointed consuls). The Roman consuls had similar powers as the US President. The consulship was the highest constitutional office, and the consuls had imperium powers, which allowed them to command armies and conduct wars. The senate didn’t want Marius to be consul, because at this time it was dominated by an ultra-conservative republican party of aristocratic elites known as the Optimates. Marius belonged to the party that opposed the Optimates, the Populares. Partly because the senate didn’t like Marius, and partly because of the increasing difficulty Rome was having in recruiting armies, Marius was forced to raise his own army. Marius took his army to Numidia to fight Jugurtha. But while Marius had been raising his army, Jugurtha allied with his father-in-law, Bocchus, the King of Mauretania. Marius defeated Jugurtha and Bocchus in several key battles.
But much like with the American occupation in Iraq, Jugurtha’s strategy of insurgency warfare against the occupiers rendered all conventional victories irrelevant. Marius was playing a game of whack-a-mole. No matter how many times the Numidians were defeated, Jugurtha’s insurgents would regroup and keep fighting. It became clear that because of this, Rome could not defeat Jugurtha. Marius sent his young Quaestor, Lucius Cornelius Sulla, to Bocchus. Sulla bribed Bocchus, and told him that Bocchus would be given a part of Numidia if he would betray Jugurtha. Bocchus then decided to give Jugurtha to Sulla. Sulla took Jugurtha to Rome, where Jugurtha was strangled in the Tullianum in Rome after marching in Marius’ January 1, 104 B.C. Triumph. The Triumph of Marius (1729) by Giovanni Battista Tiepolo. The inscription in Latin reads “The Roman people behold Jugurtha laden with chains”. The Jugurthine War was over. But in the process, several problems were exposed that would cause Rome serious pain in the future. Republicans in this country love to tell us that money in politics is harmless free speech. But as we saw in the Roman Republic during the Jugurthine War, money can be very corrupting. Rome almost lost the war because of money in politics, and the susceptibility of public officials to bribery. In addition, this war saw the rise of two Romans who would play a key role in the events that directly preceded the fall of the Roman Republic. The first Roman made famous through this war was Gaius Marius. Gaius Marius would later hold the Roman Consulship an unconstitutional 7 times in 21 years (constitutionally, a Roman had to wait 10 years before being reelected consul). The second Roman made famous through this war was Lucius Cornelius Sulla. Sulla and Marius would fight an unconstitutional civil war with each other several years after this war had ended. 
Sulla would illegally march his troops on Rome, and unconstitutionally legalize the mass killing of Marius’ supporters. Marius’ supporters in the senate would unconstitutionally prevent Sulla from fighting a war during one of Sulla’s consulships. Sulla would eventually seize absolute power for himself. Sulla would be the first Roman to be Dictator in almost 150 years. He would also be the first Roman in history to hold the dictatorship without the traditional six month term limit. As dictator, Sulla would illegally change the Roman constitution to make himself and his party (the ultra-conservative republican Optimates) even more powerful. And most importantly, Sulla would set the example (of civil war on Romans, and then the seizing of absolute power) that the future tyrant Gaius Julius Caesar would follow. In the end, the actions taken by key players in the war against Jugurtha would be repeated in the final destruction of the Roman Republic. The future triumvir Pompey would unconstitutionally hold multiple consulships in a short period of time. Crassus, another future triumvir, would illegally bribe politicians to get his way. And the future tyrant Julius Caesar would bribe, unconstitutionally hold the consulship, and become dictator for life (as Sulla had done). It was Caesar’s actions in this regard, as well as the similar actions of his adopted son and heir, Gaius Octavius (later Gaius Julius Caesar Octavianus, the Emperor Augustus) that would once and for all destroy the Roman Republic, and create the Roman Empire.

2. Moving naked over Acheron
3. Upon the one raft, victor and conquered together,
4. Marius and Jugurtha together,
5. one tangle of shadows.
6. Caesar plots against India,
7. Tigris and Euphrates shall, from now on, flow at his bidding,
8. Tibet shall be full of Roman policemen,
9. And the Parthians shall get used to our statuary
10. and acquire a Roman religion;
11. One raft on the veiled flood of Acheron,
12. Marius and Jugurtha together.
13. Nor at my funeral either will there be any long trail,
14. bearing ancestral lares and images;
15. No trumpets filled with my emptiness,
16. Nor shall it be on an Atalic bed;
17. The perfumed cloths shall be absent.
18. A small plebeian procession.
19. Enough, enough and in plenty
20. There will be three books at my obsequies
21. Which I take, my not unworthy gift, to Persephone.

From Homage to Sextus Propertius, Canto VI by Ezra Pound
https://www.executedtoday.com/tag/optimates/
Patient:
  i. As patient/surrogate: This can be ourselves (patient) or our dependents, parents, spouses, etc. (surrogates). The person who is ultimately responsible for seeking out and making decisions about care.
  ii. As purchaser: Note that the patient may not actually be paying directly for healthcare. Payment may come from insurance (provided by government, parents, employer, etc.) or direct from the provider.
Provider:
  i. Individual-Medical:
    1. Hierarchy of single-person providers of health care: doctor (e.g., general practitioner or specialist), nurse (physician assistant), EMT/paramedic, non-traditional/traditional healer (e.g., acupuncturist, priest), social worker, family member or friend (e.g., home health care), self-medication.
      a. Note on Self-Medication:
        i. This is always your first line of defense in health care: self-diagnosis and self-treatment or self-medication. For example, if you skin your knee you put on a band-aid; if you have a headache you take aspirin.
        ii. Importantly: this is a huge part of health care that we don’t know how to value.
      b. Note on Hierarchy, generally:
        i. As you move down the list you see decreasing degrees of formal, Western medical training. There are also varying degrees of regulation.
        ii. There are also qualitative differences among individuals within categories.
    2. Consider: Who is your provider?
      a. Is she young (don’t go to the hospital in July, when all the new interns and residents arrive) or is she old (many docs don’t keep up with the medical literature as they practice, so their knowledge and techniques may be out of date)?
      b. Is she a specialist, and why did she choose that specialty (was it because of need or for other, e.g., financial or availability, considerations)?
        i. [See pg. 29, “Variations in Physician Practice”]
        ii. See also the “Variation in Practice” section below.
  ii. Institutional Providers:
    1. Hospital:
      a. Generally: a large sector of health care (about 40% of health care dollars) and heavily (tax) subsidized.
      b. Two Levels of Care: inpatient (e.g., ER) and outpatient (e.g., ambulatory care, clinics).
      c. Two Kinds of Care: emergency and non-emergency.
      d. Three Kinds of Hospitals: non-profit, government, for-profit.
        i. Note that non-profit hospitals comprise roughly 2/3 of hospital beds, AND
        ii. are generally divided into three sub-categories (academic hospitals, religious non-profit, and standard non-profit).
    2. Specialty Hospitals:
      a. Generally: a large trend in the last ten years in which specialty groups spin off to start their own hospitals.
      b. Possible Motivations: search for a profit center, “cream-skimming” (taking the best, i.e., least sick, patients away), or seeking to extract more money from the main hospital.
      c. Examples: dialysis centers, radiology centers, cardiology centers, etc.
    3. Nursing Homes:
      a. Distinct from Hospitals:
        i. Insurance Coverage: most hospital care is covered by insurance, whereas most nursing home care requires separate long-term care insurance for coverage.
        ii. Profit: two-thirds of all hospitals are not-for-profit (note: same for hospices), whereas two-thirds of nursing homes are for-profit.
      b. Two Types of Nursing Homes:
        i. Skilled Nursing Facilities (SNFs)
        ii. Long-Term Care Facilities (LTCs)
          1. Note: this is often what is traditionally thought of as a “nursing home.”
          2. When the elderly get sick they frequently go from a hospital (acute care) to a SNF (for specialized, short-term treatment) and then to a LTC (a “nursing home”).
    4. Home Care / Assisted Living: hire somebody to come into your home and provide care or assistance.
    5. Hospice or Palliative Care: non-therapeutic care; only palliative care for patients with terminal conditions.
    6. Other Institutions:
      a. Schools, jails, or mental health institutions. Note that provision of care might be mandatory in confinement settings.
      b. Independent standing clinics (“Minute Clinic”): provide ambulatory care in between doctor and hospital. Often adjunct institutions in Wal-Mart, CVS, etc.
      c. Experimental Clinical Trials: conducted by academic institutions, drug companies, etc. These provide some medical benefit, but it is not exactly clear what or how much.
  iii. Other-Medical Providers:
    1. Examples: drug or pharmaceutical companies, diagnostic labs, device manufacturers, etc.
    2. Legal Implications: additional regulation by the FDA (note: the FDA only regulates marketing, not drug development or price) of providers in this category.
    3. Insurance Implications: insurance often contains separate pharmaceutical drug coverage, although not always (e.g., Medicare Part D).
    4. Labor vs. Capital Providers: like any other production process, health care needs capital (raw materials) and labor (workers). These providers supply a large portion of the capital (along with traditional medical institutions), in part by employing individual providers (labor; doctors and nurses).
  iv. Non-Medical Providers:
    1. Examples:
      a. Family, friend, self: this can go here or in the individual-medical provider category.
      b. Public Sanitation: e.g., 80 years ago the engineer who installed a sewer system in a local town was doing more for your public health than your doctor.
      c. Environmental Health Organizations: changes in environmental law and environmental quality have a huge impact on health.
      d. Nutrition: important and underappreciated. Can be addressed at schools, home, work, through the government, etc. Can be thought of as improving the internal environment. (See Lewontin.)
      e. Occupational Health: safety of the working environment, as distinct from the physical / external environment.
      f. Education: a huge factor in explaining differences in health status between groups is education level.
        i. Note on Correlation vs. Causation: Malani doesn’t know why education is correlated with better health. But even if it is purely a correlation effect, and not evidence of causation, it is still important.
    2. Health as an End or Goal:
      a. Malani does not view health as an end in itself.
        There are other things, apart from health, that are worthy goals.
      b. People repeatedly make tradeoffs between marginal increases in health and other goods (e.g., convenience, cost, quality of interaction, etc.). Sometimes health is a very important priority (e.g., when you’re feeling ill) but, often, it isn’t the top priority (e.g., choosing risky behavior: skiing, drug use, sex, etc.).
      c. Query: What is an end in itself?
        i. The economist (Malani) answers “utility,” which might roughly translate to happiness.
        ii. (me) A more nuanced answer might include, for instance, principles of distributive justice.
      d. Query: What is the end of the U.S. healthcare system?
        i. Trick question: there is no single healthcare system in this country.
        ii. Generally speaking, some groups / elements of the system view healthcare as an end in and of itself, while others don’t.
          1. Incentives play an important role.
          2. E.g., the number of children born on Dec. 31st (for tax purposes) or the paucity of weekend births (because doctors are at home).
      e. Query: So should we invest more resources in things other than healthcare, a sector that is already roughly 16-20% of our economy? Should there be more rationing of healthcare dollars in light of other important goals?
        i. See the section on healthcare reform.
        ii. See the Norm Daniels reading. (My take on Daniels:) He argues for rationing or limit-setting by making rationing decisions explicit and promoting openness and accountability. Is this compatible with a certain bounded-rationality problem: people don’t act as if healthcare is an end in itself, but when asked, especially in certain situations (e.g., at the bedside, when sick), they claim that it is the end, and that rationing is not appropriate?
Insurer-Payer Outside of the Triangle:
  i. Note that overlaid on the patient-provider-insurer triangle is another public policy structure. Individuals and organizations at this level are not directly providing care, but they influence the health care system and are important to understanding it.
  ii. Examples:
    1. WHO: worldwide healthcare policy setting.
    2. NIH: may be involved in the development of technology or knowledge that other organizations commercialize and make available. Similarly with research universities.
    3. Advocacy Groups: lobby legislatures (federal and state) to reapportion healthcare spending; influence media and public perceptions of healthcare.
The Role of the Government (in the patient-provider-insurer triangle):
  i. As a patient:
    1. May act as a surrogate (purchasing healthcare if you are a government employee, e.g., in the Army, etc.).
    2. May also impact the relationship between you and your surrogate (e.g., child or parent), you and your insurance provider, or you and your healthcare provider through various regulation, licensing, etc.
    3. May restrict or regulate healthcare procedures / treatments (e.g., medical marijuana, abortion?, etc.), which limits self-medication options.
  ii. As a provider: may be a direct provider (e.g., the VA system, NIH research and development) but, more importantly, it regulates providers (e.g., doctor licensing, Certificates of Need for hospitals, tax rules, FDA regulation of pharmaceuticals and medical devices, environmental and workplace regulations, etc.).
  iii. As an insurer: creates limitations on what insurance benefits can (or must) and cannot be provided. Note that this is also, in effect, a limitation on patients (availability of healthcare procedures).
  iv. Result: the government impacts all three corners of the triangle, as well as the relationships along the legs of the triangle. And the government is also operating above the triangle, at the public policy layer.
  v. Miscellaneous Notes:
    1. In Medicaid the government acts as the insurer and the patient (in the sense that it buys / pays for the insurance) at the same time.
    2. The government’s role could certainly be different. It could, in theory, actually replace certain nodes: e.g., insurance, as in the Canadian system, or insurance and providers, as in the U.K.
Central Analytic Themes of the Course
Adverse Selection:
    a. (me) Is there something counterintuitive about a solution that aims to solve a problem by restricting and discouraging the distribution of information?
    b. (Malani) If the point of insurance is, ultimately, to respond to risk aversion, then we might want to encourage pooling (by limiting information) to reduce the risk of being adversely selected against. Even if we could have perfect information, we might not want it.
  iii. Note: to really eliminate adverse selection, the best method is to go with a single-payer system, which creates just one giant pool (as in the UK or Canadian systems).
Externalities:
  i. Generally: something done by one actor that has an impact on other actors, causing them to care about the original actor’s behavior. Externalities can be both positive and negative.
  ii. Examples:
    1. Infectious disease: positive externality: a vaccine prevents others from getting the disease; negative externality: risky behavior might cause others to become infected.
    2. Pollution: positive externality: curbing pollution can have widespread benefits for current and future generations; negative externality: failing to curb pollution produces costs that aren’t paid for by the producer, or incorporated into the price of the good or service.
    3. Insurance Pooling: positive externality: risk is reduced by pooling groups together; negative externality: risky behavior uses up the insurance pool’s funds and drives up subsequent premiums (the moral hazard problem).
    4. Altruism: positive externality: encourages us to help the sick and uninsured; negative externality: encourages free-riding, which can raise costs for others, including altruists. (Note: only those uninsured who cannot afford insurance are deserving of altruism.)
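The pooling arithmetic behind adverse selection can be sketched numerically. This is a hypothetical illustration, not anything from the course: the `premium_spiral` helper and all dollar figures are invented. The insurer prices at the average expected cost of whoever remains in the pool; low-risk buyers exit whenever the premium exceeds their willingness to pay, which pushes the average cost (and hence the premium) up further, the classic unraveling that pooling (or a mandatory single-payer pool) is meant to prevent.

```python
def premium_spiral(costs, willingness_to_pay):
    """Price at the average expected cost of the remaining pool;
    buyers exit when the premium exceeds what they will pay.
    Returns a list of (premium, pool_size) per round until stable."""
    pool = list(range(len(costs)))
    history = []
    while pool:
        premium = sum(costs[i] for i in pool) / len(pool)
        history.append((premium, len(pool)))
        stayers = [i for i in pool if willingness_to_pay[i] >= premium]
        if stayers == pool:  # no one else exits: the pool is stable
            break
        pool = stayers
    return history

# Five hypothetical buyers: expected annual cost, and the maximum
# premium each is willing to pay.
costs = [1_000, 2_000, 4_000, 8_000, 16_000]
wtp   = [1_500, 3_000, 5_000, 9_000, 20_000]

for premium, n in premium_spiral(costs, wtp):
    print(f"premium ${premium:,.0f} with {n} buyer(s) left in the pool")
```

In this toy pool the premium climbs from $6,200 (all five buyers) to $16,000 (one buyer) as the healthier buyers drop out. With one mandatory pool, as in the single-payer note above, no one can exit, so the premium would stay at the first-round average.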
  iii. Uninsured: provides a link between insurance pooling and altruism.
    1. Link: when the uninsured show up at the ER for treatment, they take treatment funds away from the insured (negative externality of insurance pooling). But why do we pay for this emergency care in the first place? Altruism.
    2. Counter-argument: if the uninsured receive insurance, they might wind up in your insurance pool. At that point their risky behavior might not increase your tax burden (negative externality of altruism), but it might drive up your insurance premiums even more (negative externality of insurance pooling).
  iv. Government Intervention:
    1. Generally: the government may use regulation, taxation, and legislation to control risky behavior and curb negative externalities (or incentivize behaviors with positive externalities). Note that this is a very paternalistic approach to healthcare.
    2. Examples: alcohol or cigarette taxes (negative) or tax credits for hybrids (positive).
    3. Counter: if the government doesn’t impose the right regulations, it might inadvertently create inefficiencies or negative outcomes. (See the “Pay-for-performance” critique.)
Cost Effectiveness:
  i. Medical Productivity:
    1. Query: if we are spending 16-20% of our GDP on healthcare, are we getting a sufficient return on our investment?
    2. One answer (David Cutler, Harvard economist): on average we are spending less than $100,000 per additional year of life expectancy, and $100K/yr is a good benchmark figure for life-years.
      a. Critique: that figure is only an average. It says nothing about specific treatments or therapies, which might be horribly inefficient.
      b. Generally: this debate is becoming increasingly important as healthcare costs continue to rise as a percentage of our GDP.
  ii. Competing Risks:
    1. Generally: investing in technology to cure one medical problem may only uncover other or further medical problems.
    2. Example: in the 1970s Medicare started covering End-Stage Renal Disease (ESRD). Since then, there has been a marked increase in the number of people diagnosed with ESRD.
      a. Explanation: increased diagnosis (due to availability of coverage) might explain a small part but, generally, fewer people are dying from other diseases (e.g., heart disease) and so they reach a stage of life where ESRD kicks in. Normally heart disease will kill you before kidney failure, but if we are preventing heart disease then ESRD becomes more of a problem.
      b. Result: consider what other health risks are present when calculating the expected benefit of any healthcare technology.
  iii. Variation in Practice:
    1. Generally: different providers do different things to address the exact same person with the exact same symptoms / ailment. (See the “Two Schools” section.)
    2. Relevance to cost: is one approach better than its alternatives? If so, is variation in practice defensible, or should there be an acknowledged “best practice” (with appropriate exceptions)?
      a. But be careful to keep in mind that what seems to be variation in practice might represent unobserved variations between individuals that warrant different practices.
    3. Note: variation in practice across income groups, racial groups, gender groups, etc. is a different problem.
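The medical-productivity query above reduces to a simple ratio: incremental spending divided by incremental life-years gained, compared against a benchmark such as the ~$100K-per-life-year figure cited from Cutler. A minimal sketch follows; the treatment numbers are invented placeholders, not Cutler's data.

```python
def cost_per_life_year(extra_spending, life_years_gained):
    """Incremental cost-effectiveness: dollars per additional life-year."""
    return extra_spending / life_years_gained

BENCHMARK = 100_000  # the $/life-year benchmark figure cited in the outline

# Hypothetical treatment: costs $250,000 more than the alternative
# and adds 3 expected life-years.
icer = cost_per_life_year(250_000, 3)
print(f"${icer:,.0f} per life-year; within benchmark: {icer <= BENCHMARK}")
```

As the critique in the outline notes, an economy-wide average ratio like Cutler's can mask specific treatments whose own ratio is far above the benchmark; the ratio is only informative when computed per treatment or therapy.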