On 25 October 2012, the European Parliament and Council of Ministers adopted a new Regulation on European Standardisation (Regulation (EU) No 1025/2012). ANEC welcomes the Regulation's commitment to support the continued financing of the representation of the public interest - including consumer protection - in the European Standardisation System, and to set political expectations for strengthening the voice of societal interests in the standards development process.

Although European standards are the foundation on which the Single Market for products has been built over the past 20 years, CEN, CENELEC and ETSI are private associations in whose work the societal interest may not be naturally represented. Through the Standardisation Regulation, the European Institutions have recognised the value that societal stakeholders can bring to the development of European standards, especially noting the will of the first and second Single Market Acts to extend the formal use of European standards to the field of services and to broader European public policies.

Although the national delegation principle in CEN and CENELEC brings strength to the European Standardisation System, consumer representation in standardisation is weak or fragmented in many countries, as confirmed by the Access to Standardisation Study carried out for the European Commission in 2008/2009. ETSI does not provide a special category for consumer representation and, although ANEC is a full member, it is treated as an ordinary Belgian member with a single vote in the ETSI process (compared with the 45 votes awarded to the largest multinational companies).

In its Annex III, the Standardisation Regulation recognises categories of European association that represent stakeholders often absent from the process at national level, or who have special economic and political value. Consumers, as represented by ANEC, are one such category. This recognition enables the continued public funding of these associations by the European Union, and their participation in European standardisation directly at the European level.

The Standardisation Regulation also sets an expectation that the participation of the experts of these associations in the standards development process be made “effective”. This follows some orientations of the European Parliament's Resolution of 21 October 2010 on the future of European standardisation, which we helped to shape. Such a strengthening is long overdue, given that our participation has always helped to achieve the broadest possible relevance of European standards. In anticipation of the adoption of the Regulation, ANEC, ECOS and ETUI have been working with CEN and CENELEC to define what “effective participation” could comprise.

Without effective consumer representation in the writing of standards, products will not be as safe, as interoperable, as accessible or as sustainable as they could be. Of course, it is not in the interests of business to ignore the needs of consumers if it wants to sell its products and services, but our experience is that business tends to focus on the needs of the mainstream or “average” consumer - where costs are often lowest and profits highest - to the detriment of vulnerable consumers: those who are young, old or disabled. When standards are used for legislation or other public policy objectives, it is essential that they take account of the needs of all consumers.
https://www.anec.eu/about-anec/the-standardisation-regulation
Tzolk’in Date: 7 IX. Ruler is White. Blocker is Indigo. Gatekeeper is Jupiter. Meaning or common usage: Jaguar, death. Associated Qualities: jaguar, shaman, integrity, heart-knowing, night seer, magician, alignment with divine will, Santo Mundo (sacred Earth), ego, willfulness, strategy, prophet, daykeeper, ruling spirit of jungles, plains, mountains. “Saving the world.” If you are reading this, maybe you’ve thought about how you can save “el Mundo”–El Santo Mundo–the Sacred Earth. But of course, although we do abuse her, it is not the Earth who needs our help so much as WE, her teeming masses, who need to help each other. Watching events unfold in Egypt, our hearts cannot help but throb with a longing for the People there. What can each of us do? Ix speaks of the Earth’s feminine power—the force behind earthquakes and storms, the wealth in her veins of iron and gold. In the human dimension, Ix is the wealth of an intuitive, enlightened consciousness, the prowess of an expert hunter, as is the jaguar, a master of the night. To honor Ix and connect to the Jaguar in one’s self is to connect to the Mother Earth, draw upon her powers, and use them as a healing force. My youngest son is an Ix, a Jaguar. Indeed, it’s easy to “see” the Ix in my son–very easy. He is a person of great integrity and has thought deeply about the role he came to play on this, our Santo Mundo. He is 22. He has strategized long and hard and filled a white board with a diagram, planning out his life. Grandmother Pa’Ris’Ha gave him a name long ago, White Winds, and his totem is the Snowy Owl. Ix is a Northern sign with “stealthy Northern energy”—just as the great hunter, the Snowy Owl, is of the white freezing winds of the North Lodge. When we are feeling sorrows or burdens, loneliness or depression, our knowledge of Ix can lead us to lie down upon the Santo Mundo for strength and solace. One time, when I came from a sacred rock lodge and lay upon the Mother’s breast, I heard words that have stuck with me now for years: “I am in the Arms of the Beloved.” How strengthening and reassuring! Each day we journey through an unfolding of revelations, if we but seek. There ARE answers to the thorniest of problems. Here is one that has been taught by Grandmother Pa’Ris’Ha for many, many challenging conditions, and I would suggest that we do this for the People in Egypt. In meditation, surround them in a very pale blue light. Sit quietly and surround them with this healing light of the North Lodge. Be the Jaguar. Be the Shaman. Let the power of Ix, of the Sacred Earth, work through you! We are ALL the Chosen,
https://wellnessandspirituality.net/?p=1081
25 Influential Bloggers That You Should Be Following in 2021

Blogging has become so popular in the last decade or so, especially within the last couple of years. Since people were forced to rethink the way they do business, many businesses have turned to the online world. Blogging has actually been around since the early 2000s, but it increased in popularity about tenfold in the last five years alone. The cool thing about blogging is that pretty much anyone who has a knack for writing can get into this field. And you can do it from anywhere, at any time. You can make a blog in just a few minutes and then you’re good to go. But how can you make money from blogging? It takes a lot of time and determination, and maybe a little bit of luck, too. If you think you’ll get a massive following and make a lot of money from day one, you will be very disappointed. As you will see from the list we compiled, some of these bloggers, who now dedicate all their time to their blogging business, started off doing it as a hobby. Their passion and knowledge about the topics they tackled are what got them where they are right now, and this definitely didn’t happen overnight. Most of them actually make money from advertisements that pop up on their pages. But to get to that point, a lot of work has to be done. If you’re patient and persistent, eventually you will start earning from your blog. Another thing they all have in common: they write about what they love and have a genuine passion for, such as travel. When you truly enjoy what you do it shows, and people like individuals who are happy with their jobs. Now we will take a look at the 25 most influential bloggers that you can get your inspiration from today.

Contents
- 25. Darren Rowse
- 24. Harvey Levin
- 23. Pat Flynn
- 22. Gary Vaynerchuk
- 21. Vitaly Friedman
- 20. Mario Armando Lavandeira Jr.
- 19. Peter Rojas
- 18. Susan Yara
- 17. Michael Arrington
- 16. Brett McKay
- 15. Vani Hari
- 14. Tim Ferriss
- 13. Jonathan Van Ness
- 12. Valeria Lipovetsky
- 11. Leon Ho
- 10. Ryan Schreiber
- 9. Stephen Totilo
- 8. Ellarie Noel
- 7. Jonah Peretti
- 6. Brian Clark
- 5. Christene Barberich
- 4. Rand Fishkin
- 3. Neil Patel
- 2. Pete Cashmore
- 1. Arianna Huffington

25. Darren Rowse
Like many bloggers today, he started off his business as a hobby. At first, he was blogging about the Olympics and photography, one of his favorite pastimes. Eventually he started what is now his main money-maker, Problogger, a website that helps bloggers with handy tips and tricks. He also made an e-book that sells online, and he makes money from affiliate programs and advertising deals.

24. Harvey Levin
An American lawyer, legal analyst and celebrity reporter, Harvey Levin founded the famous website TMZ, which remains the leading celebrity gossip blog. He used his contacts from the industry to create his own brand, and he became very successful at it. It turns out, people love to hear anything about celebrities and their lives.

23. Pat Flynn
Pat Flynn stumbled upon his newfound love for blogging by accident. After losing his corporate job he was trying to figure out how to earn a living and created the website Smart Passive Income in order to provide for his family. Little did he know that the honesty and transparency he shares with his readers would earn him a huge following, and he earns a very decent living from his blog.

22. Gary Vaynerchuk
His love of wine, which he displayed on the video blog Wine Library TV, got him instant internet fame and turned his $3 million a year wine retail store into a $60 million a year wholesale business. He co-founded VaynerMedia as well, a digital marketing agency that collaborates with some very famous brands in the world. All those ventures combined make his brand unique and bring him millions of dollars a year.

21. Vitaly Friedman
Vitaly Friedman founded Smashing Magazine in 2006. It is a blog that provides web development and web design related articles, which millions of readers find useful. He is very passionate about beautiful content and is a skilled writer. His style flows easily, which is one of the reasons his website gets so many visits – an average of 2.3 million viewers per month. Another reason is that the information is accurate and pertinent.

20. Mario Armando Lavandeira Jr.
If you’re living in North America, or anywhere in the western world really, you have surely heard of Perez Hilton. The popular yet controversial celebrity gossip website is run and managed by Mario Armando, and it generates more than two million viewers a month. His main source of income is direct advertisement on the website.

19. Peter Rojas
A wiz when it comes to the online world, he is responsible for two major websites: Engadget and Gizmodo. Engadget earns him about $50 million US a year, and Gizmodo about $5 million. Not too shabby, right? Both blogs cater to the gaming world and offer reviews of consumer electronics, which is a very popular and lucrative business. But most of this income is generated from sponsored ads and direct advertising.

18. Susan Yara
Her blog is about another very lucrative topic that took on immense dimensions in the last couple of years: skincare and beauty. She co-owns the blog Mixed Makeup, another personal blog and a skincare line called Naturium. One of the most influential #skinthusiasts, she does brand comparisons, tests and tries many products that are on the market, and gives valuable tips according to your skin type.

17. Michael Arrington
A former mergers and acquisitions lawyer from California, he found his niche in the startup world. He started his blog TechCrunch.com, where he reviewed up-and-coming tech companies from all over the world. It was a very lucrative blog, which he sold to AOL for the sum of $30 million US. The proceeds from the sale were used to set up his own investment fund, and he invests in startups.

16. Brett McKay
A self-proclaimed “manly” man, Brett decided to start writing about men and their interests back in 2008. He founded and is the editor-in-chief of an online lifestyle magazine that caters to the male population, The Art of Manliness. He and his wife run their business online full-time. The blog’s topics range from business and fashion to finances and health, with a new approach to current lifestyle changes.

15. Vani Hari
Vani Hari runs The Food Babe, an online blog about food investigation and clean eating. She does not offer recipes or content about cooking or baking; she simply promotes food safety and natural eating alternatives. An activist as well, she has influenced some fast food chains into bringing healthier alternatives to their menus while using better quality ingredients.

14. Tim Ferriss
Known for his popular blog Four Hour Workweek, Tim is a pioneer in the “lifestyle design” area, and he has sold several books on the topic. His massive following got him instant fame, and he is one of the most influential bloggers out there. He takes credit for popularizing the “internet lifestyle”, and we should be grateful to him for that. Now we can work from our laptops from literally anywhere in the world because of his innovative ways.

13. Jonathan Van Ness
A very vocal activist on sensitive issues such as HIV awareness, mental health and the LGBTQ+ community, Jonathan Van Ness created his blog and podcast, Getting Curious with Jonathan Van Ness, about those very topics. His unconventional approach to journalism and social media has increased his popularity and earned him true influencer status.

12. Valeria Lipovetsky
One of the most influential fashion bloggers and TikTok stars is Valeria Lipovetsky. Valeria Inc is her platform where she gives tips on beauty, clothing and style. On her blog she writes about her experiences as a model, her personal style, and who influenced her fashion sense, among other similar topics. She also has a jewelry line called Leia Collections, is a mom of two, and creates content on health, parenting and wellness.

11. Leon Ho
San Francisco-based Leon Ho is the CEO and founder of the very popular blog Lifehack. The blog teaches productivity and business hacks that improve your numbers, and he takes his inspiration from his own experience as a manager at a large corporation. He started this venture with the intention of sharing tips and tricks that would help others, and it took off and became a major resource for experts and newbies in the business alike.

10. Ryan Schreiber
Pitchfork, an indie music blog, was founded and is run by Ryan Schreiber, who started the project in 1995. At the time, the internet was so new and scary to some people that many thought he was nuts. Now the blog is owned by the media conglomerate Conde Nast. He did pretty well for himself, having started a small blog as a youngster straight from his Minneapolis bedroom.

9. Stephen Totilo
If you consider yourself a “nerd”, you have surely heard of Kotaku, a blog for “nerd” culture. Stephen Totilo blogs about video games, a dream come true for many gamers. His interest in the subject and obvious passion come through in the words he presents to us. Covering a niche industry that has grown tremendously in the last few years has made him one of the most influential bloggers on the topic.

8. Ellarie Noel
Her blog about being a single mom, Ellarie, got her a large following, and she has a huge following on Instagram as well. Her focus is on the mom lifestyle and her story as an unemployed single mom who turned things around and expanded her content creation into a lucrative business. She also writes about and reviews products, travel and lifestyle, and is a well-known beauty blogger. The way she turned her own life around inspires and encourages many women to follow suit.

7. Jonah Peretti
Jonah Peretti used to work with a group of people, among whom you might recognize the name Arianna Huffington, to create the Huffington Post. Then he branched off and created Buzzfeed for fun, as an experiment of sorts where people could connect over popular content they were interested in. The pop culture phenomenon expanded to dimensions he never dreamed of, and his blog is his main source of income today.

6. Brian Clark
He founded and is the main contributor to Copyblogger, a blog that helps digital content creators with copywriting and marketing skills. He created this website in 2006, when many people were still new to content creation. Although he did step down as CEO of Copyblogger, he still makes regular content contributions to it. He branched out to other ventures such as Unemployable, an online community he founded that helps out entrepreneurs and freelancers. He also owns Further, a newsletter about midlife personal growth.

5. Christene Barberich
You must have heard of Refinery29, the online media and entertainment company that focuses on empowering women. Christene Barberich is the face behind the idea, founded in 2005 in an apartment she shared with friends. The company she co-created is now headquartered in New York and has since taken over the internet. At the moment it is owned by parent organization Vice Media and employs over 500 people. They write on topics such as self-help and personal interests that are very relevant in our everyday lives.

4. Rand Fishkin
A college dropout who was working for a family-run web development company, he started his blog about SEO, or Search Engine Optimization, in 2004. His company helps many clients in the search industry with tools and educational resources, and it also creates SEO software that generates millions of dollars a year. With the profits from his subscriptions and software services he acquired several other companies.

3. Neil Patel
The son of immigrants who grew up in California, Neil Patel became one of the most influential and successful entrepreneurs in his field. He worked hard at various jobs, but his knowledge of software and great marketing skills are what got him to where he is today. With his team he created KISSmetrics, an analytics platform for people starting their own businesses, but then ventured out and made his own blog, where his focus is on SEO content.

2. Pete Cashmore
Another one of the most influential blogs is Mashable, a news resource appreciated by tech-lovers from around the world. Founder Pete Cashmore started this website at only 19 years old, and he has the reputation of a tech guru. But when he decided to concentrate on media resources, that’s when the blog really became what it is today. He blended the two topics into his website and took it to another level.

1. Arianna Huffington
Media legend Arianna Huffington is a Greek-American journalist who created the Huffington Post in 2005. Since its creation, the news-media platform has taken over the internet. She built her media empire from scratch during tumultuous times, so we have to give her major props for that. While being editor-in-chief of the blog she also founded Thrive Global, another media group that focuses on wellness and self-help. She is one of the most remarkable women in online news.

There you have it, the list of the 25 most influential bloggers of 2021. Do you agree with our choices?
https://luxatic.com/25-influential-bloggers-that-you-should-be-following-in-2021/
Looking for something to do? Check out any Powassan area museum

POWASSAN — Summer is here and a visit to a local museum is a good way to learn about your community. The Commanda General Store Museum on Highway 522 at Commanda is open for the summer season. The hours are 11 a.m. until 4 p.m. from Wednesday to Saturday. The 2018 fundraising focus is ‘stabilizing our 133 year-old museum structure.’ The Nipissing Township Museum, Highway 654, Nipissing, is also open for the summer season. The hours are from 11 a.m. until 3 p.m., Wednesday to Sunday. The next special event will be ‘Pie Day’ on Sunday, July 15. The Callander Bay Heritage Museum and Alex Dufresne Gallery at 107 Lansdowne St., Callander, is open from 10 a.m. until 5 p.m., Tuesday to Saturday. I have just been notified that the opening and ribbon cutting at 250 Clark, the new municipal office and community centre scheduled for June 29, has been postponed. In the meantime, the building is open for service. You will find the reception area just inside the front door. The Gap day camp program for children that started on July 3 is being operated out of 250 Clark. Register your child at www.powassan.net or call 705-724-2813. The Chisholm United Church, Chiswick Line, will host a Celebration of Founding and Current Families this Sunday, July 8, at 7 p.m. Historical displays will be open at 6 p.m. Refreshments will be served. The Powassan United Church Basement Bookstore has extended its hours on Tuesday and Thursday to 10 a.m. until 3 p.m. and on Saturday, 9 a.m. until noon. Free children's and teen books in both French and English are available. Adult books and DVDs are $1.00 each or $10.00 per bag (bags provided). Use the side door to access the bookstore. All proceeds are donated to charity.
At an exhibition by Charlie Sofo the room sheet is crucial. It is a way of decoding the underlying narratives of the objects, actions or rituals that inform the works. Phrases such as Wedges Used to Secure Wonky Cafe Tables, or ‘museum blinds opened daily’, assume a poetic status as adjunct works. Incorporating adjustments to existing architecture as well as found objects, video and photography, Sofo’s recent exhibition at the entry to the Living Museum of the West is the culmination of a three-month residency where the artist gathered information through research and ambient observations. The Museum itself, situated on the Maribyrnong River in Pipemakers Park, is founded on the value of community-based history making. The Museum’s collection policy places emphasis not just on material artefacts that relate to local history—such as the distinct artillery, meat processing and pipe making histories of the area—but to oral histories, film footage, photography and paperwork. Largely thanks to the sustained commitment of artist Kerrie Poliness as a volunteer, there has been regular interest from the contemporary art community in projects in and around the museum over the past twenty-eight years. The entry to the museum has not previously been used for such a project, and as Sofo is well aware, there is a vividness to art projects free from the ghosts of previous exhibitions. Through the artist’s simple act of opening the museum blinds, the exhibition provides a radical new view upon entry, incorporated into the exhibition as Open Blinds. The act draws attention to both the existing architecture and to the natural world beyond the windows. The work displays the kind of simple clear-sightedness with which we view a house—and all of the attendant flaws—before moving in. Time-based rituals form a large part of Sofo’s output. Installed on the floor of the Museum, along with a number of other pieces, is a piece of laminated MDF to which three eye drops are re-applied every five hours, precisely the time the liquid takes to evaporate (and also the exhibition’s daily opening hours). Elsewhere, on Sofo’s blog, we discover how, on a residency in Prato, the artist recorded the fluctuating water levels of a bidet. Each of these works, composed from narratives that relate to daily rituals, fosters an alertness to the way that these seemingly mindless rituals might be composed of infinitely interesting variations. Through the titles and material descriptions, it is easy to get the sense that Sofo is constantly interrogating the role and value of art. He is also unafraid to subvert, to question the value of, say, costly public art. Sofo segues comfortably into the genre of Land Art with a sequence of ten found tree stumps, recently lopped to create sightlines to a large and expensive public sculpture installed within the park. This question of the relative values of art also extends to a piece of brown cardboard that bears blackened imprints of shoes. The cardboard is a common example of makeshift doormats placed in front of restaurants in nearby Footscray during wet weather. It falls within an interesting history in Conceptual Art of works that bear the marks of pedestrian activity. These include Richard Long’s A Line Made by Walking (1967) and, earlier in 2016, Ian Milliss’s toothpaste-green foam mattress wedged in the entry point to Seventh Gallery in the exhibition Bedshed.1 In Milliss’s case the floor piece served as a record of movement through the gallery. 
Sofo’s work instead opens up discussion about the ethics of sourcing found materials. Much, actually most, of his work employs found objects, and so the artist regularly faces conundrums about the acquisition of potentially useful but so-called worthless objects. It is telling that Sofo acquired the piece of cardboard when it had drifted away from the restaurant, at the beginning of its transition from doormat to rubbish. Whilst each work is drawn from a unique context, the overall effect of the exhibition is of an entire composition, balanced by varying degrees of scale across one horizontal and one vertical plane. There is a unifying aesthetic across Sofo’s practice, but it is hard to pin down: scatterings of grain, objects pressed under glass, a monochrome here and there. More than something to look at, Sofo’s power lies in the way he re-directs our gaze upon the rest of the world.

Charlie Sofo, A Gap Opens Up, 2016. Living Museum of the West, Melbourne. Photograph: Christo Crocker.

1. Bedshed was curated by Jessie Bullivant and included the artists Christo Crocker, Brian Fuata, Ian Milliss, James Parkinson, Gemma Weston, Makiko Yamamoto. It was held at Seventh Gallery, Melbourne, 5 – 20 May 2016.
https://www.eyelinepublishing.com/eyeline-86/review/charlie-sofo-gap-opens
# Triazolam

Triazolam, sold under the brand name Halcion among others, is a central nervous system (CNS) depressant tranquilizer of the triazolobenzodiazepine (TBZD) class, which are benzodiazepine (BZD) derivatives. It possesses pharmacological properties similar to those of other benzodiazepines, but it is generally only used as a sedative to treat severe insomnia. In addition to its hypnotic properties, triazolam's amnesic, anxiolytic, sedative, anticonvulsant, and muscle relaxant properties are pronounced as well. Triazolam was initially patented in 1970 and went on sale in the United States in 1982. In 2017, it was the 289th most commonly prescribed medication in the United States, with more than one million prescriptions.

## Medical uses

Triazolam is usually used for short-term treatment of acute insomnia and circadian rhythm sleep disorders, including jet lag. It is an ideal benzodiazepine for this use because of its fast onset of action and short half-life. It puts a person to sleep for about 1.5 hours, allowing its user to avoid morning drowsiness. Triazolam is also sometimes used as an adjuvant in medical procedures requiring anesthesia or to reduce anxiety during brief events, such as MRI scans and nonsurgical dental procedures. Triazolam is ineffective in maintaining sleep, however, due to its short half-life, with quazepam showing superiority.

Triazolam is frequently prescribed as a sleep aid for passengers travelling on short- to medium-duration flights. If this use is contemplated, it is especially important that the user avoid alcoholic beverages and try a ground-based "rehearsal" of the medication to ensure that its side effects and potency are understood before it is taken in a relatively more public environment (as disinhibition can be a common side effect, with potentially severe consequences).

Triazolam causes anterograde amnesia, which is why many dentists administer it to patients undergoing even minor dental procedures. This practice is known as sedation dentistry.

## Side effects

Adverse drug reactions associated with the use of triazolam include:

- Relatively common (>1% of patients): somnolence, dizziness, feeling of lightness, coordination problems
- Less common (0.9% to 0.5% of patients): euphoria, tachycardia, tiredness, confusional states/memory impairment, cramps/pain, depression, visual disturbances
- Rare (<0.5% of patients): constipation, taste alteration, diarrhea, dry mouth, dermatitis/allergy, dreams/nightmares, insomnia, paresthesia, tinnitus, dysesthesia, weakness, congestion

Triazolam, although a short-acting benzodiazepine, may cause residual impairment into the next day, especially the next morning. A meta-analysis demonstrated that residual "hangover" effects after nighttime administration of triazolam, such as sleepiness, psychomotor impairment, and diminished cognitive functions, may persist into the next day, which may impair the ability of users to drive safely and increase risks of falls and hip fractures. Confusion and amnesia have been reported.

In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class. 
### Tolerance, dependence, and withdrawal

A review of the literature found that long-term use of benzodiazepines, including triazolam, is associated with drug tolerance, drug dependence, rebound insomnia, and CNS-related adverse effects. Benzodiazepine hypnotics should be used at their lowest possible dose and for a short period of time. Nonpharmacological treatment options were found to yield sustained improvements in sleep quality. A worsening of insomnia (rebound insomnia) compared to baseline may occur after discontinuation of triazolam, even following short-term, single-dose therapy. Other withdrawal symptoms can range from mild unpleasant feelings to a major withdrawal syndrome, including stomach cramps, vomiting, muscle cramps, sweating, tremor, and in rare cases, convulsions.

### Contraindications

Benzodiazepines require special precautions if used in the elderly, during pregnancy, in children, in alcoholics, or in other drug-dependent individuals and individuals with comorbid psychiatric disorders. Triazolam belongs to Pregnancy Category X of the FDA. It is known to have the potential to cause birth defects.

### Elderly

Triazolam, similar to other benzodiazepines and nonbenzodiazepines, causes impairments in body balance and standing steadiness in individuals who wake up at night or the next morning. Falls and hip fractures are frequently reported. The combination with alcohol increases these impairments. Partial, but incomplete, tolerance develops to these impairments. Daytime withdrawal effects can occur.

An extensive review of the medical literature regarding the management of insomnia and the elderly found considerable evidence of the effectiveness and durability of nondrug treatments for insomnia in adults of all ages and that these interventions are underused. Compared with the benzodiazepines, including triazolam, the nonbenzodiazepine sedative-hypnotics appeared to offer few, if any, significant clinical advantages in efficacy or tolerability in elderly persons. Newer agents with novel mechanisms of action and improved safety profiles, such as the melatonin agonists, hold promise for the management of chronic insomnia in elderly people. Long-term use of sedative-hypnotics for insomnia lacks an evidence base and has traditionally been discouraged for reasons that include concerns about such potential adverse drug effects as cognitive impairment, anterograde amnesia, daytime sedation, motor incoordination, and increased risk of motor vehicle accidents and falls. One study of triazolam found no evidence of sustained hypnotic efficacy throughout 9 weeks of treatment. In addition, the effectiveness and safety of long-term use of these agents remain to be determined. More research is needed to evaluate the long-term effects of treatment and the most appropriate management strategy for elderly persons with chronic insomnia.

## Interactions

Ketoconazole and itraconazole have a profound effect on the pharmacokinetics of triazolam, leading to greatly enhanced effects. Anxiety, tremor, and depression have been documented in a case report following administration of nitrazepam and triazolam. Following administration of erythromycin, repetitive hallucinations and abnormal bodily sensations developed. The patient, however, had acute pneumonia and kidney failure. Co-administration of benzodiazepine drugs at therapeutic doses with erythromycin may cause serious psychotic symptoms, especially in those with other physical complications. 
Caffeine reduces the effectiveness of triazolam. Other important interactions include cimetidine, diltiazem, fluconazole, grapefruit juice, isoniazid, itraconazole, nefazodone, rifampicin, ritonavir, and troleandomycin. Triazolam should not be administered to patients on Atripla.

## Overdose

Symptoms of an overdose include:

- Coma
- Hypoventilation (respiratory depression)
- Somnolence (drowsiness)
- Slurred speech
- Seizures

Death can occur from triazolam overdose, but is more likely to occur in combination with other depressant drugs such as opioids, alcohol, or tricyclic antidepressants.

## Pharmacology

The pharmacological effects of triazolam are similar to those of most other benzodiazepines. It does not generate active metabolites. Triazolam is a short-acting benzodiazepine, is lipophilic, and is metabolised hepatically via oxidative pathways. Its main pharmacological effect is the enhancement of the neurotransmitter GABA at the GABAA receptor. The half-life of triazolam is only 2 hours, making it a very short-acting benzodiazepine drug. It has anticonvulsant effects on brain function.

## History

Its use at low doses has been deemed acceptable by the U.S. Food and Drug Administration (FDA) and several other countries.

## Society and culture

### Recreational use

Triazolam is used nonmedically: recreational use in which the drug is taken to achieve a high, or continued long-term dosing against medical advice.

### Legal status

Triazolam is a Schedule IV drug under the Convention on Psychotropic Substances and the U.S. Controlled Substances Act.

### Brand names

The drug is marketed in English-speaking countries under the brand names Apo-Triazo, Halcion, Hypam, and Trilam. Other names include 2'-chloroxanax, chloroxanax, triclazolam, and chlorotriazolam.
https://en.wikipedia.org/wiki/Alti-Triazolam
In Albert Kahn’s garden: from a sensorial course to a traveller’s (...)

Albert Kahn’s Paris gardens are, like any garden, a perceptible and physical context; they appear to us with colours, smells and sounds, and tactile feelings; these feelings, stored in voluntary or involuntary memory, are a source for imagination.

OBJECTIVES
To stimulate sensorial exploration
To appeal to imagination
To structure languages

SKILLS
To perceive sensations in a reasoned way
To use tools that enable awareness of these sensations
To learn to articulate verbal and plastic language

MATERIALS
“windows” and other variously shaped centring tools
mirrors
notebooks, paper
chalk, pastel crayons
cameras, pencils
scarves
collecting boxes, a press

REFERENCES
Albert Kahn’s garden or views of that garden
Aline Rutily’s « Carnets de jardins » (Garden notebooks)

PROCEDURES

1) List sensations

When you are in a garden, how can you pay more attention to what you perceive rather than to what you know or think you know?

Eyesight: Ask the children to single out details to be observed in a Japanese garden: the outline of the portico or the red bridge that crosses over the pond, the details of the statue of a dragon or the bronze lanterns, the veins in the stone, the line of the stone-paved paths, the dwarf trees pruned the Japanese way, the various very wide perspectives exposed to their gaze. Make them pick out the various shapes in a fruit garden: apple trees, pear trees, plum trees pruned in the shape of a distaff, a spindle, a beaker or a sphere… With their eyes they follow the regular lines of the alleys and lawns of a formal garden, the symmetry of its flowerbeds, the layout of the patches of colour, the details of the architecture of the “Palmarium greenhouse”. Get them to grasp the details of the flowers as they look at them from a short distance, or the patches they make when seen from afar, or the winding lines of a landscape garden… The children make palettes and colour charts with pastel crayons: they pick out the shades of the plants, from the “Blue Forest” to the “Vosges Forest”, the splendid pinks of the Japanese cherry trees, the wide range of the reds of the azaleas, turning purplish-blue or orange, the reds of the maple trees in autumn… They use the mirrors to create surprising effects, to break the scale. Keep traces of these experiments in photographs…

Touch: The children move about in the garden, blindfolded, for a better exploration and comparison of the tactile characteristics of its elements. They bring back, classify, keep and choose elements that will be stored in the pages of the notebook: samples of earth, plant elements fallen to the ground, twigs…

Smell: With the same tools, they list smells: the fragrance of flowers, of some leaves, of fruit, of the earth, of grass.

Sound elements (bird songs, for example): With the same approach, sounds can be recorded as the children walk about, in order to make a catalogue of sounds to explore, identify, qualify…

2) Language structure

During each step, words and sentences are written down in order to qualify the sensations they have listed and to express what everyone feels. Mental images spring from this articulation between various languages; choices are made, and the children create their own personal itinerary, which will structure the realisation of each “travel diary”. 
FOLLOW-ON ACTIVITIES
To gather information about:
the history of Albert Kahn’s garden and of the Albert Kahn Museum
the work of Albert Kahn
garden vegetation (trees, plants with or without flowers…)
the animals which live in a garden

ASSESSMENT
It will measure:
the diversity of what has been collected
the projects of plastic (visual art) realisations
the interest given to Albert Kahn’s work.
https://paysage-patrimoine.eu/spip.php?article652&num_lang=2
Life can throw numerous events at people which can be experienced as difficult and demanding. These periods in life can often feel overwhelming and distressing, leading to hopelessness, anger and frustration. Counselling and Psychotherapy can offer support and guidance in being able to manage your own way through challenging times. The aim of therapy is to enable you to manage relationships, conflicts and life in general, without feeling distressed or alone. Here at CHaT there is a commitment to offering a safe, calm, nurturing and supportive environment, advocating reflection, exploration and an insight into a more relaxed and peaceful way of being. We will provide the space for you to discuss, offload and unburden negative feelings without judgement but with empathy and honesty. By discussing and often resolving personal issues, phobias, negative thought processes and emotional difficulties, you will be able to gain a greater control over your emotions and start to live a more stable and calm life.

My name is David Wright and I have a Master's degree in Interdisciplinary Psychology, which includes Mindfulness and Transpersonal Psychology. I have a degree (with Honours) in Counselling Studies. I also have a Diploma in Cognitive Behavioural Therapy (CBT) and have a wide range of experience of working with mental health issues. I offer a non-judgemental, confidential and empathic way of being. I previously spent 22 years in the Armed Forces, with tours in Northern Ireland, Iraq and Afghanistan, and can work with serving and former members of the military as well as their families or friends who require support.

My name is Amanda Woodhouse. I am a professional, qualified counsellor with a BA (Honours) Degree in Counselling Studies & Diploma in Cognitive Behavioural Therapy (CBT). I have a wide range of experience working with Mental Health & Emotional issues with people aged 16 years & onwards. My previous experience includes counselling for the NHS & an organisation in Grimsby providing Health & Social Care to the community. For the past 3 years I have counselled students aged 16 years & upwards at a local College & University. I offer a non-judgemental & empathic way of being, enabling the client to talk freely & confidentially about any issues or concerns they are having, no matter how big or small.

The aim of Person-centred counselling is to offer individuals an opportunity to understand themselves and develop a sense of self where they can accept how their attitudes, feelings and behaviours can be affected, and hopefully discover their true potential in safe, confidential, comfortable, non-judgemental surroundings, with the therapist being genuine, offering empathy, whilst not giving advice, directions or personal views. Instead, the therapist empowers the client to discover their own solutions to the issues in their lives.

What type of counselling do you provide and which is best for me?
We offer a non-judgemental way of being, enabling the client to speak confidentially about any issues or concerns that they may have. It is the client who is the expert on themselves; therefore, we aim to assist the client in understanding their beliefs and accepting who they are, and we will listen, without giving advice or opinions, and simply help the client to feel accepted and to accept themselves. We provide counselling for individuals on a short- or long-term basis. 
This means you may either engage in counselling for as long as you require or choose to be allocated a set number of sessions with specific goals in mind. When you make contact with CHaT, you can discuss the counselling you prefer, or if you are unsure you can ask when you come in for your assessment. We seek to ensure that everyone sees one of our counsellors within two weeks of enquiry. All our counsellors have a degree in counselling studies, having trained for a minimum of three years. For best ethical practice, our counsellors are all registered with the British Association for Counselling and Psychotherapy (BACP). We offer crisis counselling at short notice, where you will be seen within a maximum of one week for emergency or crisis issues. In the first session you will meet your counsellor for the first time. Sessions last for 50 minutes and take place in a quiet room with just the two of you present. A number of things will be covered, such as confidentiality and session timings. Here at CHaT there is a commitment to offering a safe, calm, nurturing and supportive environment, advocating reflection, exploration and an insight into a more relaxed and peaceful way of being.
https://www.chatcounselling.co.uk/
My practice specializes in the treatment of adolescents, young adults, and adults in the areas of eating disorders, body image issues, anxiety, depression, relationships, women's issues, pregnancy, prenatal, postpartum support and the immediate and lasting effects of trauma. I take an integrative approach to treatment, assessing the needs of each client and going from there.

Debra Linch, Clinical Social Work/Therapist, LCSW (Verified)
I believe the fit between a patient and therapist plays an important role in healing and growth. My approach to therapy begins with respect and validation for your feelings and experiences. I strive to create a safe and comfortable environment where dialogue can flow without fear of judgment. I specialize in treating people struggling with depression, anxiety, addiction and women's issues such as perinatal and postpartum depression. I will work with you to restructure your unhelpful beliefs that cause distress, develop new and healthier coping skills and improve your communication skills, confidence and self-esteem.

Routh Chadwick, Clinical Social Work/Therapist, LCSW (Verified, 1 Endorsed)
I welcome all individuals who resonate with my approach, but some of my specialties are: women's issues, dating and relationships, life transitions, LGBT issues, life coaching, burnout and stress management. It's been a trying couple of years, to say the least. Perhaps you're feeling burned out, uncertain about your next steps, or you just need a place to bring your anxiety and frustrations about life. I believe that everyone has innate resilience and the ability to find solutions, no matter how bad things seem. Often it's when you finally ask for help that the greatest opportunity for growth and change occurs. I can help you move through those difficult places, whether it feels like depression, anxiety or feeling stuck. If you improve one area of your life, the rest will follow.

Siobhán Cassidy, Clinical Social Work/Therapist, LCSW (Verified)
Hi there. Firstly, my name Siobhán is Irish & pronounced like sha-v-on. And I am glad you are here! Mental health is vital for navigating life. And having someone in your corner with expertise, compassion, & independence is a great way to cope. I truly believe meaningful therapy comes when YOU feel a connection with your clinician. So finding a fit for you is vital. I approach everyone as a fellow human & use my bubbly personality, genuine curiosity, & compassion to develop rapport. I am happy to chat in a free consultation call, & can provide resources & referrals if needed.

Alessandra Fabris-Tantawi, Clinical Social Work/Therapist, LMSW, MEd, RYT-200 (Verified, 1 Endorsed)
In a time when things can feel out of control, I can provide you with a sense of stability and hope. Don't let the brokenness of the world discourage you from seeking a curiosity to become the best version of yourself. I am here to support you in your journey to unpack and discover the ways in which you can thrive and come home to yourself. Let's take this breath by breath, step by step; you are already here, you are already on your way.

Katie Kyle, Clinical Social Work/Therapist, LMSW (Verified, 3 Endorsed)
If you're hurting right now, that doesn't mean something is wrong with you. Many people who work with me learn that the behaviors they want to overcome are normal responses to life’s challenges. 
That doesn’t mean that you need to get used to sleepless nights, racing thoughts, panic attacks, or depressive episodes. I’ll help you make sense of why you are responding in these ways, and together we’ll help you find relief. The work we do will involve radical self-acceptance.

Monique E West, Clinical Social Work/Therapist, LCSW-R (Verified, 1 Endorsed)
I believe strongly in the transformative energy of the human spirit. I believe that with self-determination and a knowledgeable, non-judgmental approach one can reach a place of absolute self-fulfillment, genuine inner peace & meaningful, healthy connections with others. Clients have told me that I have a natural ability to create an environment that encourages self-exploration & a safe space. I work with adult ADHD/ADD, offering organizational coaching and impulse-control techniques. I also specialize in treating therapists as clients around issues of professional burnout, stress management and trauma.

Layla Alter, Pre-Licensed Professional (Verified, 1 Endorsed)
I practice holistic, body-centered, trauma-informed psychotherapy. I seek to create a collaborative, safe, creative space to catalyze growth. I have experience working with young adults and adolescents suffering from anxiety, sexual trauma and depression. I utilize modalities with individuals 10 and older that incorporate a mind-body connection to help instill presence, newfound self-awareness, and empowerment. I also incorporate Somatic and Expressive Arts Therapy techniques to facilitate self-exploration and discovery. I take a psychodynamic approach to help you recognize your ability to evolve and grow.

Shayna DelVecchio, Pre-Licensed Professional (Verified)
I hope to help equip you with tools and emotional regulation skills to ease your reaction to stressors, as well as lend a compassionate ear. Sometimes we want solutions and other times, we just want to be heard. I’m here to support you however it feels best for you. Though we cannot choose what life decides to throw at us, what we can control is how we move through life in spite of its difficulties.

Lauren Johnson, Marriage & Family Therapist, LMFT (Verified)
I understand that people find it at times challenging or embarrassing to discuss their sexual lives, and I take pride in providing a judgment-free environment with compassion, empathy, and openness. It is important to me to recognize the diversity of each person and adapt my therapeutic techniques to the unique needs of each client. My ultimate goal for each client in therapy is to be provided with tools and specific techniques to master and utilize during and after completing their work with me in therapy.

KMF Therapy Collective, Marriage & Family Therapist, LMFT, Founder (Verified)
If you, or someone you love, are struggling with substance abuse, past trauma, relationship or communication setbacks, or anxiety and/or depression, it can be overwhelming to fight these feelings alone.

Sara Stein, Clinical Social Work/Therapist, LCSW (Verified, 9 Endorsed)
By working with Sara, you will realize that you hold the power to create your ideal life. Sara’s warm and welcoming presence allows her clients to feel supported and motivated in navigating the different crossroads they encounter. She understands the complexities that come with these moments and eagerly listens to her clients’ perspectives, challenges potential barriers, and helps them build the tools to effectively work through these decisions. 
Using a collaborative and strengths-based foundation, Sara supports clients experiencing everything from anxiety, depression, and self-esteem issues to general challenges in everyday life.

Haley Michno, Clinical Social Work/Therapist, LMSW (Verified, 12 Endorsed)
With a calm demeanor and friendly disposition, Haley has an innate ability to bring a sense of ease to those around her. She is warm, motivating, and empowers her clients to identify and utilize their strengths in any difficult situation. In the moments when you need support, Haley is exactly who you can rely on. From navigating dating and relationships to coping with anxiety or depression, Haley believes that therapy is the most important investment you can make in yourself. She works hand-in-hand with clients to explore and repair patterns that have left them feeling stuck or unfulfilled.

Carly Schlecker, Clinical Social Work/Therapist, LMSW (Verified, 2 Endorsed)
Life is unpredictable and oftentimes the only constant we have is ourselves. Through therapy, I hope to provide you with an expansive toolbox so you can be best prepared for any situation that comes your way. With these tools, I believe you can be an even more successful version of yourself. I am a cognitive behavioral (CBT) therapist specializing in individual, couples, and group therapy for people facing various challenges such as anxiety, depression, OCD, relationship issues, negative thought and behavioral patterns, and more.

Emily Sterns, Clinical Social Work/Therapist, LMSW (Verified, 4 Endorsed)
Emily specializes in supporting individuals who experience self-doubt, difficulty in relationships, anxiety, eating disorders and low self-esteem. She helps her clients to recognize and make the changes necessary to create a life they love. Through several years of clinical experience, Emily has found that therapy is most effective when grounded in understanding, collaboration, and motivation. She prioritizes creating a supportive environment in which individuals feel both safe and inspired to make powerful strides towards a more fulfilled self.

Leah Nordman, Clinical Social Work/Therapist, LMSW (Verified, 1 Endorsed)
Welcome! I believe therapy is a collaborative process and hope to create a welcoming space for partnership and growth. I specialize in working with children aged 5-16, adolescents, women and gender-expansive adults, and couples. I have worked closely with survivors of all ages of gender-based violence and individuals experiencing complex trauma, depression, anxiety, grief, and PTSD. With couples, I partner with all members to address challenges, including communication, disconnection, infidelity, and resentment.

Emma Zucker, Licensed Master Social Worker, LMSW (Verified)
Emma recognizes that at times it can be hard to pinpoint the crux of a problem in order to find a solution, which may lead to feeling defeated, frustrated, or even doubtful. She helps her clients break down these challenges to reduce feeling overwhelmed. During moments of emotional distress, Emma provides rational, grounded thinking that allows clients to work through their situation with clarity and self-assurance.

Angela Ordyniec, Clinical Social Work/Therapist, LCSW, LICSW (Verified)
Would you like to become more self-aware, accept what you are unable to control, and move toward your self-identified values? 
If so, I can offer this support for you related to various human experiences of grief/loss, relationship dynamics and communication, life transition, exploration of identity and sexuality, and inner discomfort including anxiety and sadness. Accepting new patients for teletherapy in New York, Minnesota, South Carolina, California and New Jersey for individuals and couples therapy. Lauren Fuchs Clinical Social Work/Therapist, LCSW 9 Endorsed 9 Endorsed Lauren is the type of provider who shows up to each session authentically and genuinely. By keeping it real, while also empathizing with her clients' unique perspectives, she facilitates an environment where getting vulnerable can actually feel comfortable, not intimidating. She has an outgoing and warm demeanor that makes it easy for clients to feel instantly connected. Clients have shared that in working with Lauren they have felt truly seen and heard as she has exceptional attention to detail and empathizes with the challenges they are bringing into session. Caini Deng Verified 1 Endorsed Verified 1 Endorsed I specialize in helping people who are having trouble coping with the ups and downs of their emotions. Anyone with anxiety, body image issues, grief and loss, trauma, or emotional regulation issues is a match for my practice. As an experienced Dialectic Behavioral Therapy (DBT) therapist, I can help you navigate intense emotions and develop coping skills. Most importantly, instead of challenging your thoughts and feelings, I’ll help you to recognize the beauty of your authenticity and regain your optimism about the future. See more therapy options for Flatiron, New York Women's Issues Therapists How do you encourage a woman to go to therapy? It’s helpful to express concern and love for the person while framing therapy as a tool for improving their life. Offering specific examples of how an individual may be suffering, and what effects it has on them, should be done with compassion and with empathy. It may be useful to devise a game plan—breaking the process down into parts, such as finding a therapist, making appointments, and looking into insurance coverage. How can women prepare for therapy? Women can prepare for therapy by being willing to talk about their past experiences and their private thoughts. Before a session, an individual may want to reflect on how they’ve felt since the last session and what’s happened in their lives. Between sessions, it can be useful to write down notes about their reaction to a given event or how they felt at a particular time. How long does therapy for women’s issues often last? As with any type of therapy, sessions depend on the individual and the challenges they face. Therapy types like Cognitive behavioral therapy (CBT), Prolonged exposure therapy (PET), and Eye movement desensitization and reprocessing (EMDR) can be brief, most often ranging from between 5 and 20 sessions. In some cases, such as for women with eating disorders, residential treatment may be recommended for an individual who is judged to be at high risk of self-harm. For any therapy, the duration will depend on the progress made and results realized. How can women overcome stigma around specific challenges in order to seek therapy? Women dealing with domestic abuse, sexual assault, eating disorders, and any other number of troubles may feel a stigma around seeking help. It’s important to remember that these issues are not uncommon and there is no shame in getting help for them. 
Therapy is confidential, and women can talk about their experiences without fear of judgment. Seeking therapy can be a courageous and liberating act, and an important step toward healing and recovery.
https://www.psychologytoday.com/us/therapists/ny/new-york/flatiron?category=womens-issues
Critic Consensus: No consensus yet. Movie Info: The cast portrays Yitzhak Rabin, Menachem Begin, Esther Cailingold, Levi Eshkol, and Golda Meir. Critic Reviews for The Prime Ministers: Soldiers and Peacemakers — All Critics (7) | Top Critics (4) | Fresh (2) | Rotten (5). The new film gives a "West Wing"-esque behind-the-scenes look at key historical events, but with more cocktail-party anecdotes than analysis. So resolved to tell you something that it never notices it's overtelling you everything. It also suffers from its one-sided perspective, which often fails to provide sufficient informational context. But it nonetheless provides a vividly personal account of the country's travails during some of its most tumultuous times. The film's incessant voiceover narration does often feel like an interminable suppertime history lesson delivered by your favorite poli-sci-obsessed granduncle. The cumulative effect of Soldiers & Peacemakers' straightforward and blunt narration of linear history is a film only slightly less dry than matzoh. His personal insights are fascinating, but you need a good working knowledge of Israel's history since 1948 to place Avner's recollections into a wider context. There are no featured audience reviews or approved quotes for The Prime Ministers: Soldiers and Peacemakers at this time.
https://www.rottentomatoes.com/m/the_prime_ministers_soldiers_and_peacemakers
Is Light a Wave? Some say light is a wave. Some say it is composed of individual particles. Some say it is both. In this post we will look at the logical structure of the historical argument for light being a wave, as put forward by Thomas Young on the basis of his Split-Beam experiment. A passage from my book, The Party Line, discussing Young's experiment reads: "Young took a beam of light passing through a small hole in a window shade and split it into two parallel beams in close proximity by placing a slim card in the middle of and along the axis of the beam. When he did this, he saw that the beam projected a series of darker and lighter bands on a screen. When he blocked either of the halves of the beam, the image on the screen was a continuous blur of light, just as if he had removed the card and allowed the whole beam to shine upon the screen. The phenomenon of the bands of light appeared only when the beam had been split and both halves were allowed to travel to the screen. This was the physical, observable phenomenon calling out for explanation." When discussing the validity of physical theories it is important to distinguish the phenomenon requiring explanation from the actual theoretical explanation itself. Young's explanation of this peculiar behavior of light is based upon his findings regarding water and sound waves. He found that when a (water) wave crest met another crest, the crest doubled. Likewise for troughs: when a trough met another trough, they created a deeper trough. But when a trough met a crest, they cancelled each other out. This he called interference. He reasoned that if light is also a wave, then it should exhibit interference, and this would explain the dark and light bands observed when both sides of the split beam of light are allowed to fall upon the screen. Young's reasoning here would look like: If light is a wave, then the split beam will produce alternating bands on the screen. The split beam does produce alternating bands on the screen. Therefore, light is a wave. But this form of inference is invalid! Take a look at this argument with the same exact form: If it is my birthday, then I will be eating cake. I am eating cake. Therefore, it is my birthday. This is called the Fallacy of Affirming the Consequent (I call it the Birthday Cake Fallacy). One could be eating cake for any number of other reasons. So, if using an invalid inference pattern to justify a belief in a theoretical model, such as the picture of light as a wave, is reason to say that the belief has not thereby been justified, then we would have to say that Young's argument does NOT justify the belief that light is a wave. In my next blog we will look at Einstein's argument that light is a particle. Read more in my book, The Party Line.
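To make the fallacy concrete, here is a small, purely illustrative Python check (my own addition, not part of the original post). It enumerates the four truth-value assignments for P ("it is my birthday") and Q ("I am eating cake") and shows that both premises of the affirming-the-consequent form can hold while the conclusion is false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# P = "it is my birthday", Q = "I am eating cake".
for p, q in product([True, False], repeat=2):
    premises_hold = implies(p, q) and q   # premise 1: "If P then Q"; premise 2: "Q"
    print(f"P={p!s:5} Q={q!s:5} premises hold: {premises_hold!s:5} conclusion (P): {p}")

# The case P=False, Q=True makes both premises true while the conclusion is false,
# so "If P then Q; Q; therefore P" (affirming the consequent) is not truth-preserving.
```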
https://www.dennisdgagnon.com/is-light-a-wave-phenomena/
Blog The Present & Future of Engineering James KenealeyJan 16, 2018Employer Hub 2018 is the Year of Engineering. Adrian Adair, operations director, discusses what the wider industry can do to find the next generation of engineers. It’s an exciting time to work in engineering. Record levels of investment and a host of major infrastructure projects, including HS2, Hinkley Point C and airport expansions, makes it a great time to consider a career in the sector. Public and private sector organisations are pumping billions of pounds into new build and improvement programmes that are helping the UK to cement its reputation as a centre for global engineering excellence. With so many different projects happening right now and a number of disciplines essential to their delivery, there’s a growing urgency for skills. The sector continues to be hit hard by the well-publicised shortage of talent and those working within the sector are actively looking for new and innovative ways to source the next generation of engineers. One company that is making a real impact by attracting new talent and developing the next generation of trailblazers, is Morson International. The UK’s leading technical recruiter supplies thousands of engineers to projects across the globe but this limited talent pool is making candidate-sourcing increasingly competitive. “Many of the major projects that we’re supplying the skills for will reach their peak at similar points and these overlapping schedules put an even bigger strain on the talent needed to deliver on time and on budget as we’re competing for skills within the same pool,” explains Morson International’s operations director, Adrian Adair. “There’s no quick fix to the skills shortage but luckily we’re still at a point in the delivery cycle where there’s enough time to train candidates.” Long-term demands According to data from the University of Dundee, CITB and Experian, the unprecedented demand for skills in order to deliver HS2, for example, will require 4,980 construction operatives, 1,015 designers and 735 project managers each month over the course of the 26-year project delivery period. Adrian continues: “We’re fortunate that the engineering sector provides so many positives that work in our favour when attracting new talent. Investing in infrastructure is a long-term business that brings sustainability in careers and one of the highest starting salaries.” Yet it isn’t just monetary benefits that attract people into engineering. According to a recent survey of 2,000 active contractors by Morson International, candidates are hungry for opportunities that offer lifestyle benefits, particularly flexibility, as well as career progression and prestige in their work portfolio. Adrian explains: “One of the simplest ways to bridge the skills gap is by training the next generation of professionals and skilled workers. We already deliver intermediate apprenticeships through our dedicated training arm, Morson Vital Training, and recently partnered with the National College for High Speed Rail to give our apprentices access to higher level pathways, right through to degree level.” However, Morson and the college alone cannot supply the full industry need. Around 30 per cent of the current rail workforce need extra training to deliver HS2, so the company are also looking at new ways to upskill and attract talent from other sectors that already possess similar skillsets. 
Military means Morson International casts its recruitment net outside of the traditional talent pool by targeting ex-military personnel, in particular. The company has a long history of supporting the British military and is committed to helping the armed forces reintegrate into successful civilian careers. The Morson team is headed up by ex-forces personnel who understand the significant life change of this transition and help service leavers to translate their military experience into employer benefits. The company has more than 500 ex-military personnel working on client projects at any one time and was recently awarded the Employer Recognition Scheme (ERS) Gold Award – the Ministry of Defence’s highest badge of honour for organisations that have signed the Armed Forces Covenant and demonstrate outstanding support for those who serve and have served. The skills debate must also remain a top priority for academics, with more effort needed to encourage young people into STEM related subjects, especially females. Doubling the number of women working in the sector would add an extra 96,000 people to the UK workforce. The latest ‘State of Engineering’ report from Engineering UK found that just seven per cent of engineering apprentices in the North West are female. Furthermore, just one in three 11-16 years olds know what path to take to become an engineer and less than a third (29 per cent) actually know what an engineer does. “Gender stereotypes are established long before teenagers start considering a career,” explains Adrian. “They’re all around us and play a big part in encouraging and discouraging girls into technical jobs. Around half of all female engineers enter the industry through a family connection, showing the importance of role models and the need to challenge the perceptions held by schools, teachers and parents.” Morson International has pledged to double the number of females it has in engineering roles by the end of the decade. Currently, the recruiter has more than 1,800 female contractors in various roles throughout the globe, yet in engineering, the number of females compared to males sits at 7.5 per cent. The company has also partnered with the Girls’ Network and recently helped to launch its Salford division. The Girls’ Network empowers young females from the least advantaged communities to be ambitious and reach their aspirations by matching with a positive female mentor. Morson International plans to replicate the success of the award-winning mentoring programme and launch new divisions in key commercial areas, such as the South West, to help improve female representation and build a diverse pipeline of talent for major projects, like Hinkley Point C. Scholarship support The Gerry Mason Engineering Excellence Scholarship, set up by the late Morson Group founder, recently pledged a further 15 fully-funded engineering scholarships with Salford University as part of its ongoing commitment to develop the next generation of engineers. Worth £9,000, the scholarships enable local young people, who would otherwise be deterred from university by the associated costs, to pursue an engineering degree. Yet since launching in 2015, the Gerry Mason Engineering Excellence Scholarship is yet to attract a female applicant, something the Morson Group and Salford University is eager to change by explaining that a picture of success would one day be an ‘all female cohort’. 
Extending the focus on diversity and inclusion activity beyond gender is another way to address the engineering skills shortage. According to a Forbes global survey, 85 per cent of leaders agree that a diverse and inclusive workforce is crucial to encouraging different perspectives and ideas that drive innovation. Diverse recruitment techniques Adrian continues: "Diversity is so important and brings huge benefits to the learning environment, organisations and projects, and those with inclusivity at the core will outperform their peers. Techniques like unconscious bias training and blind auditioning really make a difference and help to build more diverse teams." Reducing unconscious bias starts with changing the way roles are advertised, to improve diversity and attract new talent. Job listings provide a first impression of a company's culture, and subtle wording choices can have a strong impact on the applicant pool. Language like 'strong and confident', for example, is male-orientated whilst 'collaborative and cooperative' is more female-focused. Adrian explains: "It's about striking a balance with the language used and replacing gendered words with something more neutral. As a global business that supplies talent around the world, we also have to consider any terms that might only be recognised in certain countries, like 'black, Asian and minority ethnic', which is only used in the UK." Blind auditioning also works to challenge traditional beliefs amongst staff and clients. It's easy for bias to unconsciously trickle into the recruitment process. Removing demographic characteristics, however, such as name, gender and age, can help recruiters and clients to focus on a candidate's qualifications and talents (a minimal sketch of this kind of redaction follows at the end of this article). As we look ahead to the future of engineering, Adrian predicts that technology skills, particularly IT and cyber, will become some of the most in demand within the sector. Traditionally, technology professionals and engineers rarely applied for the same roles. Now, the use of technology has become a requirement across all engineering disciplines and has brought a greater synergy in the role specifications for each. Adrian continues: "This competition for skills has led to a more person-centred focus, with emphasis on selecting the right attributes for the role and the company culture. We're seeing greater flexibility in the mandatory skills profile stipulated on job specifications, which is good news for candidates as it provides career fluidity and opens up opportunities to take a new direction, add new skills and widen the choice of industries, locations and roles." With 85 per cent of the jobs that will exist in 2030 having not yet been invented, what is certain is that closing the skills gap is a long-term challenge and that businesses, educators and the government must work together to deliver real change.
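As a purely illustrative aside (mine, not a Morson tool), the blind-auditioning idea described above, removing identifying fields before a reviewer sees an application, can be sketched in a few lines of Python; the field names here are assumptions:

```python
# Illustrative only: redact demographic fields from an application record before review,
# so screening focuses on qualifications rather than name, gender or age.
REDACTED_FIELDS = {"name", "gender", "age", "date_of_birth", "photo"}

def blind_copy(application: dict) -> dict:
    """Return a copy of the application with demographic fields removed."""
    return {k: v for k, v in application.items() if k not in REDACTED_FIELDS}

applicant = {
    "name": "A. Candidate",
    "gender": "F",
    "age": 29,
    "qualifications": ["BEng Civil Engineering", "CSCS card"],
    "experience_years": 6,
}
print(blind_copy(applicant))
# {'qualifications': ['BEng Civil Engineering', 'CSCS card'], 'experience_years': 6}
```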
Alternative energy programs are emerging as a result of the growing need to develop an economy of the future that will rely not only on fossil fuels but also on renewable and clean energy sources. The new generation of engineers that will support this shift in energy production must develop truly multidisciplinary skills and be able to respond efficiently to the various aspects of alternative energy technology. "Fuel Cells for Portable Electronics" is a new course conceived as part of the alternative energy curriculum. The initial experience in teaching this course is presented in this paper. It underscores potential challenges arising from the fact that engineering students join the graduate program in alternative energy from a variety of engineering backgrounds and with inconsistent knowledge of basic chemistry. The paper reports differences in student abilities to understand the fundamental electrochemistry concepts that are crucial for the subsequent introduction of more complex and practical fuel cell design and evaluation methods. A cursory comparison of test results revealed a clear dependence on student demographics. A qualitative conclusion was drawn recommending proper monitoring, exchange of experiences, and possible modification of prerequisites. Furthermore, a simple mnemonic tool was presented as an effective method for teaching electrochemistry to engineering students. Introduction The main objective of this paper is to raise awareness of the challenges of teaching energy-related courses, such as those dealing with fuel cells, to engineering students with variable prior exposure to chemistry. Furthermore, the paper discusses novel and entertaining methods for overcoming the lack of fundamental knowledge of electrochemistry among students with little or only rudimentary understanding of electrochemical principles, while not sacrificing the ultimate outcomes of the course, which are to provide practical, industry-ready skills. These methods introduce the equally challenging concept of designing an engineering course whose overall quality is expressed through a complete, interconnected understanding of the main objectives rather than the fragmented knowledge uptake typical of students with poorly defined foundations. The paper also reflects experience from working with a limited student population and can only claim qualitative importance and informative character. As alternative energy education becomes more important and a "mainstream" concentration for many institutions, the observations and methods presented here could gain validity and pave the way for a more "awakened" approach to teaching alternative energy.
https://peer.asee.org/experiences-and-teaching-tools-in-alternative-energy-education
SOFTWARE: Photoshop PREREQUISITES: District Computer Skills Requirement met. Digital imaging using raster image editing and/or image creation software: scanning, resolution, file formats, output devices, color systems, and image-acquisitions. No class on 2/28, 3/14. SOFTWARE: Photoshop PREREQUISITES: District Computer Skills Requirement met. Digital imaging and graphic design using raster image editing, scanning, resolution and file formats. No class on 2/26, 3/14.
https://schedule.dcccd.edu/Spring/Instructor/0018916
Homocysteine – elevated levels, a heart health risk! Most of us know or have heard about cholesterol levels and their relationship to our cardiovascular system, but few of us have any idea about the importance of homocysteine levels. What is homocysteine? Homocysteine is an amino acid produced in the body during the metabolism of methionine; it is a by-product of the methylation pathway. This reaction is dependent on nutrients such as folate and vitamin B12, and homocysteine can also be broken down via a pathway which requires vitamin B6. If these pathways receive adequate amounts of the nutrients they require then homocysteine levels do not rise, but if any one of these nutrients is not available then levels may rise. Ideal homocysteine values are between 5 and 10 umol/L – the level appropriate for you will depend on other risk factors you may have present; ideally the reading is below 7.5 umol/L. What are the health risks associated with high levels of homocysteine? Cardiovascular disease – hyperhomocysteinaemia is believed to cause problems with the cells which line the inside of our blood vessels, causing narrowing and hardening of the arteries and therefore diminishing the flow of blood. Parkinson's disease – L-Dopa, a drug given to treat Parkinson's, affects the methylation pathway, resulting in increased production of homocysteine. Alzheimer's disease – people with elevated homocysteine levels have nearly double the risk of developing Alzheimer's disease. Pregnancy – high levels are known to be related to many adverse pregnancy outcomes including birth defects. Studies have also shown that high levels relate to many other health conditions, e.g., depression, osteoporosis, headaches and macular degeneration. What factors contribute to elevated homocysteine? Deficiency or low levels of the required nutrients. These include vitamin B12: this vitamin is only found in animal products, so nearly all people who follow a vegan diet (a diet which excludes animal products), or who have conditions which cause malabsorption such as coeliac disease or low stomach acid (vitamin B12 requires intrinsic factor to be absorbed from the stomach into the bloodstream), can easily have inadequate levels of this vitamin. Another nutrient commonly associated with elevated homocysteine is folate; levels of this nutrient may be low due to a genetic variation in the MTHFR gene, which can be tested for easily and inexpensively. Also, patients with existing conditions, e.g., end-stage renal disease, heart transplant recipients or hypothyroidism, may have increased levels. Use of some medications can also contribute, for example lipid-lowering drugs, anti-convulsants, androgens and anti-rheumatic drugs. Testing for hyperhomocysteinaemia! Testing your serum homocysteine level is a simple fasting blood test through a laboratory, which is eligible for a rebate under Medicare if your GP feels it is warranted; otherwise the out-of-pocket expense to you is under $40! A high reading may warrant further testing to differentiate which vitamin is involved. This may include one or more of the following, depending on your particular health history: "active vitamin B12", red blood cell folate, MTHFR mutation, urinary pyrroles, intrinsic factor antibodies or parietal cell antibodies. A health practitioner who understands homocysteine will be able to direct you on where to start. What factors lower homocysteine? 
Dietary factors have a particularly strong influence on hyperhomocysteinaemia, so understanding the relationship between homocysteine and B vitamins becomes important and effective in treatment. Most people respond to appropriate nutrient treatment regardless of the cause. Anti-homocysteine factors include vitamins B6, B9, B12 and B2, serine and zinc. It is well advised to consider testing, not only if you have a family history of heart or neurological conditions but also if you are preparing for pregnancy. The results of treatment may well prove far-reaching!
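As a purely illustrative aid (my own addition, not the author's, and not medical advice), the reference figures quoted above can be turned into a small helper that reports where a fasting serum result sits relative to the cited 5-10 umol/L range and the sub-7.5 umol/L ideal:

```python
# Illustrative only: classify a fasting serum homocysteine reading (umol/L)
# against the ranges quoted in the article (5-10 umol/L, ideally below 7.5).
def classify_homocysteine(level_umol_per_l: float) -> str:
    if level_umol_per_l < 5:
        return "below the quoted 5-10 umol/L range"
    if level_umol_per_l < 7.5:
        return "within range and below the 7.5 umol/L ideal"
    if level_umol_per_l <= 10:
        return "within the quoted range but above the 7.5 umol/L ideal"
    return "above the quoted range (discuss further testing with a practitioner)"

for reading in (4.2, 6.8, 9.1, 13.5):
    print(reading, "->", classify_homocysteine(reading))
```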
https://naturalhealthmedicine.com.au/elevated-homocysteine-heart-health-risk-levels-checked/
- Emotions motivate our behavior. Emotions prepare us for action. - The action urge of specific emotions is often “hard-wired” in biology. - Emotions save time in getting us to act in important situations. - Emotions can be especially important when we don’t have time to think things through. - Strong emotions help us overcome obstacles—in our minds and in the environment. Emotions Communicate to (and Influence) Others - Facial expressions are hard-wired aspects of emotions. - Facial expressions communicate faster than words. - Our body language and voice tone can also be hard-wired. - Like it or not, they also communicate our emotions to others. - When it is important to communicate to others, or send them a message, it can be very hard to change our emotions. - Whether we intend it or not, our communication of emotions influences others. Emotions Communicate to Ourselves - Emotional reactions can give us important information about a situation. - Emotions can be signals or alarms that something is happening. - Gut feelings can be like intuition—a response to something important about the situation. - This can be helpful if our emotions get us to check out the facts. - Caution: Sometimes we treat emotions as if they are facts about the world: The stronger the emotion, the stronger our belief that the emotion is based on fact. (Examples: “If I feel unsure, I am incompetent,” “If I get lonely when left alone, I shouldn’t be left alone,” “If I feel confident about something, it is right,” “If I’m afraid, there must be danger,” “I love him, so he must be OK.”) - If we assume that our emotions represent facts about the world, we may use them to justify our thoughts or our actions. This can be trouble if our emotions get us to ignore the facts.
https://mindfullyhealing.com/what-emotions-do-for-you/
What is an Astronaut? Also known as: Cosmonaut. In 1958, the National Aeronautics and Space Administration (NASA) adopted the word astronaut (meaning “sailor among the stars”) for the men and women they would train to go into space. The Soviet space agency came up with a similar term, cosmonaut (which means “sailor of the universe”) at about the same time. An astronaut is an individual trained to pilot and/or travel in a spacecraft, work in space, and do activities related to human space exploration. While space flight may now seem routine, every trip into space can be a walk between success and disaster. Therefore most of an astronaut’s career is spent undergoing extensive training. What does an Astronaut do? In the early days, the job description of an astronaut was basically that of being an observer – someone who would view and document what was happening. It didn’t take long for NASA to understand that human interaction would be required. Today, two types of astronauts are selected for space flights: Mission Specialist Astronauts – these astronauts work with pilots to conduct experiments, launch satellites, and maintain spacecraft and equipment. Their background can be in engineering, science, or medicine. They can also work as astronaut educators, inspiring students to consider joining the US space program. Pilot Astronauts – these astronauts serve as space shuttle and international space station pilots and commanders. They are responsible for the crew, the mission, the mission success and the safety of the flight. The Johnson Space Center provides a number of simulators and facilities to prepare the astronauts for their work in space, such as a neutral buoyancy simulator, which simulates weightlessness on earth, and a 200′ long and 40′ deep pool where astronauts train for spacewalks underwater. When in orbit, most of the time is spent in the craft or space station. At times, a spacewalk is required to make repairs, or to deploy a satellite, and the astronaut must wear a space suit, or an EMU (extravehicular mobility unit) for protection. Most missions last two to three weeks, but long duration missions may run as long as half a year. Training for long duration missions is very arduous and takes approximately two to three years. Basic Training for Astronaut Candidates: Astronaut candidates report to the Johnson Space Center (JSC) in Houston, Texas, which has trained more than 300 U.S. astronauts and 50 astronauts from other countries in its fifty year history. As well, more and more Americans now train at Star City, a cosmonaut training facility near Moscow (especially since the end of the U.S. space shuttle program in 2011). Basic training is the first phase, lasting two years. The candidates learn about vehicle and space station systems, and most of the training takes place in the classroom. Key disciplines that may prove to be helpful in their work in space are studied, such as meteorology, engineering, space science, and earth sciences. Survival training must also be completed outside of the classroom (military-water-and-land-survival), in order to prepare for an unplanned landing back on earth. The candidates must become scuba certified, and must also pass a swimming test; they must swim three lengths of a 25-meter (82-foot) pool without stopping, and then swim three lengths of the pool in a flight suit and tennis shoes with no time limit. They must also tread water continuously for 10 minutes while wearing a flight suit. 
Both the scuba certification and the swimming test must be completed within the first month of training. Second Phase Training Candidates may be selected to become astronauts once basic training is complete. During the second phase, the trainees are grouped with experienced astronauts, and with their help become proficient in a variety of activities related to pre-launch, launch, orbit, entry, and landing. The experienced astronauts also share their experiences and knowledge, becoming mentors and advisors to the trainees. Advanced Mission Training The advanced mission training phase (lasting ten months) is where the astronauts receive their crew and mission assignments. They focus on exercises, activities, and experiments directly related to their mission, and familiarize themselves with the power tools and other special devices they will use during their mission. What is the workplace of an Astronaut like? An astronaut is a civil servant, and as such, is an employee of the federal government. As a civil servant, astronauts have to go to training sessions, write reports, and attend meetings, similar to any other office worker. When in orbit, an astronaut will spend most of their time in the craft or space station, occasionally having to do a spacewalk to make repairs, or deploy a satellite, etc. Further Reading - Daily Life in Spacechannel.nationalgeographic.comOrbiting 240 miles from the surface of Earth, day-to-day life aboard the International Space Station is often a mystery to terrestrials. The station is a faint glimmer that appears in the sky for a few minutes at a time—if you happen to be looking up as it passes. From inside, it’s another story. - How Astronauts Workscience.howstuffworks.comSay the word “astronaut” and you’ll conjure up visions of heroes and heroic feats: Alan Shepard and Virgil Grissom successfully completing suborbital trips; John Glenn orbiting Earth aboard Friendship 7 in a historic five-hour flight; Neil Armstrong stepping down from the lunar module ladder onto the moon’s surface; and Jim Lovell stabilizing the Apollo 13 spacecraft after an explosion a little more than 55 hours into the flight. - NASA www.nasa.govNASA stands for National Aeronautics and Space Administration. NASA was started in 1958 as a part of the United States government. NASA is in charge of U.S. science and technology that has to do with airplanes or space. - An Astronaut Reveals What Life in Space is Really Like www.wired.comThere’s no way to anticipate the emotional impact of leaving your home planet. You look down at Earth and realize: You’re not on it. It’s breathtaking. It’s surreal. It’s a “we’re not in Kansas anymore, Toto” kind of feeling.
https://www.knowasiak.com/tag/brain-dominator/
In this article we will not consider the well-known indicators of investment efficiency (NPV, IRR, etc.), estimates that in most cases are quite complex and require lengthy preparation before they can be included in a business plan. Instead, the article gives a number of operational methods for carrying out a quick assessment of an investment. This paper describes a method of evaluating investments based on cost reduction. Note that the techniques described here are extremely useful at the initial stages of an investment, when it is necessary to "estimate" the likely economic benefit of the investment being considered. To begin, we select investments related not to the implementation of large-scale projects (which require the preparation of a project feasibility study), but to the ongoing activities of the company: renewal of fixed assets, increasing productive capacity, and the promotion and advertising of goods. The question is how to calculate the commercial viability of the proposed investments in terms of cost reduction. The cost-reduction criterion is used to evaluate two types of investment: updating equipment and increasing production capacity. These indicators are used as aids in investment calculations when creating a business plan or feasibility study. In this case, the following principle applies: an investment associated with upgrading equipment (with production volume unchanged) will be effective when the cost reduction derived from the change provides the necessary compensation. Example: The company needs to assess, for a future business plan or feasibility study, the acquisition of more modern equipment worth 300 thousand rubles, which will be used for 5 years. Amortization of the new equipment is 60 thousand rubles per year (i.e. the equipment is fully amortized over five years). The cost of maintaining the new equipment is 40 thousand rubles per year. The profit tax rate is 20%, giving a tax saving of 12 thousand rubles per year (60 thousand rubles * 20%). The current equipment can be sold for 150 thousand rubles, or it can work another 3 years, after which it will be replaced. Amortization of the existing fixed assets is 50 thousand rubles per year. Maintenance costs of the existing fixed assets are 60 thousand rubles per year, with a tax saving of 10 thousand rubles per year. Savings on costs = (60 + 10) – (40 + 12) = 18 thousand rubles per year. Annual rate of return on the savings = cost savings / (investment – income from sale of old equipment) = 18 / (300 – 150) = 12%. Evaluation of the investment: the decision to acquire the fixed assets may be taken in the case where a yield of 12% is sufficient for the company.
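The same back-of-the-envelope calculation can be written out in a few lines. This sketch simply mirrors the article's own figures and formula (it is not a discounted-cash-flow measure such as NPV or IRR):

```python
# Quick cost-reduction check for an equipment upgrade, mirroring the article's example.
# All figures are in thousands of rubles, taken directly from the text above.
new_price = 300            # purchase price of the new equipment
old_resale_value = 150     # what the current equipment can be sold for
old_maintenance, new_maintenance = 60, 40   # annual maintenance costs
old_tax_saving, new_tax_saving = 10, 12     # annual profit-tax savings (20% of amortization)

# Annual savings on costs, computed exactly as in the article's formula.
annual_savings = (old_maintenance + old_tax_saving) - (new_maintenance + new_tax_saving)

# Simple annual rate of return on the net cash outlay.
net_investment = new_price - old_resale_value
rate_of_return = annual_savings / net_investment

print(f"Annual savings: {annual_savings} thousand rubles")   # 18
print(f"Simple rate of return: {rate_of_return:.0%}")        # 12%
```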
http://www.wallstreetproject2010.org/rapid-assessment-of-the-investment-reduce-costs/
How AI Can Stop Wildlife Poaching Interview with Shahrzad Gholami, Ph.D. candidate in computer science, University of Southern California A core theme of this Forbes AI issue is people—and how artificial intelligence (AI) is impacting the workforce, how companies are using the technology to attract and retain employees, and how businesses can leverage AI to build stronger, more diverse teams and organizations. Yet AI’s impact can be felt outside company walls as well. We sat down with Shahrzad Gholami to learn how she is developing AI applications for social good. Gholami is currently completing her doctoral studies at the University of Southern California (USC), is a member of Teamcore, a research group within USC’s Viterbi School of Engineering, and is focusing on the challenges of wildlife conservation. Here’s a more detailed look at Gholami’s project to stop wildlife poaching, and her recommendations on how AI researchers can connect with projects that benefit society. Shahrzad Gholami Why is AI a good tool for social good? Many social problems in the real world involve looking at millions of data points, and it’s hard for humans to draw conclusions from that much data and make optimal decisions. AI can help us do those computations faster and more accurately. It can also help us get beyond pure theory when we’re trying to solve real-world problems. What inspired you to begin using AI for good causes? When I started my Ph.D. program, I wasn’t aware of the depth of the poaching crisis in conservation areas. Many wildlife species are being targeted by poachers to the point where there’s a downward trend in population. My advisor spoke at an AI event about using AI and machine learning for security, and met someone connected to the Wildlife Conservation Society—they were worried about the security of their wildlife and wanted to use the resources that conservation groups have, like guards and patrols, more strategically. As a result, we created the PAWS (Protection Assistant for Wildlife Security) project, which uses machine learning to predict where poachers might strike. We analyze data about previous poaching incidents as well as patrolling activities to realize the poachers’ hotspots, and then we find the best routes for patrols so they can detect more poachers’ snares and traps. When we tested PAWS in the field at Uganda’s Queen Elizabeth National Park, rangers found more snares and snared animals in areas where we predicted a high rate of snaring, compared to areas of lower predicted activity. The places we found were often not considered by rangers to be a poaching hot spot in the past, but we ultimately showed the patrols places they would have missed, so they can save more wildlife. What does this work mean to you as a researcher? When I see that mathematical models and solutions are working in the real world and helping us solve human problems, I get more passionate for my next steps in research. That’s the greatest reward for a researcher, when you see that your work has impact. What needs to happen if we want to apply AI to more social challenges? We need more interdisciplinary research by joining forces with other domain experts, like people in social work, ecology, healthcare and so on. But we also need a mechanism to connect these people. Lots of AI researchers want to do impactful work, but they don’t know how to find it, so they work on theoretical problems instead, which may not directly address real-world challenges. 
And people with real-world problems don’t realize that AI can help them. How can we connect the researchers who want to research and the people who ultimately want to leverage AI for the greater good? They need an efficient way to communicate with each other, maybe through conferences and events, around social good. That’s how my advisor and the Wildlife Conservation Society found each other. You give talks at conferences, and someone in the audience needs you—they find that connection. Partnerships can also help AI researchers scale their work so it can help more people. We’re planning to integrate PAWS with SMART (Spatial Monitoring and Reporting Tool), an open-source software tool used by over 600 wildlife parks to manage patrols and gauge threat levels. Our research will not only help the parks; we’ll get access to much more hard data about patrols, which helps us fine-tune and improve our models. It sounds like AI researchers and groups with societal challenges need to know the other exists. It’s really important to explain to non-AI people how AI tools work. If we do that well, then the groups with the real-world problems could actually proactively approach researchers and ask for help. The thing about researchers is that we are always looking for challenging projects, and problems that are new and novel. New and challenging problems to solve with AI are out there—we just need to find them.
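The full PAWS system combines machine learning with game-theoretic patrol planning; the sketch below is only my toy illustration of the general idea described in the interview, ranking park grid cells by a smoothed "snares found per patrol visit" rate computed from hypothetical historical records, and is not the actual PAWS model:

```python
# Toy illustration (not the actual PAWS model): rank park grid cells for patrolling
# by a naive snare-detection rate estimated from past patrol records.
from collections import defaultdict

# Hypothetical historical records: (cell_id, patrol_visits, snares_found)
history = [
    ("A1", 40, 2), ("A2", 5, 3), ("B1", 60, 1),
    ("B2", 8, 4), ("C1", 12, 0), ("C2", 3, 2),
]

def hotspot_scores(records, prior_visits=5, prior_snares=0.5):
    """Smoothed snares-per-visit rate for each cell; the prior keeps rarely
    patrolled cells from dominating on a single lucky find."""
    visits, snares = defaultdict(float), defaultdict(float)
    for cell, v, s in records:
        visits[cell] += v
        snares[cell] += s
    return {c: (snares[c] + prior_snares) / (visits[c] + prior_visits) for c in visits}

scores = hotspot_scores(history)
patrol_priority = sorted(scores, key=scores.get, reverse=True)
print(patrol_priority)   # cells with the highest estimated snaring rate come first
```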
What is the mission of the Office of the Legal Counsel? The Office of the Legal Counsel (LEG) is responsible for providing unified and central legal services, advice and counsel to the Secretariat as a whole and the Organization's Governing Bodies. In particular it carries out legal work relative to the fulfillment of the mandates of its Constitution and the Strategic Plans and Objectives adopted by the Member States. LEG also provides legal advice to the technical and administrative units, as well as field offices and Centers on all legal and constitutional aspects of their activities, including, but not limited to the negotiation and drafting of treaties, agreements and contracts, host country relations, privileges and immunities, administrative matters, application and interpretation of the Organization's rules and regulations, relations with other international organizations, non-governmental organizations and private sector, and the handling of complaints lodged by staff members before the Administrative Tribunal of the International Labour Organization in Geneva and the defense of the Organization in other national or international judicial and quasi-judicial fora. In providing these services LEG ensures that the decisions that are made and carried out are in legal conformity, thus, protecting the legal interest of the Organization. Finally, the Office of the Legal Counsel also provides technical legal cooperation to PAHO Member States upon request on Health-related law and Human Rights. Who does the Office of the Legal Counsel report to? In PAHO's organizational structure, LEG is under the Office of the Director of PAHO and the Legal Counsel reports directly to the Director. The Office of the Legal Counsel is part of PAHO's management structure and, as such, does not act independently but is accountable to the Director ofthe Organization. What is the level of authority of the Office of the Legal Counsel? The Office of the Legal Counsel plays a proactive role in facilitating the achievement of the Organization's mission by safeguarding its privileges and immunities and status as an international public health organization. It works closely with PAHO's Senior Management and Governing Bodies to promote good governance, ensure respect for constitutional, legal and administrative issues, thus protecting the name and integrity of the Organization. The Office of the Legal Counsel also works closely with other members of PAHO's Integrity and Conflict Management System to foster ethical behavior and compliance with the Organizations rules and regulation, and to ensure effective conflict management, the right to due process and consistency in decision-making within the Organization. What is the level of confidentiality of the Office of the Legal Counsel? The Office of the Legal Counsel has complete access to all records and files of the Organization. It has a duty to protect the confidentiality of information that is brought to its attention and can only share this information with persons within the Organization who have a legitimate need to know. In resolving problems, LEG observes the requirements of due process and strives to ensure that the best interests of the Organization and/or its personnel are protected. Who can use the services of the Office of the Legal Counsel? The Office of the Legal Counsel is accessible to all PAHO personnel both at Headquarters and in the Country Offices and Centers, regardless of the type or duration of their appointment or contract. 
In addition, people from outside the Organization, such as Government officials, contractors, vendors and suppliers, lawyers and others, can contact the Office of the Legal Counsel to discuss, consult, or report issues that may have legal implications. When should you contact the Office of the Legal Counsel?
- Ask questions about the application and use of the Organization's privileges and immunities;
- Report suspected violations of PAHO's rules and regulations, including staff rules and regulations, and irregularities or concerns regarding the letting, preparation, performance or implementation of any agreement or project executed by the Organization or any contract that PAHO has entered into with an individual, company, government or organization;
- Seek advice on the application or interpretation of PAHO rules and regulations, the PAHO/WHO E-Manual, or any PAHO policy, including legal aspects of personnel, procurement or financial matters;
- Seek advice on proposed institutional relationships with private sector enterprises, including non-governmental organizations, foundations and for-profit entities;
- Inquire whether PAHO personnel can publish work-related articles;
- Inquire into the use of PAHO's name and logo or report the unauthorized use of PAHO's name or logo by anyone within or outside the Organization;
- Seek advice on use of copyrighted material or disclose copyright infringement regarding any work or publication that has been produced by PAHO;
- Obtain advice on ILOAT cases and how the jurisprudence might apply in given situations.
How can the Office of the Legal Counsel be contacted?
https://www.paho.org/en/integrity-and-conflict-management-system-icms/office-legal-counsel-leg
What to Do After an Accident After you or your loved one is involved in a car accident, it is imperative that you contact the police immediately. Your best chance for a favorable ruling or settlement in a car accident case is to have comprehensive records of the accident and its aftermath. Records you should collect for legal action include:
- An accident report filed with the police
- The insurance information of the car that hit you
- The other driver's contact information
- Photographs of the accident scene
- Contact information for any witnesses to the accident
- Records of your medical care and expenses following the accident
Immediately after a car accident, you or your loved one needs to consult a doctor. Drivers and passengers often suffer serious injuries that can leave them hospitalized. Many injuries will not manifest immediately; some symptoms can appear hours, days or weeks after an accident. Insurance companies may try to claim that your injuries are not serious if you do not seek immediate medical attention. Be sure to seek legal advice from a car accident lawyer early in the process, and do not sign anything or make any statements without the approval of your lawyer.
https://kefferlawfirm.com/index.php/practices/40-areas/81-quisque-mauris-risus-gravida-a-molestie-eu-dictum-ac-augue-integer-sodales-tempor-lectus-sit-amet-dictum-metus-7
Oxford Public International Law: https://opil.ouplaw.com/abstract/10.1093/law/9780198849155.001.0001/law-9780198849155-chapter-50 Preliminary Material Preface Acknowledgements Contents Table of Cases African Commission on Human and People’s Rights African Court on Human and People’s Rights European Committee of Social Rights European Court of Human Rights European Court of Justice European Patent Office Inter-American Commission on Human Rights Inter-American Court of Human Rights International Center for the Settlement of Investment Disputes International Court of Justice International Tribunal for the Law of the Sea Iran-US Claims Tribunals Other Ad-Hoc Arbitration Permanent Court of Arbitration Permanent Court of International Justice Reports of International Arbitral Awards UN Human Rights Committee World Trade Organization Argentina Australia Bangladesh Brazil Canada Chile China Colombia Cook Islands Federated States of Micronesia India Italy Kenya Netherlands New Zealand Pakistan Papua New Guinea Philippines Republic of South Africa Uganda United Kingdom United States Table of Legislation International Instruments International Documents CBD Documents Decisions ECOSOC Documents FAO Documents ICAO Documents ILA Documents ILC Documents IMO Documents Resolutions IPCC Documents OECD Documents UNHRC and OHCHR Documents Resolutions UN Environment Documents UNESCO Documents UNFCCC Documents Decisions UNGA Documents Resolutions UNSC Documents WTO Documents Decisions and documents under various Conventions EU Legislations Commission Decisions and Directives Regulations Treaties Other Legislation Argentina Australia Bangladesh Bolivia Brazil Canada China France India Kenya Liberia Mexico New Zealand Norway Pakistan Papua New Guinea Republic of Ecuador Republic of South Africa Sweden United 
Kingdom United States Contributors Main Text Ch.1 International Environmental Law: Changing Context, Emerging Trends, and Expanding Frontiers I Introduction II The Changing Context for International Environmental Law A Growing Understanding, Tracking, and Documentation of Global Environmental Harm B Increasing Reporting and Popularization of Global Environmental Harms C Mainstreaming of Diverse Ethical Values, Approaches, and Perspectives D Increasing Political Salience and Contestation III Emerging Trends in International Environmental Law A Discursive Dominance of the Discourse of Sustainable Development B Increasing Maturity in the Content of International Environmental Law 1 Developments in customary international law relating to the environment 2 Crystallization of principles of international environmental law 3 Expansion in the legal tool-kit C Shifting Focus of International Environmental Law 1 Facilitative and catalytic 2 Procedural ‘turn’ 3 Greater deference to national sovereignty, circumstances, and capacities 4 Tailored and nuanced differentiation 5 Increased reliance on soft law 6 Treaty-making to treaty interpretation and implementation D Increasing Resort to International Courts and Tribunals E Enhanced Decentralization and the Emergence of Polycentric Governance IV Expanding Frontiers of International Environmental Law V Conclusion: Is International Environmental Law Fit for Purpose? Part I Context Ch.2 Discourses I Introduction II Basics III Mapping Environmental Discourses IV The Major Environmental Discourses A Limits, Boundaries, and the Promethean Response B Problem-Solving Discourses C Sustainability D Green Radicalism V The Relative Importance of Discourses VI Conclusion Bibliography Ch.3 Origin and History I Introduction II The Traditional Era A Shared Transboundary Resources B Resource-Sharing in Areas Beyond National Jurisdiction C Intergenerational Resource-Sharing? III The Modern Era A The Challenge of Pluralism B Normative Innovation C Emergence of an International Environmental Law Discipline IV The Post-Modern Era A Coping with the Implementation Gap B Civil Society Concerns C The Quest for Synergy V Conclusion: Beyond the Territorial Imperative Bibliography Ch.4 Multilevel and Polycentric Governance I Introduction II To Centralize Or Not? 
A Presumption Against Centralization B Arguments that Favour Centralization 1 Externalities 2 Game theoretic approaches 3 Regulatory competition III Multilevel Governance IV Fragmentation, Polycentric Governance, and Regime Interactions V Advancing Understandings of Global Environmental Governance A Comparative Institutional Analysis B The Governance Trilemma C New Technologies, New Governance, New Technologies of Governance V Conclusion Bibliography Ch.5 Fragmentation I Introduction—International Environmental Law or International Law II Regimes and Non-Regimes A Issue-Areas and the Boundaries between Them B Organizational Mandates and the Boundaries between Them C Multilateralism, Regionalism, Bilateralism, and Unilateralism III Fragmentation and Regime Interaction A Relationships of Conflict or Interpretation B The Role of International Adjudication C Constitutionalism and Pluralism IV Global Pact for the Environment V Conclusion Bibliography Ch.6 Instrument Choice I Introduction II Instruments of Environmental Protection: A Typology A Traditional Standards B Market-Based Instruments C Other Instruments III The Role of Instrumental Agnosticism in International Environmental Law A International Agreements’ Tendency to Include Goals and Procedures While Leaving Choices about the Role of Market-Based Instruments to Nation-States B Trade Restrictions and Subsidies in International Law IV Internationalization of Environmental Benefit Trading V Conclusion Bibliography Ch.7 Scholarship I Introduction II What is (International) Environmental Law Scholarship? A Plethora of Form and Content B Interest Groups and Legal Societies C Relationship between Scholarship and Theory D Relationship between Scholarship and Method III Challenges in International Environmental Law Scholarship A Fragmentation B Reactivity C Under-represented Voices and Issues D Connecting Law and Related Disciplines in International Environmental Scholarship IV Scholarship and Praxis: Litigation, Advocacy, and Consultancy V Conclusion Bibliography Ch.8 Legal Imagination and Teaching I Introduction II Three Challenges in Teaching International Environmental Law III Wishful Thinking IV Legal Imagination V Legal and Environmental Realities VI Conclusion Bibliography Part II Analytical Approaches Ch.9 International Relations Theory I Introduction II Power and Hegemony IR Insight 1: International Environmental Law Reflects Relational Power—Both Material and Discursive III Norms, Legalization, and Effectiveness IR Insight 2: IEL Contains a Spectrum of Soft to Hard Norms which Reflect Not Only Material Interests, but Shared Understandings and Discourses with Justice and Ethical Dimensions IV Governance IR Insight 3: While States Continue to be Key, There Has Been a Shift Away from State Dominated Institutions to Sub-State and Non-State Actors, Constituting Both a Strength and Weakness V Legitimacy and Democratization IR Insight 4: Legitimacy of Law-Making Institutions and Norms Depends on Both Their Inclusiveness in Terms of Affected Interests and Their Effectiveness VI Knowledge IR Insight 5: To Be Effective, Environment Agreements Need To Be Linked To Institutions which Ensure Scientific Knowledge Is Fed into Decision-Making Processes in a Manner which Ensures Credibility, Legitimacy and Saliency VII Conclusion Bibliography Ch.10 Economics I Introduction II Internalizing Externalities and Providing Public Goods III Prisoner’s Dilemma and Bargaining A Game Theory B The Coase Theorem C Limits to Efficient Bargaining IV Why Do States 
Conclude Treaties? V Treaty Participation and Design A State Characteristics B Primary Rule System C Instruments 1 Liability rules 2 Command and control 3 Market-based instruments 4 Searching for smart instrument mixes VI Effectiveness and Compliance A Effectiveness B Compliance C Why and When Do States Violate? D Promoting Compliance Bibliography Ch.11 Global South Approaches I Introduction II The Colonial Origins of International Law III Evolution of International Environmental Law and the North-South Divide IV An Alternative Approach to International Legal Scholarship—Third World Approaches to International Law (Twail) V Global South Perspectives on Environmental Law A Differential Treatment, and the Common But Differentiated Responsibility Principle B Intergenerational Equity C Environmental Justice 1 Distributive justice 2 Procedural justice 3 Corrective justice 4 Social justice VI Conclusion: Potential and Limits of Global South Perspectives Bibliography Ch.12 Feminist Approaches I Introduction II Rise, Retreat, and Reframing of Ecofeminism A Defining Ecofeminism B Rise and Retreat of Ecofeminism C Reframing Ecofeminism III Gender and Public International Law IV Gender and IEL: Developments Prior to UNCED V Gender and IEL: Developments from UNCED A Feminist Critiques of Sustainable Development and the Green Economy B The Vision of Agenda 21: Chapter 24—Enhancing Women’s Participation in IEL C Incorporation of Gender Within the UNCCD D Essentializing Women in the UNCBD E Journey of Gender in the UNFCCC: Silence on the Gender Action Plan 1 Women’s participation in the UNFCCC 2 Gender within UNFCCC Instruments VI Conclusion Bibliography Ch.13 Ethical Considerations I Introduction II Anthropocentric Values A Self-interest B Culture C Economic Values D Aesthetics E Future Generations III Non-Anthropocentric Values A Sentience and Humane Considerations B Existence Values C Ecosystem Values IV Conclusion Bibliography Ch.14 Earth Jurisprudence I Introduction II Conceptual Framework A Great Jurisprudence B Earth Jurisprudences 1 Principles of Earth jurisprudence 2 Applying Earth jurisprudences 3 Rights of Nature 4 Legal personality 5 Legal rights and duties III Emergence of Earth Jurisprudence in International Law A Universal Declaration of the Rights of Mother Earth 1 Rights recognized in UDRME 2 Implications for human rights 3 Influence of UDRME B Application to Climate Change C United Nations Harmony with Nature Programme IV Implementation and Future Development A Implementation B Transformative and Disruptive Potential C Sources of Law D Permanent State Sovereignty Over Natural Resources E Institutions V Conclusion Bibliography Ch.15 The Role of Science I Introduction II The History of Science in International Environmental Law III International Environmental Law and Science A The Role that Science Plays in the Development and Implementation of International Environmental Law 1 Ozone regime 2 UNFCCC 3 The influence of science on international environmental law B The Role of International Environmental Law in Promoting Science C The Role of International Environmental Law in Managing Threats Posed by Science IV Conclusion Bibliography Part III Conceptual Pillars Ch.16 Harm Prevention I Introduction II Conceptual Questions A Status as General International Law B The Role of Due Diligence 1 Harm prevention and the due diligence standard 2 Procedure and substance 3 Prevention and precaution III The Harm Prevention Rule as a Reference Point for International Environmental Law A The Law of 
State Responsibility B Judicial Processes C Treaty-Based Approaches IV Conclusion Bibliography Ch.17 Sustainable Development I Introduction II The Concept of Sustainable Development in Historical Perspective III ‘Sustainable’ Development From a Legal Standpoint IV The Operation of Sustainable Development in Legal Practice A The Nature of Sustainable Development as a Norm B Functions of Sustainable Development as a Norm 1 Analytical distinctions 2 Normative impact 3 Jurisprudential relevance V ‘Sustainable’ vs ‘Development’ Bibliography Ch.18 Precaution I Introduction II Precaution as a Conceptual Pillar A Scientific Uncertainty Over Threats of Harm B Precaution vs Prevention III Precaution as a Principle of International Environmental Law A Incorporation of Precaution in International Environmental Law Instruments B Approach vs Principle IV Precaution in the International Jurisprudence V Implementing Precaution VI Conclusion Bibliography Ch.19 Differentiation I Introduction II Conceptual Bases and Development of Differentiation A Conceptual Bases B Development III Manifestations of Differentiation in International Environmental Law A Differentiation and the Principle of Common But Differentiated Responsibilities (CBDR) B Differential Norms C Differentiation at the Implementation Level IV Criticisms of Differentiation and Progressive Evolution A Critiques B Evolving Differential Techniques V Ongoing Need for Differentiation A Need to Maintain Some Form of Differentiation B Broadening the Bases for and Forms of Differentiation VI Conclusion Bibliography Ch.20 Equity I Introduction II Equity in International Law A Meaning of Equity III Intergenerational and Intra-Generational Equity in International Environmental Law A Intergenerational Equity B Intra-generational Equity IV Concluding Remarks Bibliography Ch.21 Public Participation I Introduction II Concepts, Contexts, and Scales A Concepts: Scope and Rationale of Public Participation B Contexts: Legal Foundations and Development C Scales: Participatory Rights and Social-Ecological Settings III Decision-Making in National Contexts A Basis for Public Participation in Environmental Treaties B Basis for Public Participation in Human Rights Regimes IV Decision-Making in International Contexts A Public Participation B Access to Justice V Conclusions Bibliography Ch.22 Good Faith I Introduction II Good Faith in International Environmental Law A International Environmental Law as International Law of Cooperation B Good Faith Performance of Environmental Treaties C Environmental Cooperation in Good Faith D Due Diligence and Good Faith E Good Faith in Action: Chagos Marine Protected Area Arbitration (2015) III Concrete Role of Good Faith in Environmental Regimes A The Implication of the Whaling Judgement B Good Faith in Implementation of, and Compliance with, Environmental Regimes C Good Faith in ‘Pledge and Review’ Environmental Regimes IV Conclusion Bibliography Part IV Normative Development Ch.23 Customary International Law and the Environment I Introduction II Formation of Customary International Environmental Law A The ‘Banality’ of Custom in International Environmental Law B Peculiarities of Customary International Environmental Law 1 Relationship between treaties and custom 2 Relationship between ‘soft law’ and custom 3 Relationship between general principles, normative concepts, and custom III Environmental protection in general international law A Out of the Fog B The Prevention Principle and the Duty of Due Diligence C The Duty to Conduct an 
Environmental Impact Assessment (EIA) D The Duty to Cooperate IV Concluding Observations Bibliography Ch.24 Multilateral Environmental Treaty Making I Introduction II Categorizing International Instruments: Some Initial Distinctions A Legal Form B Parties: State and Non-State Actors C Contractual vs Legislative D Constitutive vs Regulatory Instruments III Why do States Negotiate and Accept International Agreements? A Instrumental Factors B Non-Instrumental Factors IV Steps in the Treaty-Making Process A Initiation of Negotiations B Negotiations C Adoption, Signature, Ratification, and Entry into Force V Design Issues A Breadth 1 Who may participate in a treaty regime? 2 What are the minimum participation requirements? 3 Substantive scope B Depth C Promoting Participation D Building a Treaty Regime Over Time E Ensuring Agreements Stay Up to Date VI Conclusion Bibliography Ch.25 Soft Law I Why Soft Law? II Soft Law as Part of the Multilateral Treaty-Making Process III Soft Law and Customary Law IV Treaties as Soft Law V Soft Law General Principles VI Conclusions Bibliography Ch.26 Private and Quasi-Private Standards I Introduction II The Concept of Private and Quasi-Private Standards III The Proliferation of Private Standards IV Explaining the Emergence of Private Standards V The Role of International Organizations and International Law in Relation to Private Standards A International Organizations Delegate Authority to Adopt Quasi-Private Standards to Private Actors B International Law Enhances the Authority of Private or Quasi-Private Standards C International Organizations Serve as a Meta-Regulator Vis-À-Vis Private or Quasi-Private Standards D There is Substantive Borrowing between Private Standards and International Law and Vice Versa VI Appraising the Effectiveness of Private Standards in the Environmental Domain? 
VII The Legitimacy of Private Standards VIII Conclusion Bibliography Ch.27 Judicial Development I Introduction II The Modern Judicial Order and Environmental Law A Nature and Scope of Environmental Cases B Effectiveness of ICTs C Remedies Available D Science Before International Courts and Tribunals III Jurisprudence A Environmental Obligations 1 No harm—prevention—due diligence 2 Environmental impact assessment 3 Polluter pays 4 Conservation and environmental protection 5 Cooperation B Environmental Rights 1 Human right to a healthy environment 2 Procedural rights 3 Common concern—common interest—common heritage of humankind 4 Rights holders 5 Extraterritorial enforcement C Interpretive Principles: Precautionary Principle and Sustainable Development D Law of War, Pillage IV Limits of Judicial Mechanisms V Conclusion Bibliography Part V Subject Matter Ch.28 Transboundary Air Pollution I Introduction II Causes of Transboundary Air Pollution III Content of Customary Law IV Treaty Regimes Regulating Transboundary Air Pollution A LRTAP and its Protocols 1 The LRTAP 2 The 1985 and 1994 Sulphur Protocols 3 The 1988 NOX Protocol 4 The 1991 Protocol on Volatile Organic Compounds 5 The 1998 Heavy Metals and POPs Protocols 6 The 1999 Gothenburg Protocol on Acidification, Eutrophication and Ground–Level Ozone B The 1991 US-Canada Air Quality Agreement C 2002 ASEAN Agreement on Transboundary Haze Pollution D Global Initiatives—UN Environment V Conclusion Bibliography Ch.29 Climate Change I Introduction II Context: Science, Politics, Drivers, and Milestones A Science and politics B Drivers and Milestones C A ‘Nested’ Regime III 1992 UNFCCC A Objective B Principles C Commitments IV The 1997 Kyoto Protocol A Principles B Commitments C Market Mechanisms V 2015 Paris Agreement and 2018 Rulebook A Purpose B Principles C Commitments D Market Mechanisms VI Increasing Reach and Influence of the UN Climate Change Regime VII Conclusion: The Effectiveness and Future of the UN Climate Regime Bibliography Ch.30 Freshwater Resources I Introduction II Evolution of Customary International Water Law A Contribution of Non-Governmental International Organizations B Contribution of Bilateral and Regional Treaties and Soft Law Instruments C Contribution of Judicial and Arbitral Decisions III Environmental Provisions of the Watercourses Convention A The Principle of Equitable Utilization and the No Harm Rule B Protection, Preservation, and Management of Watercourses Ecosystems IV Environmental Provisions of the Water Convention V Comparison of the Environmental Provisions of the Two Conventions VI Influence on Subsequent Bilateral and Multilateral Treaties VII Conclusion Bibliography Ch.31 The Protection of the Marine Environment Pollution and Fisheries I Introduction II Law of the Sea Convention and Ocean Governance III Prevention of Marine Pollution A UNCLOS Provisions on Marine Pollution B Regional Arrangements C Global Rules to Combat Marine Pollution 1 Pollution by dumping 2 Pollution from vessels 3 Pollution from seabed activities IV Conservation of Marine Living Resources A UNCLOS Provisions on Fisheries Management B The 1995 Fish Stocks Agreement C The FAO and International Law on Fisheries 1 Flag state obligations and the Compliance Agreement 2 IUU fishing and the 2009 Agreement on Port State Measures D The UNGA and Destructive Fishing Practices V Conservation of Marine Biodiversity A International Law and the Protection of Marine Biodiversity 1 UNCLOS 2 Convention on Biological Diversity B Marine Protected Areas 1 
Marine protected areas under national jurisdiction 2 MPAs in areas beyond national jurisdiction C Protection of Vulnerable Marine Ecosystems: The Deep-Sea D Conservation and Sustainable Use of Marine Biodiversity of Areas Beyond National Jurisdiction (BBNJ) VI Conclusion Bibliography Ch.32 Wildlife I Introduction II Themes and Principles A Justifications for Wildlife Conservation B Permanent Sovereignty Over Natural Resources C Species and Ecosystem Perspectives D Uncertainty, Precaution, and Adaptive Management E Institutional Arrangements and Participants III Species-Centred Approaches A Trade in Endangered Species B Migratory Species C Whales IV Ecosystem and Habitat-Based Approaches A Biodiversity B Wetlands C Protected Areas D Forests V Regional Agreements VI Conclusion and Next Steps Bibliography Ch.33 Hazardous Substances and Activities I Introduction II Hazard Identification and Testing A OECD Harmonization Initiatives B Initiatives in the United Nations System III Conditions of Production and Use A Stockholm POPs Convention B Multilaterally-Agreed Standards for Pesticides and Other Toxics IV Regulation of Pollutant Releases A Examples from Domestic and Supranational Law B ECE Protocols on Toxic Air Pollution V Hazardous Processes and Industrial Accidents A ECE Convention on the Transboundary Effects of Industrial Accidents B Multilaterally-Agreed Good Practice Standards for Industrial Accidents VI International Trade in Hazardous Substances, Products, and Waste A Basel and Bamako Conventions B Rotterdam Convention VII Disposal of Toxic Waste A Basel Convention Disposal Requirements B IAEA Agreements and Standards VIII Integrated Approaches to Pollution Prevention A Minamata Convention B OECD Recommendation on Pollution Prevention IX Other Related Policies A Right to Know B Environmental Impact Assessment X Conclusion Bibliography Ch.34 Aviation and Maritime Transport I Introduction II Aviation and Maritime Transport Environmental Governance III International Environmental Law and Aviation A Aviation and the Environment: Noise and Air Quality B Aviation and Climate Change IV International Environmental Law and Maritime Transport A Maritime Transport and the Environment B Maritime Transport and Climate Change V Conclusions: Towards Sustainable Aviation and Maritime Transport Bibliography Part VI Actors Ch.35 The State I Introduction II Westphalian Myth of Unimpaired Freedom of Action III Contemporary Statehood A States as Authors of International Environmental Law B States as Addressees of International Environmental Law C States as Guardians of International Environmental Law IV Ongoing Transformation of the International Legal System A Growing Plurality of Actors B Growing Plurality of Regimes C From State-to-State Networks Towards Inter-Linked Networks 1 Linking with non-state actors 2 Cross-cutting relationships and ‘new’ networks V Conclusion Bibliography Ch.36 International Institutions I Introduction II Roles: No Centralized Coordination But a Pattern A Initiating Roles B The Institutional Structure of MEAs C Science-Oriented Institutions D The GEF, Funds, and Multilateral Investment Banks E The Pattern and Coherence III Substantive Links Between Meas A Links between Global and Regional MEAs B Links between Global MEAs 1 Synergies 2 Contestation a) Ship dismantling and ship-generated wastes: IMO and Basel Convention b) Accessing marine biological diversity in areas beyond national jurisdiction IV Conclusion: Multifaceted Governance Bibliography Ch.37 Regional Organizations 
the European Union I Introduction II Progressive Affirmation of EU Competence in the Environmental Field A The Consecration of EU Competence B A Shared Competence between the Union and Its Member States C EU Environmental Policy D Recent Developments and Prospects III External Dimension of Environmental Competence—the EU as a Global Actor? A Participation in Environmental International Agreements: A Shared Competence B Enforcement of International Environmental Law in the EU Legal Order C Is There an EU External Environmental Policy? IV Other Regional Organizations V Conclusion Bibliography Ch.38 Non-State Actors I Introduction II NGO Attributes III NGOs As Activists A Direct Action Advocacy B NGO Lobbying and Engagement Efforts C NGO Conferences and Dialogue IV NGOs as Diplomats V NGOs as Global Governors A Rule-Making B Implementation C Enforcement 1 Leveraging strong governmental frameworks 2 Supplementing weak or insufficient state resources 3 Exercising enforcement capacity beyond state-treaty architecture VI Conclusion Bibliography Ch.39 Sub-National Actors I Introduction II The Importance of Sub-National Actors to Solving International Environmental Problems III The Emergence of Multilevel Networks of Sub-National Governments A International-Level Climate Change Networks B National-level and Sub-National Climate Change Networks IV The Role of Sub-National Networks in International Environmental Lawmaking V Conclusion: The Evolving Role of Sub-National Actors in International Environmental Law Bibliography Ch.40 Epistemic Communities I Introduction II Epistemic Communities A The Concept of Epistemic Communities B Intellectual History of Scholarship About Epistemic Communities C The Ecological Epistemic Community and Multilateral Environmental Governance 1 Social learning 2 Institutional bargaining 3 Least common denominator 4 Parallel play 5 Epistemic communities and multilateral environmental agreements III Epistemic Communities and International Environmental Lawyers A International Environmental Lawyers Are Not an Epistemic Community B International Environmental Lawyers and the Translation of Epistemic Communities’ Ideas into International Environmental Law IV Conclusion Bibliography Ch.41 Business and Industry I Orientation II Business and the Environment: Mapping the Territory A Corporate Environmental Performance and Its Governance B Mapping the Evolution of Corporate Environmental Standards in Their Global Context C Governance Integrity in Corporate Environmental Standards III Business and International Environmental Law A Regulating Business Actors B Corporate Liability IV New Directions: Civil Society–Business Collaborations V Conclusion Bibliography Ch.42 Indigenous Peoples I Introduction II Indigenous Peoples’ Law III Development of Indigenous Peoples’ Rights at International Law A The Evolving Context for International Law Recognition B ILO’s 1989 Convention Concerning Indigenous and Tribal Peoples in Independent Countries (ILO 169) C 2007 UN Declaration on the Rights of Indigenous Peoples D Other Relevant Environmental International Law Instruments IV Significance of International Environmental Law for Indigneous Peoples V Conclusion Bibliography Part VII Inter-linkages with Other Regimes Ch.43 Trade I Introduction II Environmental Impacts of Trade III A Short History of the Trade and Environment Debate A The Early Days B Signs of Conflict C The World Trade Organization IV Multilateral Environmental Agreements and Trade V Environmental Measures Under Trade Law A 
Unilateral Trade Measures and Extraterritoriality B Processes and Production Methods C Environmental Exceptions VI Emerging Issues in the Trade and Environment Debate A The Rise of Regionalism B Climate Change and Trade VII Conclusion Bibliography Ch.44 Investment I Introduction II International Investment Agreements A Expropriation B National Treatment C Fair and Equitable Treatment D Most-Favoured-Nation Treatment E Investor-State Dispute Settlement III Environmental Regulation and Decision-Making as a Treaty Violation A Early Cases: Investment and Environmental Protection as Competing Norms B Recent Cases: More Balanced Approaches to Environmental Considerations IV New Trends in Treaty-Drafting A Model BITs B Environmentally Significant Investment Agreements V Synergies Between Foreign Investment and Environmental Protection VI Conclusion Bibliography Ch.45 Human Rights I Introduction II Recognition of the Human Right to a Healthy Environment III Incorporation of Human Rights Norms in Multilateral Environmental Agreements IV Application of Human Rights Law to Environmental Issues A Indigenous and Tribal Rights Relating To the Environment B ‘Greening’ Human Rights 1 Regional human rights tribunals a) African Commission and African Court on Human and Peoples’ Rights b) European Court of Human Rights c) European Committee for Social Rights d) Inter-American Commission and Court of Human Rights 2 UN human rights bodies C The Framework Principles on Human Rights and the Environment V Conclusion Bibliography Ch.46 Migration I Introduction II Addressing Environmental Factors as Drivers of Displacement and Migration A Scenarios B Conceptualizing Human Mobility in the Context of Disasters and Adverse Effects of Climate Change C The Role of Environmental Law to Reduce Displacement Risk 1 Reducing hazards 2 Reducing vulnerability and increasing resilience 3 Reducing exposure by helping people to move out of harm’s way D Protecting Displaced Persons III Mitigating Conservation-Induced Relocation and Displacement A A Dilemma B Emerging Standards IV Conclusion Bibliography Ch.47 Disaster I Introduction II The World is a Dangerous Place III The United Nations and Disaster Risk Reduction A The Hyogo and Sendai Frameworks B Operationalizing Disaster Risk Reduction IV Disaster and Ecosystem Management V Disaster and Obligations to Other States A The Disaster Prevention Duty B The Damage Compensation Duty C The Damage Minimization Duty VI Conclusion Bibliography Ch.48 Intellectual Property I Introduction II Sustainable Development, IPRs, and the Trips Agreement A Sustainable Development and IPRs B The TRIPs Agreement and Patent Protection C TRIPs and Developing Countries III Patents, Sustainable Development, and Food Security A Patents and Biotechnology B The 2001 ITPGRA IV The 1992 CBD and the 2010 Nagoya Protocol A IPRs under the CBD B IPRs under the Nagoya Protocol and Traditional Knowledge C The Nagoya Protocol and the TRIPs Agreement V IPRS, Technology Transfer, and Climate Change A IPRs under the UNFCCC B Other Intellectual Property Arrangements VI Conclusion Bibliography Ch.49 Energy I Introduction II Key Principles III Procedural Norms A Cooperative Duties B Duty to Assess Environmental Impacts C Public Participation IV Substantive Norms A Addressing the Environmental Impacts of Energy Activities 1 Vessel source pollution 2 Offshore energy exploration and exploitation activities 3 Transboundary air pollution 4 Climate change B Direct Regulation of Energy Production and Consumption 1 Nuclear 2 
Energy efficiency and renewables V Conclusion Bibliography Ch.50 Armed Conflict and the Environment I Introduction II Law Applicable Across the Conflict Lifecycle III International Law Protecting the Environment Before Armed Conflict A Environment as a Cause of Armed Conflict B Preparation: Military Manuals and Training C Good Environmental Governance as Conflict Prevention Within States IV International Law Protecting the Environment During Armed Conflict A Direct Environmental Protection Under International Humanitarian Law B Indirect Environmental Protection Under International Humanitarian Law C Preventing and Minimizing Environmental Damage D Choice of Means and Methods of Warfare E Management of Waste and Other Toxic Substances F Addressing Conflict Resources V International Law Protecting the Environment After Armed Conflict A International Environmental Law as a Means to Assist Recovery B Transitional Justice and Accountability C Use of Natural Resources and Their Revenues for Rebuilding VI Conclusions Bibliography Part VIII Compliance, Implementation, and Effectiveness Ch.51 Compliance Theory I Introduction A Identifying a ‘Performance Indicator’ of Compliance as IEA Influence B Selecting a Comparator of IEA Influence 1 Legal requirements as comparators 2 Goals as comparators 3 Counterfactuals as comparators C Evaluating IEAs, IEA Provisions, or IEA Parties II Understanding How and Why Ieas Make a Difference When they Do A Two Models of Actor Behaviour B Distinguishing State Compliance from IEA Influence C Explaining Why States Fail to Comply and Why IEAs May Not Have Influence III Systems and Strategies for Inducing Behavioural Change A Types of IEAs B Systems of Regulation C Approaches to Regulation IV Other Considerations V Conclusion Bibliography Ch.52 Transparency Procedures I Introduction II Turn to Transparency in International (Environmental) Law III Techniques and Functions of Transparency A Reporting and Verification 1 Reporting: compliance-centred transparency 2 Verification: state- and public-facing transparency for compliance B Monitoring 1 Monitoring serving emancipatory transparency 2 Monitoring serving advocative transparency C Information Exchange 1 General information exchange 2 Situation-specific information exchange D Right to Information? 
1 Aarhus Convention 2 Human rights law IV Conclusion Bibliography Ch.53 Market Mechanisms I Introduction II Conceptual Foundations III Lawyers and the Ascendance of Markets IV Markets and the Challenges of Implementation V International Experiences With Market Mechanisms VI Conclusion Bibliography Ch.54 Financial Assistance I Introduction II The Different Types and Sources of Financial Assistance A The SDGs and Official Development Assistance B The Greening of Development Aid Within International Institutions C The Involvement of the Private and Non-Governmental Sectors III Nature and Aim of Financial Assistance A Traditional Nature and Aims B Changing Nature and Aims—Focus on Global Public Goods IV Financial Mechanisms A First-Generation Mechanisms B Second-Generation Mechanisms C Green Climate Fund D Sui Generis Financial Mechanisms E Financial Mechanisms in the Legal Structure of Financial and Technical Assistance V Multiplication of Climate-Related Financial Mechanisms and the Risk of Fragmentation VI Conclusions Bibliography Ch.55 Technology Assistance and Transfers I Introduction: Technology Transfers—Principles and Contents II The Evolution of Technology Transfers: From Dependency and Differential Treatment to North-South Collaboration A Code of Conduct for Technology Transfers B Technology Transfers Under International Environmental Law Instruments 1 Barcelona Convention for the Protection of the Mediterranean Sea Against Pollution 2 Cartagena Convention and Noumea Convention 3 United Nations Convention on the Law of the Sea 4 Convention on Biological Diversity 5 United Nations Framework Convention on Climate Change 6 Technology transfers and sustainable development law C Synthesis: Technology Transfers in International Environmental Law III Technology Transfers and Intellectual Property: Challenges and Synergies IV Conclusion Bibliography Ch.56 Non-Compliance Procedures I Introduction II The Role of Non-Compliance Procedures in MEAs A Facilitative Versus Enforcement Approaches B Primary Rules, Compliance Information, and Non-Compliance Response III Approaches to Design and Implementation A Initiation of Non-Compliance Procedures B Scope and Mandate C Institutional Design and Process D Measures and Outcomes IV Overview of MEA Compliance Systems V Conclusion Bibliography Ch.57 Effectiveness I Introduction II Key Concepts and Methodological Challenges A International Regimes B Concept of Effectiveness C Methodological Challenges D Explaining Effectiveness III General Findings and Empirical Examples A The Climate-Change Regime—A Low-Effectiveness Regime B The Ozone Regime: A Rare Success Story C The Whaling Regime: Different Values Make Effectiveness Studies Difficult IV Concluding Remarks Bibliography Ch.58 International Environmental Responsibility and Liability I An Introduction to ‘Environmental Accountability’ II ‘Environmental Accountability’: Some Challenges III Initiatives by the International Law Commission IV International Environmental Law and State Responsibility A Breach of an International Obligation 1 Treaties 2 Customary prohibition of transboundary harm a) Environmental harm b) Compensation for environmental harm c) Threshold B Relevant State Behaviour and Attributability 1 General 2 Standard of conduct: acting with due diligence 3 Circumstances precluding wrongfulness 4 Consequences V Final Remarks Bibliography Ch.59 National Implementation I Introduction II Contexts for Implementation A Sources and Forms of International Environmental Law B Domestic Structures for 
Making and Giving Effect to International Law C Political, Economic, and Ecological Landscapes III Means of Implementation A Adhering to International Treaties B Recognizing International Law C Interpreting International Law IV Principles Supporting Implementation A Cooperation B Common But Differentiated Responsibilities C Public Participation V Conclusion Bibliography Ch.60 International Environmental Law Disputes Before International Courts and Tribunals I Introduction II When to Litigate III Jurisdiction Over IEL Disputes IV Preliminary Matters A Standing Before International Courts or Tribunals for IEL Disputes B Provisional Measures V Substantive Determination A Standard of Proof and Causation B Use of Experts and Witnesses VI Reparations VII Concluding Remarks Bibliography Part IX International Environmental Law in National/Regional Courts Ch.61 Africa I Introduction II Direct Application of IEL III Indirect Application of IEL IV Conclusion: Challenges, Barriers, and Opportunities Bibliography Ch.62 China I Introduction A Overview of the Chinese Legal System II Cases Directly Applying International Environmental Law A The Wild Cymbidium Orchids Case B The Shenzhen Parrots Case III Cases Indirectly Applying International Environmental Law IV Challenges for the Greater Application of International Environmental Law in National Courts V Assessment Bibliography Ch.63 European Union/United Kingdom I Introduction II Cases Directly Applying IEL III Cases Indirectly Applying IEL IV Conclusion Bibliography Ch.64 India, Bangladesh, and Pakistan I Introduction II Direct Application of International Environmental Law III Indirect Application of International Environmental Law IV Assessment and Conclusion Bibliography Ch.65 North America I Introduction II Applying International Environmental Law III Engaging With International Environmental Law IV Challenges and Opportunities A Indigenous Law B Human Rights Law C Private International Law V Assessment Bibliography Ch.66 Oceania I Introduction II The Role of International Law in Oceania III Structure and Function of Court Systems in Oceania IV Cases Directly Applying International Environmental Law in Oceania V Cases Indirectly Applying International Environmental Law in Oceania VI Barriers to Greater Application of International Environmental Law in Oceania Courts Bibliography Ch.67 South America I Introduction II Cases Directly Applying International Environmental Law III Cases Indirectly Applying International Environmental Law IV Challenges/Barriers for the Greater Application of International Environmental Law in National Courts V Assessment Bibliography Further Material Index Sign up for alerts Part VII Inter-linkages with Other Regimes, Ch.50 Armed Conflict and the Environment Carl Bruch, Cymie R Payne, Britta Sjöstedt From: The Oxford Handbook of International Environmental Law (2nd Edition) Edited By: Lavanya Rajamani, Jacqueline Peel Previous Edition (1 ed.) Content type: Book content Product: Oxford Scholarly Authorities on International Law [OSAIL] Series: Oxford Handbooks Published in print: 12 August 2021 ISBN:
https://opil.ouplaw.com/abstract/10.1093/law/9780198849155.001.0001/law-9780198849155-chapter-50?prd=OSAIL
AR SPOT: an augmented-reality programming environment for children » Augmented Environments Lab. AR SPOT is an augmented-reality authoring environment for children. An extension of MIT’s Scratch project, this environment allows children to create experiences that mix real and virtual elements. Children can display virtual objects on a real-world scene observed through a video camera, and they can control the virtual world through interactions between physical objects. This project aims to expand the range of creative experiences for young authors by presenting AR technology in ways appropriate for this audience. In this process, we investigate how young children conceptualize augmented reality experiences, and shape the authoring environment according to this knowledge. Download (Windows only). Playing with SPOT: Scratch and augmented reality = SPOT. Augmented reality with SPOT, or Scratch.
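For readers curious how the camera-plus-marker overlay idea works in practice, here is a minimal, illustrative sketch in Python using OpenCV's ArUco module (available via the opencv-contrib-python package). This is not AR SPOT's actual implementation (AR SPOT builds on Scratch); the function name, marker dictionary and on-screen label are assumptions chosen for the example. The webcam feed is scanned for fiducial markers, and a "virtual object" (here just a text label) is drawn anchored to each detected marker.

# Minimal marker-based AR sketch (illustrative only; not AR SPOT's actual code).
# Assumes OpenCV >= 4.7 with the ArUco module (pip install opencv-contrib-python).
import cv2

def run_marker_overlay(camera_index: int = 0) -> None:
    """Show the webcam feed and draw a virtual label on any detected ArUco marker."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary)  # OpenCV >= 4.7 detector API
    capture = cv2.VideoCapture(camera_index)

    while True:
        ok, frame = capture.read()
        if not ok:
            break

        # Detect fiducial markers in the current camera frame.
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is not None:
            # Outline each marker and anchor a "virtual object" (a text label) to it.
            cv2.aruco.drawDetectedMarkers(frame, corners, ids)
            for marker_corners, marker_id in zip(corners, ids.flatten()):
                x, y = marker_corners[0][0]  # first corner of the marker
                cv2.putText(frame, f"virtual sprite #{marker_id}",
                            (int(x), int(y) - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

        cv2.imshow("AR overlay", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

    capture.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_marker_overlay()

In a block-based tool like AR SPOT the same loop is hidden behind child-friendly blocks; the sketch above only makes the underlying detect-then-overlay cycle explicit.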
http://www.pearltrees.com/pazgonzalo/realidad-aumentada-con-scratch/id8633264
Port-au-Prince / New York, February 13, 2004 - Today, the international medical aid organization Doctors Without Borders/Médecins Sans Frontières (MSF) is sending 16 tons of medical equipment to Port-au-Prince, the capital of Haiti. The supplies consist primarily of surgical and dressing kits for the MSF programs in the hospital of Saint Nicolas, in Saint-Marc, and Saint François de Salle Hospital, in Port-au-Prince. The MSF medical emergency program aims to ensure access to treatment for the people wounded during the massive demonstrations and other violent incidents that have been occurring almost daily since December 2003. MSF is particularly concerned about the lack of access to treatment for some of the wounded people due to financial and political constraints. Many of the wounded are poor, and receive at most first aid before being sent away. The majority of them are turned away at the entrance of the hospital because they can't pay for treatment. Another constraint is the political labeling of most health structures in the country, which leads to the perception that they are on the side of either the government or the opposition. The neutrality and security of medical structures is therefore not guaranteed. MSF continues to assess how the political labeling of the health structures influences the access to treatment for the wounded of the different factions, and will adapt its activities to its findings. "We aim to ensure free access to treatment for all wounded, regardless of their political background or financial means," explains Philippe Hamel, MSF Head of Mission in Haiti. "We do this by providing medical drugs and equipment on an impartial basis in different hospitals and by sending extra medical staff when needed." The MSF supplies that are being flown to Haiti will mainly be used for surgical programs in Saint-Marc and Port-au-Prince. The freight consists of medical-surgical kits to treat 300 wounded, 5 first aid kits, an anesthetics kit for 100 patients, a dressing kit to treat 120 burn patients, general dressing kits for a total of 150 patients and an emergency health kit for 10,000 people. The cargo is leaving from Brussels and is due to arrive in Port-au-Prince over the weekend. "Our teams are concentrating on providing surgical assistance," Hamel said. "The team currently working in Saint Nicolas hospital in Saint-Marc has reported fourteen wounded so far. We were able to donate medical and surgical equipment and drugs over the past days, and most of the new supplies that are on their way now will also go that way. We are currently rotating the medical staff, to ensure 24-hour access to treatment for the wounded." The MSF team working in Petite Rivière has been evacuated to the capital and remains unable to return due to security reasons, as the north of the country is getting more and more isolated. MSF will expand its program, however, by providing assistance to the Saint François de Salle hospital in Port-au-Prince. Over the weekend, two extra medical staff - a surgeon and a nurse - will also arrive in Haiti to reinforce the existing team on the ground. Currently, MSF has twelve international and 84 national staff working for its programs in the country.
https://www.doctorswithoutborders.org/latest/msf-surgical-supplies-and-teams-haiti
Motivation and emotion/Book/2014/Child emotional abuse: What are the consequences and how can it be dealt with?
Overview
Child emotional abuse can exist independently of other forms of abuse and, because it has only recently been analysed, it has proven hard to identify and define. This book chapter aims to provide the reader with an understanding of what emotional abuse is, how and why it comes about, and finally what consequences it can have on a child’s cognitive, social and psychological development.
The problem
Statements like “this child was badly verbally abused” have only recently entered the vocabulary used systematically by clinicians. According to Hornor (2012) these children are the victims of emotional abuse. However, despite the scope of the problem, the recognition of emotional abuse as a social issue is relatively recent. According to Egeland (2009) emotional abuse is viewed as less severe when compared to other forms of abuse but may actually be the most prevalent form of child abuse, and sadly, it is also the most under-reported, hidden and least studied form of abuse.
Why is this so?
You just have to open the newspaper or browse the Internet to read about cases of horrific physical or sexual abuse against children, but emotional abuse, on the other hand, has been slow to receive recognition as a serious problem (Hornor, 2012). This is because emotional abuse is much more difficult to identify and define than physical or sexual abuse. As emotional abuse has only recently begun to be explored, its long-term effects are not entirely clear and there is little epidemiological data available regarding it.
Definition
Emotional abuse of children is behaviour that causes severe adverse effects on their emotional development (Hornor, 2012). This form of abuse, which usually comes in the form of specific behaviours, is perpetrated by parents, adults, caregivers or older adolescents. The behaviour may be intentional or unintentional and can include acts of omission (neglect) as well as acts of commission (abuse) (Bromfield, 2005; Cristofel et al., 1992; Gilbert et al., 2009). There are a number of terms used to describe emotional abuse, including psychological abuse, psychological maltreatment, and emotional maltreatment (Hornor, 2012).
Forms of emotional abuse of children:
- Spurning: shaming, belittling or ridiculing the child, or punishing in a way that singles the child out, usually by humiliation.
- Rejection: avoiding contact and pushing the child away.
- Terrorising: threatening the child with acts of violence, or threatening treasured possessions.
- Neglect: failing to provide for medical or educational needs.
- Emotional unresponsiveness: ignoring the child and not expressing any affection, care or love.
- Isolation: placing unreasonable limitations on freedom and social interaction.
- Corruption: encouraging the development of inappropriate behaviours such as alcohol and drug use, sexual activity or inappropriate language.
- Inconsistent parenting: placing conflicting demands and expectations on the child.
- Violence: allowing the child to witness forms of domestic violence (Goldsmith & Freyd, 2005).
Doyle (1997) defines these acts of emotional abuse as “abuse of the child, as a sole or main form, which consists of acts of omission and commission which are judged on the basis of a combination of community standards and professional expertise to be psychologically damaging”. According to Doyle, such acts are usually committed by parental or other authority figures who are in a position of power over the child, leaving the child in a vulnerable position. Such acts of emotional abuse cause immediate damage and have a significant impact on the child’s behaviour, cognitive abilities, social interactions and, ultimately, psychological functioning.
What constitutes emotional abuse?
Emotional abuse is subject to controversy and, as it is the most recently identified form of mistreatment, it is only just being understood and treated by professionals (Hornor, 2012). Research began in the 1980s. However, owing to the problems of defining emotional abuse and the lack of observational studies, there have been a number of contradicting views on the issue. McGee & Wolfe (1991) argued that damaging parental behaviour should be considered the primary indicator of emotional abuse. Conversely, other researchers have stated that parental behaviour alone cannot be the sole predictor of emotional damage in children and that emphasis should be placed on child outcomes arising from the consequences of abuse and neglect (Aber & Zigler, 1981; Kavanagh, 1982). These conflicting views show that further research needs to be undertaken in this area.
Risk factors associated with poor outcomes for children exposed to emotional abuse include, but are not limited to:
- Socio-economic disadvantage
- Larger families with a number of children
- Social isolation or living in dangerous neighbourhoods
- A parent or caregiver with depression or a drug- or alcohol-related dependence
- A child with a disability
- The duration and frequency of the emotional abuse
- The age and developmental stage of the child when the abuse first occurred (the younger the child, the more likely they are to experience issues later in life)
- The severity of the emotional abuse
- The type/s of emotional abuse
- The child’s perception of, and relationship with, the person perpetrating the emotional abuse (Iwaniec, 2003).
What are the consequences of emotional abuse?
Emotional abuse has specific and independent consequences and might, in fact, be the most pervasive and damaging type of abuse (Hornor, 2012).
Children who undergo emotional abuse suffer a distinctive form of mistreatment. The weapons used against them are not visible and do not consist of things such as belts, cords or sexual acts, which constitute physical and sexual abuse. Rather, these children are tormented with harsh words or uncaring silence (Hornor, 2012). Although these children do not suffer physical pain, the consequences can be far longer lasting. This form of abuse is well known to serve as a risk factor for the development of mental illness in adulthood. Studies report strong associations between emotional abuse and:
- Post-traumatic stress disorder
- Depression
- Substance abuse
- Obesity
- Suicide (Hart, Brassard & Karlson, 1996).
Although the long-term impacts have yet to be fully explored, Hornor (2012) stated that recent studies have begun to examine effects including depression, anxiety and difficulties with interpersonal relationships. Van Harmelen and colleagues (2010) aimed to identify whether this was in fact correct. Their findings showed that a relationship between experiencing childhood emotional abuse and developing a depressive or anxiety disorder in adulthood did exist. Within the study, nearly 300 individuals with a current or previous diagnosis of an anxiety disorder or a major depressive disorder were interviewed about whether they had been subject to emotional abuse before the age of 16 years. Of these individuals, 93% reported that they had experienced emotional abuse in some form. These concerning results imply that being a victim of child emotional abuse can have serious adverse effects on future development.
Reactive attachment disorder (RAD)
Reactive attachment disorder (RAD) is another possible consequence of emotional abuse. According to Hornor (2012), RAD is defined as a markedly disturbed and developmentally inappropriate social relatedness that usually begins before the child reaches five years of age. The disorder is usually present where the child avoids, or responds with a cold watchfulness to, different social situations and, ultimately, has difficulty forming any type of relationship with anyone. However, RAD can also have a contrasting presentation in which a child will attach indiscriminately to any individual, including strangers. Such behaviours are a result of pathogenic care, which consists of persistent disregard of the child’s basic emotional needs for stimulation, comfort and affection (Hornor, 2012). Emotional abuse in childhood can therefore threaten the security of attachment relationships and can produce maladaptive models of the self in relation to others. A study by Wright, Crawford, and Del Castillo (2009) examined whether individuals who had experienced emotional abuse and neglect from their parents showed maladaptive long-term negative outcomes, including symptoms of depression, anxiety and dissociation. Sadly, the findings of this research concluded that perceptions of childhood emotional abuse and neglect continued to exert an influence on later symptoms of anxiety and depression. Wright stated that how a person evaluates an experience is more important than the event itself. Therefore, as the child gets older, they are likely to suffer long-term impacts such as negative beliefs about the self.
When parents are emotionally abusive towards their children, this teaches children that aggression and hostility are an appropriate way of getting what they want. As a result, children do not learn appropriate ways to develop interpersonal behaviours and consequently misinterpret the kind intentions of others. Ensink (2012) explains this issue in terms of reflective functioning – that is, the child does not have the ability to correctly interpret the feelings and thoughts of others, and as a result these children learn aggressive tactics and use them as a way to relate to those around them. The sense of shame that the child experiences when being emotionally abused can, during adolescence, interfere with their optimism and their ability to perform at their best in different aspects of their life. Children may feel that they do not serve a purpose, and once they reach adolescence drug and alcohol dependence is a likely result (Moran et al. 2004). There is no one way that a child exposed to emotional abuse will be affected. For some children the effects may be long lasting and devastating, whereas for others the experience may have less aversive outcomes. According to Hornor (2012), a number of life experiences can influence a child’s vulnerability or resilience when faced with emotional abuse – that is, the ability the child will have to cope and thrive despite being exposed to such negative experiences. When a child who has suffered emotional abuse has few protective factors, such as positive relationships with friends or other family members, serious adverse outcomes are very likely.
Relationship consequences
Relationship issues can arise as a consequence of emotional abuse. Rejection and the subsequent shame have been identified as among the most damaging consequences within adult relationships. Adults who experienced emotional abuse in their childhood are likely to have problems with anger, emotional reactivity and increased loneliness in adult intimate relationships (Healey, 2006). Below is a case study in which a mother emotionally abused her son; as a consequence of her actions, the son had difficulty understanding, identifying and feeling his emotions once he reached adulthood.
The role of emotion
Developing brain
The developing brain is very sensitive to the trauma caused by experiences of child abuse and neglect. In children the brain develops over time and through interaction with the environment. Interaction between genes and environment also plays a role in how the brain develops, particularly during the early developmental period (Hornor, 2012). Exposure to complex and chronic trauma can therefore result in persistent psychological problems. When children do not receive consistent, supportive and interactive relationships with adult caregivers, this has been found to have an especially harmful effect on young children's growing brains. Recent research has demonstrated that traumatic events, particularly during the first decade of life while the brain is still rapidly developing, appear to leave the neuroendocrine stress response systems permanently supersensitive. Individuals are therefore at risk of developing mental illness when they encounter additional stressful events in adult life (Healey, 2006).
Basic human needs model
The basic human needs model, which has received significant support for its heuristic value in explaining emotional abuse, postulates that this form of abuse prevents the fulfilment of basic human needs for safety, love and belonging, and self-esteem. This theoretical model is useful in determining which characteristics make an individual susceptible to psychological maltreatment (Iwaniec, 2003).
Attachment theories and models
Attachment theory proposes that the quality of a child’s early experiences and relationships with their caregivers in infancy influences their later experiences and relationships with peers. A child who experiences a secure attachment during infancy is said, according to Hornor (2012), to have different qualities and predispositions from peers whose attachment was insecure. The conceptual model grounded in attachment theory suggests an intergenerational cycle if intervention does not occur. This model proposes that insecure attachment styles and emotional abuse in early caregiving relationships have a profound impact on the attachment system, which affects individual development at every phase of life. Childhood emotional abuse initially interferes with the ability to regulate emotions and contributes to insecure attachment characterised by negative perceptions of the self and others. As the child develops, such deficits may create interpersonal schemas that can interfere with social functioning and relations with peers in adolescence. Psychological disorders and emotional disruptions occur in adulthood due to these insecure internal models of attachment and coping patterns (Healey, 2006). As a result, these cognitive factors and avoidance tactics produce fearful attachment styles.
Case study
Nathan, a 32-year-old man from Casey in the ACT, consulted his general practitioner when he was having problems at work. Nathan was feeling down more days than not and had problems concentrating on regular daily tasks. He told his GP that he felt unhappy and sometimes empty. He went on to say that although he knows his wife loves him very much, he has difficulty feeling that love. His GP prescribed antidepressants, which after a few months proved unsuccessful; following this he was referred to cognitive therapy. Throughout his time in therapy, Nathan expressed how events in his life did not bring him joy like they once did. He knew he was unhappy but could not identify the source of his unhappiness. It was not until the therapist asked Nathan about experiences and events from childhood that he described growing up as part of a broken family in which his mother re-married and had three other children in her new relationship. Nathan began to tell the therapist of times when his mother had rejected him and he was overlooked by his stepfather. Nathan’s mother treated him with derision: she regularly shamed him and verbally denigrated him, and on one occasion she locked him in his room for a whole day without any food or water. He believed this treatment to be a consequence of being a constant reminder of his father, who was never present in his life.
How can it be dealt with?
It is evident that emotional abuse of children can have serious consequences for those involved.
Therefore, it is vital that cases of abuse are identified and an appropriate form of intervention implemented to decrease the long-term effects on children (Hornor, 2012). There is no single way to help children and their families; rather, a number of intervention strategies should be applied, based on a comprehensive holistic assessment that addresses the needs of both the children and the parents. Such intervention methods and approaches should aid in healing the child and also help parents to better understand a child’s developmental needs. Community-based services can also provide support to families facing difficulties such as poverty, social isolation, family violence and substance abuse. Interventions and treatments should aim to repair the damage and strengthen the parent-child relationship, enhancing the child’s sense of security and belonging and acknowledging their basic human needs.
Cognitive approach
Characteristics of emotionally abusive parents include self-defeating thoughts and the belief that they are incapable of dealing with difficult life tasks. Such dysfunctional thoughts and feelings consequently lead to negative outcomes for the parents and the child (Hornor, 2012). According to Beck (1976), dysfunctional thoughts and beliefs generate a sense of helplessness. Being in such a state of mind generates indifference and leads to serious neglect of a child’s emotional needs. Interventions that focus on the cognitive aspects of emotional abuse help parents generate positive thoughts, beliefs and feelings about their capabilities, and provide them with opportunities to learn new methods of dealing with life stresses and caring for their children. The cognitive approach helps neglectful parents address attitudes and perceptions of parental responsibilities; ultimately the therapy aims to raise their awareness of the child’s developmental needs and help the child reach their potential to be a healthy and happy individual.
Attachment theory – What can a parent do?
There is a variety of methods by which children with attachment disorders can be treated and the parent-child bond restored. When the child is in infancy, Iwaniec (2003) suggests that a parent should engage in proactive maternal behaviour during bathing, feeding and changing, and should respond appropriately to the child’s signals of need. Appreciating the bonding experience will create a secure attachment between mother and child; if, however, the child shows anxiety or resistance, the mother should hold the child close, and this interaction should be repeated a few times a day. When the child is older, Iwaniec (2003) recommends being warm and reassuring when talking to the child, reading them a story or simply doing an activity together. Such interactions should take place a few times a day; as the child gets older they will begin to feel more relaxed, and gradually, over time, the child will seek contact with other people and begin to develop close relationships with others.
Conclusion
It is evident from research that emotional abuse is detrimental to children. Therefore, any individual, not just professionals, needs to be open-minded when dealing with children who have been emotionally abused. They also need to be open-minded with parents.
It would seem appropriate that a multidimensional approach be applied when designing treatments and interventions, as every case will have different circumstances. Many services are available not only to children who have suffered abuse, but also to parents and caregivers who require assistance.
See also
- Emotional development
- Relationships and happiness
- Bullying and emotion
- Parenting and emotional development in children
References
Doyle, C. (1997). Emotional abuse of children: Issues of intervention. Child Abuse Review, 6, 330-342.
Egeland, B. (2009). Taking stock: Childhood emotional maltreatment and developmental psychopathology. Child Abuse & Neglect, 33, 22-26.
Ensink, K. (2012). Mother-infant attachment: Reflective functioning in mothers with histories of childhood abuse and neglect. International Journal of Psychology, 47, 772-772.
Goldsmith, R., & Freyd, J. (2005). Awareness for emotional abuse. Journal of Emotional Abuse, 5(1), 95-123.
Hart, S., Brassard, M., & Karlson, H. (1996). Psychological maltreatment. In J. Briere, L. Berliner, et al. (Eds.), APSAC Handbook on Child Maltreatment (pp. 72-89).
Healey, J. (2006). Child maltreatment. Thirroul, NSW: Spinney Press.
Hornor, G. (2012). Emotional maltreatment. Journal of Pediatric Health Care, 26(6), 436-442. doi:10.1016/j.pedhc.2011.05.004
Iwaniec, D. (2003). Identifying and dealing with emotional abuse and neglect. Child Care in Practice, 9(1), 49-61.
Journal of Emotional Abuse (1998). Interventions, research & theories of psychological maltreatment, trauma & nonphysical aggression.
McGee, R. A., & Wolfe, D. A. (1991). Between a rock and a hard place: Where do we go from here in defining psychological maltreatment? In D. Cicchetti (Ed.), Development and Psychopathology. Cambridge: Cambridge University Press.
Moran, P., Ghate, D., & van der Merwe, A. (2004). What works in parenting support: A review of the international evidence. London, UK: Policy Research Bureau, Department for Education and Skills.
Wright, M. O., Crawford, E., & Del Castillo, D. (2009). Childhood emotional maltreatment and later psychological distress among college students: The mediating role of maladaptive schemas. Child Abuse & Neglect, 33, 59-68.
Van Harmelen, A. L., de Jong, P. J., Glashouwer, K. A., Spinhoven, P., Penninx, B. W., & Elzinga, B. M. (2010). Child abuse and negative explicit and automatic self-associations: The cognitive scars of emotional maltreatment. Behaviour Research and Therapy, 48, 486-494.
https://en.wikiversity.org/wiki/Motivation_and_emotion/Book/2014/Child_emotional_abuse
If you did not have a traumatic childhood, consider yourself lucky. There are a few memories that still play back in my mind – one of them was when I was punished for something I did not do. When I say punished, it doesn’t mean sitting in the corner; in India it is a common practice for children to be given a couple of whacks, slapped, hit with a slipper/flip-flop or even a cane. One of my favourite places to visit as a child was one of the annual festivals in suburban Bombay, Mount Mary’s festival. I used to get this little metal boat that I could put in a tub with a little candle to power it, and this used to make me really happy. On the other hand, this festival had something of interest that most parents took home: a thin bamboo cane used for whipping. So it was always a funny thing, on one hand you have a boat and on the other a cane. I still remember walking down the streets of the festival and seeing the shops that sold the edibles, candles, so many knick-knacks and, of course, the shops that sold the cane. There’s a particular incident that I recollect very vividly to date, because I was not believed to be telling the truth when I was. I have always been fond of animals, and as a kid we used to have these little budgerigars or love birds at home in a little cage. One day Sunny managed to escape his cage. When I noticed it I ran and told my mother, but she did not believe the bird could get out of the cage without assistance. He was sitting on the pelmet and we couldn’t reach him, so she called my grandfather, and it was difficult catching him with him flying about out of their reach. Even after he was put back in the cage, neither of them believed he could have escaped on his own, and I was at the receiving end of a situation I had not created. Every time I go back to my childhood I’m flooded with unhappy memories; I actually have to struggle to recollect the good ones, and this is how our brain works as humans. If you have read my article on ways to grow beyond your childhood trauma, you will know that I counter this behaviour by trying to recollect as many memories as I can that brought me joy in my childhood, so it becomes easier to let go of the bad ones. I’m not here to talk about past traumatic experiences; I understand that the way I was brought up was the norm for most children of my time. I remember the howls and screams of my neighbour’s children – it seemed to me they had it quite bad as well. I’m well aware that it’s not just me who has suffered, some have suffered far worse and some less, but today there’s far more research on how going through this in childhood impacts adult life. I do hope more parents gain the strength to break through generational patterns and provide children with an environment that helps build healthier adults.
“The greater a child’s terror, and the earlier it is experienced, the harder it becomes to develop a strong and healthy sense of self.” ~ Nathaniel Branden, Six Pillars of Self-Esteem
Importance of Childhood
The first 7 years of our lives lay the foundation for later learning and healthy development in all areas – emotional, psychological, physical or spiritual. If during that time a child has negative or unpleasant experiences, such memories stay with the child for the rest of his/her life, leading to emotional challenges as an adult. In the first seven years the child is like a sponge, absorbing everything from his environment: our beliefs, behavioural patterns, responses and so on.
So a happy childhood leads to an adult who is a balanced individual, whereas an unhappy childhood may cause imbalances in our thinking, behaviour and so on. Rudolf Steiner suggests that before the soul of the child incarnates, the soul chooses its family and parents.

"Experience has taught us that we have only one enduring weapon in our struggle against mental illness: the emotional discovery and emotional acceptance of the truth in the individual and unique history of our childhood." ~ Alice Miller, The Drama of the Gifted Child: The Search for the True Self

The effects of childhood trauma in adults

Unfortunately it is not just bad memories that we carry forward into our adult life; childhood trauma leaves lasting effects on the brain and personality. Childhood trauma is not just rampant but is linked to multiple forms of dysfunction. Listed below are some effects of childhood trauma in adults that have been linked by research.

Common Psychiatric Disorders

Everyday life puts us into situations where we come across unknown people and situations, which is not easy for people who have suffered childhood trauma. According to a study on social anxiety disorder and childhood trauma in the context of anxiety (behavioural inhibition), impulsivity (behavioural activation) and quality of life, a correlation was found between the severity of social anxiety disorder (SAD) symptoms and the amount of childhood trauma exposure. The research assessed five dimensions of childhood maltreatment: physical abuse, emotional abuse, sexual abuse, physical neglect and emotional neglect. Those who have SAD display symptoms such as an intense fear of embarrassment, humiliation and negative evaluation, especially when they have to meet people they don't know or when they are publicly scrutinized. This makes things awkward, and the person avoids these situations, which ends up affecting their social life and career.

Adverse reactions

Life tends to throw us a curveball sometimes, and in worst-case scenarios there are times we hit rock bottom, but some of us manage to pull or crawl ourselves out of the situation. It isn't easy to do, and sadly, yet again, adults who have experienced childhood trauma have it far worse. According to a study on "The Biological Effects of Childhood Trauma", trauma activates the body's biological stress response systems, and this has behavioural and emotional effects similar to those seen in a person with post-traumatic stress symptoms. A person's biological stress response consists of different systems that interact with each other to protect us from threats and trigger the "fight or flight" response. The effects include inhibition of sexual behaviour, core symptoms of major depression, and higher cortisol levels, which in turn cause weight gain, slowed healing, muscle weakness, severe fatigue, irritability, medical illness and damage to brain structures. Adults with a history of childhood trauma also display hypertension, accelerated atherosclerosis, metabolic syndrome, impaired growth, immune system suppression and poorer medical health.

"The fetus is biochemically connected to the mother, and her external, internal, physical, and mental health affect the overall development of the fetus. Stress and depression during pregnancy have been proven to have long-term and even permanent effects on the offspring.
Such effects include a vulnerability to chronic anxiety, elevated fear, propensity to addictions, and poor impulse control." ~ Darius Cikanavicius, Human Development and Trauma: How Childhood Shapes Us into Who We Are as Adults

Drug Use

"The sheer weight of the many reports over the years certainly implicates child abuse as a possible factor in drug abuse for many people," says Dr. Cora Lee Wetherington, NIDA's Women's Health Coordinator. Although there isn't enough data to understand the complete situation, studies are being conducted that help link childhood physical abuse and adult substance abuse. In one such study, which included the social history and demographic data of 178 patients undergoing treatment for drug/alcohol addiction (101 from the United States and 77 from Australia), a whopping 84% of the sample reported a history of child abuse/neglect. Another study, of 733 women, found that those who were physically abused as children were more likely to abuse drugs as they grew older. Strangely, in this sample the results were the same even after controlling for family history of substance abuse, which makes a healthy childhood, along with early intervention and support, all the more imperative.

Quality of Life

Abuse survivors are less happy, less satisfied and find living less worthwhile compared to people who were not abused as children. According to research, the proportion of people who feel their health is very good is 9% higher among those who were not abused as children. Apart from an inclination towards drugs, studies have found that opioid-dependent individuals who have a history of sexual abuse have poorer mental and physical health compared to those with no sexual abuse in their past. Other studies link adverse childhood experiences to a variety of issues that lead to early death from diseases like cancer, diabetes and more, whether due to lifestyle or to habits that are coping mechanisms for dealing with the trauma. Sadly, people who have been abused have a higher probability of entering an abusive relationship in the future.

Anti-social Behaviour

Two studies that I came across while working on this article linked childhood maltreatment, and even harsh punishments dealt to children, to anti-social behaviour in adults. Pushing, grabbing, shoving, slapping and hitting were considered harsh physical punishment, while child maltreatment included physical abuse, sexual abuse, emotional abuse, physical neglect, emotional neglect and even exposure to intimate partner violence. Both studies confirmed that abused children display an increase in anti-social behaviour as adults. One of them, a long-term study, showed that even at the age of 50 the survivors still display this anti-social behaviour.

Protecting Childhood

Frederick Douglass said that "It is easier to build strong children than to repair broken men." Apart from the fact that he is correct, if we pause and take a look at how human systems are functioning – from war, to poverty, to racism or nationalism – we are ensuring we have an endless cycle of dysfunctional adults. Those of us who have children have to work really hard to ensure we raise healthy, functional human beings; it is even harder for those of us who experienced trauma growing up to break the cycle of abuse and consciously parent our children. My article on 4 ways to grow past your childhood trauma should help adults deal with their past, so they can rewrite their future and help protect childhood.
References:
- People who were abused as children are more likely to be abused as an adult
- Associations of Harsh Physical Punishment and Child Maltreatment in Childhood With Antisocial Behaviors in Adulthood
- Child maltreatment and the risk of antisocial behaviour: A population-based cohort study spanning 50 years
- A study of the relationship between child abuse and drug addiction in 178 patients: preliminary results

Image Source: Unequally Yoked by Kirsten Beitler
https://fractalenlightenment.com/51161/self-development/the-disastrous-effects-of-childhood-trauma-in-adults
- In this article, the author asserts that if libraries are to be safe places for patrons of all backgrounds, library workers in general must incorporate insights from other disciplines into their practice and begin to meaningfully address the complicated roles of police and security guards...
- Report: As American communities face increasing levels of social and economic division, this report showcases how five U.S. cities are reimagining public spaces—parks, trails, plazas, libraries—to bring residents together and revive neglected neighborhoods.
- Annual Report: UN-Habitat's Global Public Space Programme, launched in 2012, is now active in more than 30 cities across the world. The programme's objective is to promote public spaces as a keystone for sustainable cities to ensure a good quality of life for all. This is done...
- Report: This report maps policies, strategies and instruments used by metropolitan governments to foster safer public spaces, with particular reference to the safety of women and girls.
- Guide: This guide is intended for community members who see an opportunity to create better streets, but may be struggling to get their neighbors on board or spur government officials into action. As part of our Transportation Program, we conducted a "scan," speaking with 20...
- Report: With the 2030 Agenda for Sustainable Development, Member States agreed on 17 Sustainable Development Goals (SDGs) with 169 global targets, and nearly 234 indicators that will be monitored for the period 2015–2030. The targets are designed to be integrated and indivisible and to balance the...
- Conference paper: This paper discussed urban-specific wind patterns and their influence on urban liveability in the Adelaide CBD. Possible urban wind comfort design guidelines are presented and discussed for application in an Australian context.
- Conference paper: Thermally uncomfortable outdoor environments can significantly affect the liveability of cities. Australia is likely to experience between a 0.6 °C and 3.8 °C increase in temperature by the end of the 21st century. In warmer climates, increased demand for indoor air-conditioning results in higher energy demand and...
- Conference paper: Sound is a dynamic part of the urban landscape and is increasingly understood to be a central aspect that helps to shape people's experiences of the public realm. Australian urban planners, however, have little engagement with the theories on urban sounds, and as a result,...
- Conference paper: Public security and anti-terror urban design is increasing in Australian cities as governments respond to continued extremist attacks worldwide. However, controlling safety measures are driven by security agencies and police, rather than urban design professionals. Oftentimes, such outcomes prove detrimental to urban amenity, sacrificing quality...
- Journal article: Limiting global warming to 1.5 °C will require rapid decarbonisation of the world's electricity and transport systems. This must occur against a background of continuing urbanisation and the shift to the information economy. While replacement of fossil fuels in electricity generation is underway, urban transport...
- Annual Report: A sound knowledge of how informal green spaces are used, or of why they are not being used, can inform planners and decision-makers when intervening in such spaces to increase the liveability of urban neighbourhoods. Read the full article on The Conversation
- Article: More and more of us are living in denser cities where apartments and high-rise developments are increasingly common. This creates specific health concerns for residents of these areas, and for lower-income households in particular.
- Report: An extensive consultation program took place in 2016 and 2017 for the Caulfield to Dandenong Level Crossing Removal Project's 22.5 hectares of new open space. This report outlines the key feedback, ideas and recommendations from the Community Open Space Expert Panel (COSEP).
- Report: An extensive consultation program took place in 2016 and 2017 for the Caulfield to Dandenong Level Crossing Removal Project's 22.5 hectares of new open space. This report outlines the key feedback, ideas and recommendations received from the community, businesses and stakeholders during this consultation program...
- Policy: This policy establishes a baseline of what is expected to achieve good design, across all projects in NSW.
- Report: This audit reviewed the performance of the NSW Office of Strategic Lands (OSL). It found that the OSL does not currently have a strategic focus to improve land planning outcomes and also needs to address significant financial risks by implementing a proper, long-term financial strategy...
- Strategy: Adopted in 2008 following extensive community consultation, Sustainable Sydney 2030 expressed the community's vision and the City's commitment to the sustainable development of our city to 2030 and beyond. Sustainable development is not just about the physical environment. It is also about the economy, society...
- Report: The report analyses data from the first large-scale Australian survey of public wi-fi use. The report highlights shared challenges for public wi-fi users, employers, public wi-fi network providers, and policy-makers to promote public wi-fi security, while retaining the benefits of accessibility offered by this...
- Conference paper: Adelaide's planning history is replete with examples of the adoption and adaptation of iconic urban open space ideas. The making of urban open spaces, beginning with the Adelaide parklands, is a direct result of the diverse roles attributed to those spaces and the values placed...
https://apo.org.au/taxonomy/term/22444
The following draft petition and frequently asked questions are posted here on the KEC website to solicit input before such a petition is sent. Please send your comments to us at [email protected]

Petition to Commissioner of Public Lands, Hilary Franz (Draft 01-01-2021)

Stop Clear-cutting on State Forest Lands

We, the undersigned, call on Washington State leaders to take actions that will STOP ALL CLEAR-CUTTING OF STATE FOREST LANDS FOR TEN YEARS and use only selective harvesting of uneven growth trees. The urgency of impending (and already begun) climate catastrophe outweighs all other considerations regarding our forests. We understand that livelihoods and the state's funding are linked to the current way our state manages its forests. But we can no longer afford to maintain the present system at the expense of the planet's greater survival. Destructive wildfires have become routine, their smoke choking our towns and cities; summers are hotter and drier; hemlocks and cedars are starting to die: climate change is already with us. The time has come to direct all possible resources toward realistic solutions for preserving the precious ecosystem that sustains life. A moratorium on clear-cutting can reduce the present Washington State carbon footprint by 52.5 metric tons of greenhouse gases and restore the health of the forests, soil, water, air and all species that depend on forests for habitat and food. The moratorium on clear-cutting does not by any means advocate ceasing all timber harvesting. Selective logging (a) supports the ability of mature trees to sequester carbon, (b) helps prevent forest fires, (c) protects the diversity of plant and animal species, (d) protects our rivers and streams, (e) helps prevent mudslides and erosion, (f) helps control rainfall, and (g) supports healthy soil in forests. Clear-cutting has involved large-scale application of toxic herbicides. The use of sustainable alternatives will promote the recovery of habitat critical to the health of soil, wildlife, streams and sea life, including salmon and orcas – ultimately, every aspect of the natural environment. The tribal nations of Washington State are the original stewards of the environment, living with the natural world in mutuality, not exploitation. Just as indigenous people's identities, health, and survival depend on nature in balance, so does the survival of all Washingtonians. A moratorium on clear-cutting might seem like an inconceivable social and economic shift. But if the coronavirus pandemic has taught us anything, it is that when great crisis occurs, citizens and leaders are capable of marshaling the will for great, and rapid, change.

Frequently Asked Questions (Draft 01-01-2021)

The following Frequently Asked Questions (FAQ) provide background for this petition.

How does this request fit with the mission of the Washington State Department of Natural Resources?

In its own words, the mission of Washington State's Department of Natural Resources (DNR) is to "manage, sustain, protect the health and productivity of Washington's lands and water to meet the needs of present and future generations." It also has responsibilities for some funding of education and county services through its trust lands. Currently, the priority appears to be the state budget rather than protecting our ailing forests, lands, and waters.
The routine business of the Board of Natural Resources (BNR, which oversees the DNR) consists of selling state lands, and timber rights on state lands, to the highest bidder in order to help fund state schools and county services. Current forestry practices do not support the long-term health of our forests, which are essential to the well-being of all of life for future generations.

If clear-cutting is not allowed, what forestry would be allowed?

Rather than engaging in clear-cutting of even growth trees, we call on DNR to support selective logging (harvesting) of uneven growth trees. That is, a forest needs to have trees of many different ages to support the health and well-being of the soil, water, plants, and animals in the forest. Such harvesting needs to be done in a way that preserves the many functions of trees in a forest.

Why is stopping clear-cutting of our forests so important?

Here are three reasons: (a) the ability of forests, especially mature trees, to sequester carbon, (b) the prevention of forest fires, and (c) the protection of the biodiversity of plant and animal species.

Carbon Sequestration

As the WEC has stated: The forests of the Pacific Northwest are among the most carbon-dense ecosystems on the planet. Short rotations and intensive harvesting significantly reduce our forests' ability to sequester carbon. Carbon sequestration can be dramatically increased by changes in forest management. A diverse forest, with the many forms of life within it, contributes to the natural cycle of water (including capturing moisture from the clouds and bringing it down to the streams), the protection of the streams (through, e.g., shading the streams from excessive heat from the sun and maintaining the structure of the soil during heavy rains), the support of species from bacteria and fungi in the soil to large mammals that live in the forests, and more. In industrial-scale forestry, trees are often harvested when they are in the 40-50 year age range, even though their natural life is in the range of 100-120 years. Older trees sequester more carbon than younger trees, so those trees are being harvested just as they approach the years when they would be sequestering the most carbon.

Forest Fire Prevention

In September 2020, entire towns and forests in Washington, Oregon, and California were devastated by wildfire and the very air we breathe was being polluted. Accelerating climate change, long predicted, is surely to blame. Clear-cutting is hastening the irreversible loss of habitat for wildlife and of a livable environment for human beings. Clear-cutting relies on the use of toxic pesticides, which also act as desiccants (drying agents that enhance flammability).

Protection of Biodiversity

Biodiversity of plants and animals is an essential means by which nature handles the range of conditions that affect life. Although we often want to think otherwise, humans have insufficient knowledge to handle the complexity of nature. The diversity of trees and other species, which makes for a regenerative forest that maintains flexibility and supports the biodiversity needed to sustain life, is being undermined by clear-cutting.

How are decisions being made about forestry practices?

Currently the interests of industry, big business, and the prioritization of using our forests to generate revenue have dominated decision-making. Despite the pleas of citizens' groups citing scientific evidence of the direct relationship between deforestation and climate change, the current practices of DNR are hastening climate collapse.
How urgent is the need for a moratorium on clear-cutting?

The UN has warned that only ten years remain in which to act on climate change. The recent fire crisis in California, Oregon, and Washington is evidence of the urgency, and suggests that ten years may be optimistic.

Can we make such a major change?

The state, nation, and world's response to COVID-19 shows that we can make changes that once seemed impossible when faced with a crisis. Climate change is a bigger crisis than COVID-19; it is just relatively slower moving. However, its pace is picking up, and its magnitude is greater. We must leave remaining forest lands to do their precious work of maintaining watersheds and carbon sequestration.

How was the greenhouse gas sequestration amount estimated?
https://kitsapenvironmentalcoalition.org/2020/11/08/a-call-for-a-moratorium-on-clear-cutting-of-state-forest-lands/
Estimated difficulty: 💚💚

It's a wonder what is up in those clouds; clouds can mean many different things in this day and age. Clouds live in the sky, there is something called cloud computing and, more importantly (for this article), clouds can represent the internet – especially in a network diagram! But what actually is the internet, how does it work and is there really a castle up in the clouds? This post is going to talk about what the internet is, what the world wide web is and how we can protect ourselves as a part of this network.

What is the internet?

You, like many other humans, are connected to the internet through your mobile phone, laptop, desktop, tablet, smartwatch and so many different devices! The internet is a gigantic web of computers and devices connected together. For this post, I am going to refer to these devices as computers.

"A large system of connected computers around the world that allows people to share information and communicate with each other." – Cambridge Dictionary

The main idea behind the internet was for one computer to be able to communicate with another computer. Originally called ARPANET, the early internet saw the first recorded communication between two computers in 1969. Academic researchers wanted to connect these computers so they could share research and other similar documents. Since then, the internet has evolved into what we know it to be now. However, I am sure a handful of us still hear the war stories of websites taking minutes to load, or worse yet, life without a computer!

Protocols

Computers use something called protocols to communicate with each other. A protocol is a defined way of how these computers should "talk" to one another. The Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite was created during the early development of the internet. Protocols have been added to this suite as the years have gone by and as the functionality of the internet has changed. To explain this, let's have a look at what protocols might be involved in you sending a message to your friend next door. This shows the communication at a high level. Tech Quickie on YouTube explains TCP/IP well. The message is sent from Lisa's machine; that data is encapsulated as it goes top-down through each layer of the TCP/IP model, and when Zara receives the message the data is decapsulated as it goes bottom-up through each layer of the TCP/IP model. (There is a small code sketch at the end of this post showing a message making this journey between two sockets.)

How do we use the internet?

We can use the internet for many different types of things: communicating with each other like we talked about earlier, social media, watching videos and learning things! You are using the internet to learn something by reading this blog post right now! But what actually happens when we visit a website like this blog?

URL

First things first, we open up our browsers and navigate to something called a URL. Our browser is an application that is used to retrieve and display content from a website. A URL – standing for Uniform Resource Locator – is the address of the website you want to go to. Your computer needs to find the IP of this URL so it knows where to send your request to access the website. The URL is separated by a "." (full stop) to break it down into different sections.

https://securityqueens.co.uk

.uk – This is a country code Top Level Domain (ccTLD). Other TLDs may look like: .com, .org and .net, but really there are many different TLDs and ccTLDs out there. This signifies that a domain is from a particular place.

.co. – This is a second-level domain of the ccTLD.
You might often see .co.uk or even .gov.uk. .co.uk normally means that the website you are visiting is a commercial UK website, and .gov.uk means you are navigating to a UK government website.

securityqueens – This is the domain name. You can buy your own domain to set up your own website if you really want to! You can purchase a domain through a registrar, who will manage the different domain names that have been registered.

https:// – HyperText Transfer Protocol Secure. It encrypts and handles the transmission of web pages. There are other protocols for different things.

The entire URL/address is unique to the website you want to visit.

Top tip: Sometimes malicious websites will try to look like the URL you want to visit, but often have a typo in them to trick you into trusting their site. Look out for these mistakes to avoid visiting a potentially dangerous website.

How does it work?

To put it into words, you will enter the address https://securityqueens.co.uk into your browser's URL address bar. That request, if the answer is not already found in a cache, will be routed to a root name server; this server will look at the URL and, depending on the TLD, send it on to the appropriate TLD server. In our case, another request would be sent to a ccTLD name server for .co.uk domains. A third request is sent to the authoritative name server, which should contain the record of the correct IP for our URL. (A minimal sketch of this lookup, and of breaking a URL into its parts, appears at the end of this post.)

World Wide Web

The world wide web is a crucial part of the internet. Instead of connecting physical computers, the world wide web connects information. Documents and different types of information are identified by URLs (like we mentioned earlier), and these URLs can be embedded or linked through a hyperlink in web resources like a document or web page. You can see an example of this in this very blog post. I have added hyperlinks to this page to allow you to click the link and check out another webpage to read more on the topic, or to know my source for this information.

Game On!

So now we know more about the internet and the world wide web, let's test the theory! When I was younger, in our ICT lessons we used to play a game. This game involved a friend and me racing to get from one webpage to another in the shortest time. In Wikipedia, many of the pages are linked through hyperlinks, which is what we learnt about earlier. The rules were that you had to go from one Wikipedia page to another agreed Wikipedia page by only clicking on hyperlinks within your starting Wikipedia webpage. Try it yourself or with your mates (albeit when we are allowed to see friends again).

- Try going from the Wikipedia page Internet to the Wikipedia page Phishing, by only clicking the hyperlinks within the page.

My answer: you can click on the links in the following order:
- Internet
- Blogging
- Hackers
- Exploit
- Vulnerabilities
- Social Engineering
- Phishing

Leave a comment below if you managed to get from Internet to Phishing in fewer clicks!

How to surf safely

There are many different things we can do to make sure that we are safe when we are searching the web. I have created a checklist of things to look out for to make sure you are safe when surfing the web and when you are connected to the internet.

From my computer to your computer, I hope you have enjoyed reading this post and do let us know what you think!
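As promised in the Protocols section, here is a minimal sketch (mine, not from the original post) of "Lisa" sending a message to "Zara" over TCP using Python's standard socket library. Both ends run on the same machine purely for illustration, and the host, port and message text are made-up values; real traffic would of course travel between two different computers.

```python
# A toy illustration of one message travelling over TCP, as described above.
# Host, port and the message are assumed values for the demo.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050

# Zara's side: open a socket, bind it and wait for a connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def zara_listens():
    conn, _ = server.accept()      # wait for Lisa to connect
    with conn:
        data = conn.recv(1024)     # the bytes come back up the stack here
        print("Zara received:", data.decode())
    server.close()

threading.Thread(target=zara_listens).start()

# Lisa's side: connect and send; TCP/IP handles all the layers underneath.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as lisa:
    lisa.connect((HOST, PORT))
    lisa.sendall("Hi Zara, fancy a cuppa?".encode())
```

Everything below the `sendall` call – encapsulation, routing, decapsulation – is handled for us by the TCP/IP stack, which is rather the point of having a protocol suite in the first place.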
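And to tie together the URL breakdown and the name-server lookup described above, here is a second small sketch (again mine, not part of the original post). It pulls a URL apart into its pieces and then asks the operating system's resolver – which performs the root/TLD/authoritative chain, or answers from a cache – for the matching IP address. Only Python's standard library is used, and the URL is simply this blog's own address.

```python
# Break a URL into the pieces discussed above and resolve the domain to an IP.
from urllib.parse import urlparse
import socket

url = "https://securityqueens.co.uk/castle-on-a-cloud/"

parsed = urlparse(url)
print("Protocol:", parsed.scheme)          # https
print("Domain name:", parsed.hostname)     # securityqueens.co.uk
print("Path:", parsed.path)                # /castle-on-a-cloud/

labels = parsed.hostname.split(".")
print("ccTLD:", labels[-1])                # uk
print("Second-level domain:", labels[-2])  # co

# Ask DNS (via the OS resolver) for the IP address behind the domain name.
print("Resolved IP:", socket.gethostbyname(parsed.hostname))
```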
https://securityqueens.co.uk/castle-on-a-cloud/
As leaders in health care, we want to know how to make the experience better for patients and for our care teams. In academics, we want to strengthen educational experiences and excel at research. In clinical operations, we want to provide quality care in a learning environment. To that end, we ask for feedback from our teams. Sometimes, though, the feedback isn't really what we (think we) want to hear, the data isn't clear, or the next steps aren't obvious. U of U Health uses two main surveys to get faculty and team member input: Waggl and WellCheck.

| | Waggl | WellCheck |
| Audience | academic employees | hospitals and clinics employees |
| Domains | | |

Both surveys cover similar domains and both are intended to examine the needs of the team, rather than serve as a leadership gauge. When we examine the needs of our teams, we gain insight into how we can create environments conducive to engagement and improvement. This article provides an approach to looking at U of U Health engagement survey results.

Is the data accurate?

The first question we often hear is: Is this data accurate? Follow-up questions include: Does it represent my whole team? How do parts of the group compare to others? Sometimes, when these answers aren't clear, paralysis sets in. The data gets pushed to the back burner and nothing changes. These surveys will never be 100% accurate. The responses from the surveys aren't "right or wrong." Rather, they give us a jumping-off point to think about the needs of our teams, not simply a referendum on our leadership.

What does a low score mean for my team?

First, let's acknowledge the obvious. You're human, you work hard, and you care a lot about your team. It's hard not to take this feedback personally, and that's okay. It's normal not to feel good about it. It can be hard to remember that groups struggle for lots of reasons and that the purpose of gathering data is to support teams, not punish individual leaders. The impulse to "fix it" in health care is strong. Before heading into your group with "You don't think I keep you informed?" or "You don't think you can tell me your opinions? Tell me why you can't tell me your opinions?", let's pause and consider ways to approach this feedback more thoughtfully. Starting from a place of reflection and curiosity can help expose changes that have occurred over time that might have led to breakdowns in communication and team connection. There are many factors that can contribute to low scores in communication. It can often stem from interpersonal conflict or unclear roles and expectations. While addressing these can be tricky in the beginning, acknowledging the concerns and working toward improvement can have a high impact on the well-being of individuals and the effectiveness of the team. We recommend you start with a few basic questions:

#1: Is my group ready to have a conversation about how we are doing?

Leaders and team members are often in different places about how safe it is to share opinions or provide critical feedback, especially in front of one another. Psychological safety is present when people can share their ideas without fear of retribution. Psychological safety is essential for high-functioning teams, and it is important to know how your team is doing in this realm before you decide how to talk about the data. There are a few survey questions that can shed some light on whether your team is ready to wade in. The communication domain consists of the following questions:

- I can express my opinions without fear of retribution. (Waggl)
- My input is sought, heard, and considered. (Waggl)
- My immediate supervisor keeps me informed. (Waggl)
- People here are held accountable for their words and actions. (WellCheck)
- I feel comfortable bringing up problems and issues that I see. (WellCheck)

When you're seeing a decrease or, for Waggl, yellow or red in these areas, particularly in the question that asks about fear of retribution, we recommend connecting with a facilitator to help you determine next steps.

Facilitators who can help: A facilitator will help you explore what might be contributing to low scores and chart a path forward. A facilitator can help ensure a process that helps people feel comfortable speaking up. Listening involves open and honest discussion of what's working and what isn't. It has to be a no-blame, no-shame, any-idea-goes exercise. Shutting someone down, even once, can prevent important information from surfacing.

When your team is feeling good: For those with a high degree of psychological safety, we recommend leaders share the data with their teams, acknowledging what the data appears to say. With curiosity, ask questions like:

- Does this data match what we see day to day?
- Is this our reality?
- What should we be doing as a group to address these concerns?
- Something we should be doing more of? Less of?

These questions can get the group moving towards improvement. You can also take a look at this list of questions to help identify what might be holding the team back.

#2: What does this question mean?

We also get a lot of questions about how to interpret the meaning of some questions. In addition to communication, we group the remaining questions into burnout and stress, respect, control and resources, and advancement.

Burnout is not a problem for me. (Waggl & WellCheck)
My work-related stress is manageable. (Waggl)

The important thing to remember is that stress and burnout are downstream, outcome variables. These scores are impacted by a myriad of things occurring upstream, like psychological safety, control and staffing. If these scores are low, a conversation about upstream drivers is the next step. What is contributing to burnout and stress for your group? You want to consider problems at the individual, team, and system level. One way to generate local insight is to use the Team Assessment Tool for Thriving. The tool takes a deeper dive into reflection questions and provides a score. You can repeat the assessment over time to learn what's normal – and what's not – for your individual team. If your team is ready to dig into local solutions, check out the Listen, Sort, Empower Tool.

My organization values and respects employees across gender, race, age, religion, ability, etc. (Waggl & WellCheck)

Results for this item tend to look positive in aggregate, but scores for this item can be lower when we start to break down results by historically marginalized groups. Low scores here track with issues related to burnout, stress, advancement, and communication. If you have not had any conversations related to equity, diversity and inclusion with your group, or if your team scored low on this item, we recommend consulting with a facilitator before approaching this question. Addressing equity, diversity and inclusion is imperative for every employee to feel seen, valued and heard. It is important that no one feels singled out in the process.

I have control over my workload. (Waggl)
I have access to the tools and resources I need to do my job. (Waggl)
People on my team are held accountable for their words and actions. (WellCheck)
I feel comfortable bringing up issues and problems that I see. (WellCheck)

Control over workload is linked to stress, burnout, and decreased satisfaction with work. While increasing control can be challenging in healthcare settings, consider options when feasible, such as increased flexibility in work hours or location, the ability to take vacation, and the ability to provide input into workload or work type. The same goes for having access to the tools and resources needed to do a particular job. Listening and problem solving with your team may generate some practical solutions to address these issues. For struggles that require a more long-term approach, it is important to remember that simply acknowledging the problem and sharing what can and cannot be done about the issue can go a long way with your team.

I have adequate opportunities to advance my career at the University of Utah. (Waggl)
My manager sets clear expectations, manages performance, and provides useful feedback. (WellCheck)

Not everyone has big aspirations for their future career, but most people are interested in growing their role over time. Having time devoted to meaningful work is linked to more job satisfaction and less burnout. Consider asking your team members what brings them joy at work and cultivating an area of focus. Many faculty and staff need some coaching or mentorship to get there.

#3: How low is low? When should I be worried?

Well-being data is a moving target. During the pandemic, scores have gone up and down. In February 2022, burnout and work-related stress are definitely up. Faculty and staff turnover continues to be high. The fatigue of two years of Covid is real. Groups that are markedly higher or lower than the average for the institution are worth a deeper look. If you would like help looking at the data, reach out to the Hospitals and Clinics Organizational Development team (WellCheck), the Waggl team (academic human resources), the Resiliency Center, or Health Equity, Diversity and Inclusion. These teams can walk you through how to interpret results and discuss what next steps might look like. If you identify an individual who is struggling, resources are here.
https://accelerate.uofuhealth.utah.edu/resilience/what-does-this-question-mean
Cancer is characterized by multiple genetic and epigenetic alterations that drive malignant cell proliferation and confer chemoresistance. The ability to correct or ablate such mutations holds immense promise for combating cancer. Recently, because of its high efficiency and accuracy, the CRISPR-Cas9 genome editing technique has been widely used in cancer therapeutic explorations. Several studies used CRISPR-Cas9 to directly target cancer cell genomic DNA in cellular and animal cancer models which have shown therapeutic potential in expanding our anticancer protocols. Moreover, CRISPR-Cas9 can also be employed to fight oncogenic infections, explore anticancer drugs, and engineer immune cells and oncolytic viruses for cancer immunotherapeutic applications. Here, we summarize these preclinical CRISPR-Cas9-based therapeutic strategies against cancer, and discuss the challenges and improvements in translating therapeutic CRISPR-Cas9 into clinical use, which will facilitate better application of this technique in cancer research. Further, we propose potential directions of the CRISPR-Cas9 system in cancer therapy. Trial registration: ClinicalTrials.gov NCT02793856 NCT02867345 NCT02863913 NCT02867332. Keywords: CRISPR-Cas9; Cancer genome manipulation; Cancer therapy; Gene therapy; Genome editing. Copyright © 2016 Elsevier B.V. All rights reserved.
https://pubmed.ncbi.nlm.nih.gov/27641687/?dopt=Abstract
Serena Williams thrashes Maria Sharapova at US Open with big sister Venus also starring

Serena Williams began her US Open campaign in brutal fashion, thrashing long-time rival Maria Sharapova 6-1, 6-1 on Arthur Ashe. Sharapova has always struggled against Williams, but perhaps expected to do better in this one given her excellent record in night matches at the US Open. However, she was never really involved in the match, with Williams calmly dismantling the Russian's game in under an hour.

"Obviously I am going against a player that has won five Grand Slams and has been in finals of even more," Williams said, keen to stress her immense respect for Sharapova. "It's not easy and every practice has been very intense because that is an incredibly tough draw. She's such a good player so you have to be super focused.

"Every time I come up against her I bring out some of my best tennis. She's the kind of player that gets momentum when she gets going. It was a fun match."

It was a happy day for Venus Williams too, as she was even more dominant than her younger sister. The two-time champion dropped just a single game as she dismissed the challenge of China's Zheng Saisai 6-1, 6-0.

"I was happy with today, so I'm not going to ask for more," Venus said. "Whether the win is easy or whether it's tough, a win is a win.

"Getting to the next round is about getting the win on your side and building yourself up during the tournament and patting yourself on the back for every good achievement."
https://www.tennis365.com/us-open/serena-williams-thrashes-maria-sharapova-at-us-open-with-big-sister-venus-also-starring/
Merchiston Community Council welcomes you to our home on the web, where you can find a range of issues and information about the area — as well as information on Merchiston Community Council meetings both past and future. If you have any questions, please do feel free to contact us.

Latest News

- Police Long Term Survey – Influence the priorities of the Police for Merchiston. Police Scotland would like to know what issues you think they should ...
- PUBLIC CONSULTATION – West Princes Street Gardens. There are 5 days left for the public consultation described as "The Quaich Project's vision is to reimagine West Princes ...
- Make your voice heard: Upcoming Consultations. There are a number of ongoing consultations which may be relevant to residents of Merchiston. Please take advantage of these ...
- Trial to cleanse the detritus from in and around allocated bin bays in Morningside extended. There is good news for the Morningside Ward thanks to Councillor Neil Ross following a response to a Morningside resident ...
- A greener feel to the redevelopment of the Fountainbridge former brewery sites. Read more about community-led suggestions on the redevelopment of the former brewery sites at Fountainbridge. The Green Plan Report via the ...
- Community Council member elections held. Welcome to re-elected and newly elected members of the Merchiston Community Council! The MCC will officially elect office bearers in the ...
https://merchistoncc.org.uk/
Travel through the state’s vast national parks and discover a land of jaw-dropping landscapes and incredible wildlife Without warning, the wolf hurtled into view, bounding across the tundra, its silvery coat flashing, tongue flapping from terrible jaws. Whump! The animal slammed into the undergrowth as it pounced on a ptarmigan, a squat little bird a bit like a grouse. But in a flurry of feathers, the fowl made a miraculous escape, rocketing skyward while the wolf clutched at thin air. Everyone in the bus stared in silent thrill or snapped photos through the open windows. The wolf slunk away, hungry. Just a few turns farther down the lone road through Denali National Park, the driver of our public shuttle bus (no cars are allowed) switched off the engine and put a finger to his lips. A honey-coloured grizzly bear was 10m away, ripping wild blueberries off the bushes as it fattened up for winter. It was only 8am. By lunchtime we had also seen bald eagles, a caribou, a moose, a mother bear with her two gambolling cubs and, against a bluebird sky, Denali, the highest mountain in North America, which is often wreathed entirely in cloud, was snowcapped and glinting in the sunshine. I knew Alaska would be spectacular, but this was beyond my dreams. Americans call the vast, remote state the Last Frontier, and the few human residents seem almost irrelevant in this beautifully raw land, where the beasts and the weather are ostensibly in charge. Even here, though, the wildflowers, whales, grizzlies and glaciers need their havens from enthusiastic miners, oilmen — and tourists. Amazingly, Alaska is home to more than half of the area protected by federal law in the form of national parks in the US. The state has 17 national parks, including seven of America’s ten largest. Few people have even heard of the largest — Wrangell-St Elias National Park and Preserve in eastern Alaska, which is so huge it could swallow both of its better known cousins, Yellowstone and Yosemite. Oh, and Wales, twice over. It’s one of Alaska’s best-kept wilderness secrets in a place already big on the great outdoors. Denali National Park — renamed from the original Mount McKinley Park when President Obama gave the highest peak in North America its original native Alaskan title of Denali — also has some of the most rugged wilderness on Earth. It makes a great contrast too to my next stop, the coastal environment of Kenai Fjords National Park, a chilly oceanic paradise of ice and more fantastic animals about 125 miles south of Anchorage in Alaska’s south-central region. From the start it was spectacular — a boat trip out through the narrow channel of Resurrection Bay from the port of Seward, where snowy peaks and glaciers looked down on piercing blue waters populated by large families of sea otters. They floated on their backs in the sun, paws crossed on their chests and gorgeous furry faces looking up innocently at the humans, who thankfully these days are there to take their picture not their pelts, as they did in the old fur-trapper days. Eagles were everywhere, cruising overhead or sitting on the tall pines lining the headlands. To an Alaskan it’s about as common as seeing a blackbird, while we tourists were practically fainting with the novelty. Then there were flashes in the water. Several black and white Dall’s porpoises were leaping and darting in the boat’s foaming bow wave. They swam breathtakingly fast. Another flash, but this time in the air, tiny and bright. 
Puffins, flying past so swift and low to the water that they were tricky to follow with binoculars, but, if you lost track, there was always another close behind. Farther out into the open ocean, great clusters of the striped-beak cuties were tucked safely on the cliffs of rocky islands above jostling sea lions. Something else black and white but much, much larger approached. Killer whales, a pod of three or four, glided into view, diving and surfacing, then passing gracefully by.

I watched ice breaking away while sitting in a kayak almost beneath the glacier

What I was waiting for, however, were humpback whales. In Alaska's short tourist high season, from June to September, the chances of seeing humpbacks are good. On this occasion, however, I wasn't lucky. It was early June, so most of the whales were probably 500 miles to the south, in the "inside passage" archipelago region that adjoins British Columbia. This part of Alaska, which includes the state capital, Juneau, is the stamping ground for cruise holidays. But being more of a landlubber, I had planned only one short, spectacular boat trip, squeezed in between forays into the mountainous areas of the interior. There might have been no humpbacks, but there was a surprise not far away. Our day trippers' vessel chugged on, from the brisk wind of the open ocean into the shelter of a wide fjord full of small icebergs topped with seals, like cherries on an ice-cream sundae. At the back of the fjord was a towering glacier, its glistening blue-and-white face dropping sheer from the massive ice field that covers the top of the Kenai peninsula, into the deep bay where we now floated. Suddenly, an ear-splitting crack, like a thunderbolt or a rifle shot, rent the air and a block of ice the size of a lorry broke from the glacier and plunged into the sea. Then, boom, another big chunk splintered away and splashed down into the fjord. This wasn't global warming, although glaciers all over Alaska are retreating at an alarming rate because of it. We were lucky to be witnessing the natural process of a glacier "calving" icebergs when it reaches the sea. Later I stayed in a wilderness lodge in that very bay and watched more ice breaking away while sitting in a kayak almost underneath it. That was the day I saw a black bear and a freshwater otter too, and I wondered if I should move to Alaska . . . But, oh, the long, harsh winters. And Sarah Palin! Sticking to holidays, but getting even more remote, I flew into the interior, to the small, rustic, old mining settlement of McCarthy in Wrangell-St Elias National Park, in many ways keeping the best for last in this wild corner of the Last Frontier. The park is home to the continent's largest bunch of glaciers and so many peaks that it's known as the mountain kingdom of North America. Ironically, in a national park where nature is now protected, Wrangell-St Elias features the ghost town of a huge copper-mining operation that prospectors established at the turn of the 20th century and abandoned in the late Thirties. I even stayed in a guesthouse that was the old brothel. Now the area is all about back-country hiking, biking, fishing and kayaking for the few intrepid souls who venture this far. I donned crampons and crunched across ice for several hours with a local guide on one of the majestic glaciers creeping down from a bowl of mountains that hugged in on us. We peered up at their frost-shattered summits and down into the glowing blue, almost bottomless crevasses of the glacier.
It seemed an impossibly long way from civilization, and yet the ghost town clinging to the side of the valley above was a reminder of how far man will go for minerals and treasure.

Need to know
Joanna Walters was a guest of the State of Alaska Tourism Office (travelalaska.com)

How to get there
British Airways (ba.com) flies from Heathrow to Anchorage via Seattle from £1,004 return. Icelandair (icelandair.co.uk) flies from Gatwick or Heathrow to Anchorage, via Reykjavik, from £801 return. Motorhome hire through Great Alaskan Holidays (greatalaskanholidays.com) costs from £95 a day.

Where to stay
Hotel Captain Cook (captaincook.com), the best hotel in Anchorage with views from its restaurant to Denali, costs from £185 a night. Windsong Lodge, Seward (sewardwindsong.com), makes a good base for the Kenai Fjords National Park and has doubles from £110 a night. Aspen Haus, Healy (aspenhaus.com), 12 miles north of the entrance to Denali National Park, has cabins from £85 a night, with a minimum two-night stay. Ma Johnson's Hotel, McCarthy (mccarthylodge.com), is a mining-era hotel that's more fun than luxury, despite the price of £156 a night B&B. It is a good base from which to explore Wrangell-St Elias National Park and Kennecott ghost town. The 60-mile road from the nearest town of Chitina to McCarthy is so rough that many vehicle-hire firms exclude it from insurance cover. Park in Chitina, then ride or fly.

More information
Kenai Fjords Tours (kenaifjords.com) has wildlife and glacier day cruises from Seward from £68 a person.

More great summer adventures in Alaska

Cruise the Hubbard Glacier
Journey through the labyrinthine fjords and bays that make up the Inside Passage from the comfort of a cruise ship. The voyage, on Celebrity Infinity, begins in Vancouver and passes the magnificent Hubbard Glacier, before calling at the historic town of Icy Strait, where guests can learn about Tlingit tribal customs. On board, relax in the spa or stargaze on the new roof-top terrace. The price, from £2,689pp, includes return flights to Vancouver and seven nights' full board in an interior cabin based on a September 4 departure (0800 4414054, celebritycruises.co.uk).

Alaska and Hawaii twin trip
Combine ice and fire with a trip to the Alaskan wilderness and tropical Hawaii. Trailfinders (020 7368 1200, trailfinders.com) has a 14-night tour that takes in both states. The first seven days are spent exploring Anchorage's museums, bear watching, riding the Alaska Railroad and kayaking to the Aialik Glacier. Then fly to Hawaii, where activities include surfing at Waikiki Beach and a helicopter ride over the Big Island. The price of £5,699pp includes international and domestic flights, 14 nights' accommodation and some meals, and is valid for departures from June to August.

Hike in Denali National Park
Lace up your walking boots for a group-hiking adventure with World Expeditions (0800 0744135, worldexpeditions.co.uk). On the seven-day Denali Unexplored tour you will spend five days walking a 30-mile circuit through tundra and glaciers in Denali National Park, with unparalleled views of North America's highest mountain, Denali. A float plane will transport you over the Ruth Glacier and into the wilderness to begin your journey, which ends after a train transfer to Anchorage. The price, from £2,290pp, includes five nights' camping and one night in a lodge, plus all meals, with departures on August 21, 28 and September 4. International flights cost extra.
Ride the Alaska Railroad
See Alaska by train with the Alaska Railroad tour, which takes in some of the state's most mesmerising regions. The nine-day trip starts in Anchorage and heads south to Seward for a cruise through the Kenai Fjords National Park to spot sea otters and whales. Other highlights of this trip with Artisan Travel (01670 785085, artisantravel.co.uk) include a thrilling jet-boat ride through the rapids of Talkeetna and a riverboat tour around Fairbanks in a vintage boat. It costs £2,045pp, including transfers, nine nights' accommodation, some meals, Alaska Railroad standard-class travel and most activities. International flights cost extra. Tailor-made departures are available until September 3.
https://notanotherguide.com/grizzlies-moose-whales-untamed-alaska/
The liver is the body's second largest organ after the skin. It is responsible for many important functions in the body, ranging from the metabolism of fats, carbohydrates, and proteins to the production of clotting factors and the detoxification of toxins, free radicals, and drugs. Any disturbance in the function of the liver can cause liver disease, and loss of these functions can cause significant damage to the body. The circulation of blood works differently in the liver. For example, all the veins in the body transport blood from the peripheral organs to the heart; the portal vein, however, carries blood from the digestive organs to the liver for processing and filtering before it enters the general circulation. The same blood supply also delivers chemicals and enzymes that are required by the liver cells to produce protein, glycogen, and cholesterol for optimal body activities.

Liver disease is a broad term that covers conditions of the liver and its associated organs. The diseases and conditions that affect the liver can damage its structure or impair its functions of regulating metabolism, producing macronutrients and detoxifying drugs and chemicals. Many factors can affect the liver, including excessive use of medicines like paracetamol, alcohol abuse, infections such as hepatitis B and C, non-alcoholic fatty liver disease, iron overload, and tumors. Interestingly, the liver is an organ with strong regenerative properties, which is why most of the signs and symptoms of a liver disease appear only when 75%, or three-quarters, of the liver has become diseased or injured. Following are some common signs and symptoms that indicate that your liver is suffering from a disease or illness.

1. Jaundice

Jaundice is characterized by yellowing of the eyes and skin. It is not a disease, but a common symptom of several conditions that can be hepatic as well as non-hepatic. When the underlying cause is the liver, there is either too much production of bilirubin or too much accumulation of it in your body. Bilirubin is a green-yellow pigment that is produced as a result of the breakdown of red blood cells. In normal circumstances, it is the way by which your body gets rid of old and worn-out red blood cells. Jaundice develops when your liver is not metabolizing bilirubin the way it is supposed to. Sometimes there is an obstruction of bilirubin flow into the intestine, from where it is normally excreted from the body through the stools. In such cases, too much bilirubin accumulates in the liver, slowly diffuses into the blood circulation and thus results in yellowing of the skin. Yellowing of the skin and yellow-tinted eyes typically characterize jaundice. In some severe liver diseases, the whites of your eyes may also turn yellow or brown. Another characteristic of jaundice is dark urine or pale stools. If the underlying condition is infectious, such as viral hepatitis, you may also experience other associated symptoms like nausea, vomiting, and excessive fatigue. Jaundice in adults can be due to one of the following hepatic conditions:

- Cirrhosis (scarring of the liver, usually due to alcohol)
- Alcohol misuse
- G6PD deficiency
- Biliary (bile duct) obstruction
- Gallstones (pigment stones made of bilirubin or cholesterol stones made of hardened fat material)
- Hepatitis A, B, C, D, E
- Liver cancer

A majority of people may misdiagnose themselves when they experience jaundice. When jaundice develops, both your eyes and skin will turn yellow.
If there is only one symptom, i.e., either yellowing of the skin alone or of just one eye, it could be due to several other causes, such as too much beta-carotene in your body. Beta-carotene is a pigment commonly found in foods such as sweet potatoes, pumpkins, and carrots. An excess of this pigment can also result in yellowing of the skin; however, it does not cause jaundice.
https://betahealthy.com/10-warning-signs-and-symptoms-of-the-liver-disease-that-require-immediate-attention/
Lumina Intelligence, the insights division of William Reed, is keeping the industry informed of the latest insight covering the market and consumer trends impacting all channels of the UK hospitality and grocery retail sectors. Lumina supports businesses across the UK food and drink industry with data subscription solutions, reports and bespoke projects. Outside of this, it also works closely with the other divisions within William Reed and regularly shares free insight in a variety of formats, such as infographics, blogs, whitepapers, webinars and podcasts. To stay on top of the latest consumer and market trends impacting the sector and to receive these regular updates, please:
https://www.william-reed.com/follow-lumina-for-the-latest-data-and-insights-on-uk-food-and-drink/
Magyarország (the Hungarian name for their country) is derived from the Magyar tribe that settled in Hungary. The Magyar tribe was one of the many tribes united under the Huns. In basic terms, the name means "Hungarian country": Magyar for Hungarian, and ország for country (though that is not an exact translation). The name "Hungary" is probably a combination of the Hun empire and a mispronunciation of Magyar. The country has been known as Hungary, or the Kingdom of Hungary, since 1000 AD, and its capital is Budapest. The Hungarian name for the country, Magyarország, comes from the Magyar tribe, who were the original peoples of the region, while the name Hungary derives from Ugrian, a term for people who speak a Ugric language such as Hungarian.
https://www.answers.com/Q/Where_did_Hungary_get_its_name
DNV GL's Bjørn Kjærand Haugland is the new CEO of the climate network Norway 203040. "It is extremely motivating for me to lead this business-led climate initiative that gathers companies with the most ambitious climate commitments in Norway to take a leading role in the transition to the low-emission society," Kjærand Haugland said. "The business sector plays a crucial role in the transition we are facing. Norway 203040 has an industrial perspective. We will turn climate challenges into opportunities and develop green competitiveness." "We work with top management and see that gathering companies from different sectors drives innovation. Therefore, it has been crucial that several of the companies that have come the furthest have joined the coalition," said Kjærand Haugland, who also invites companies with climate ambitions to get in touch. Kjærand Haugland comes from the position of Chief Sustainability Officer at DNV GL, where he played a central role in driving company-wide sustainability initiatives and in supporting DNV GL to build a strong global sustainability position, focus and profile. Sustainability is today incorporated into DNV GL's overall business strategy and operations, and it stays at the core of DNV GL's vision of making a global impact for a safe and sustainable future. Having started in the company's Oil and Gas business area and held various roles and management positions across all business areas in Norway, Korea and China, Kjærand Haugland is a globally recognised voice within sustainability and has broad experience engaging business, government and academia.
Expanding the influence of the business-driven climate network
Norway 203040 is a business-driven climate initiative. Its purpose is to identify new business opportunities on the way to the low-emission society and to be a driving force in reaching Norway's climate goals by 2030. The initiative will highlight the business opportunities that exist in the transition to the low-emission society and point out the measures that are needed from business and from the authorities. Jens Ulltveit-Moe, CEO of industrial investment company UMOE and founder and chairman of Norway 203040, said: "We wanted to strengthen the outcome of the network by hiring a CEO. Our voice has not been strong enough, so we need to strengthen it. I am glad that Bjørn is now entering the role of CEO and I am confident that hiring him will lift us to new levels."
Commitment at top management level
Participants in the climate network are committed to engaging at the top management level. The companies have clear climate goals, and they are willing to walk the talk. Idar Kreutzer, CEO of Finance Norway and board member of Norway 203040, said: "The business community plays a crucial role in the transition to the low-emission society, and in the efforts to develop green competitiveness. Therefore, it is important that some of the companies in Norway that have come furthest have joined forces in the climate network Norway 203040. When Bjørn Kjærand Haugland now joins as CEO, it will further increase the power of the work and reinforce the best-practice sharing and targeted collaboration between the companies." The environmental organization ZERO and WWF (the World Wide Fund for Nature) contribute to the climate network by facilitating partnership and supporting the network's main objectives. ZERO is the host office for the climate network in the period 2019-2021.
https://www.eco-business.com/press-releases/dnv-gl-top-executive-to-lead-norways-business-sector-climate-network/
This study explored the influence of task-based language teaching (TBLT) on adolescents' second language (L2) learning in small group interactions. Specifically, it aimed to investigate how the performance of tasks affords non-native speakers (NNS) opportunities for language learning, especially as evidenced through their conversational moves and the development of student-generated scaffolding practices. Longitudinal data were collected in an urban setting with a total of forty-two participants who were identified as beginner-level learners of Spanish as a foreign language (FL). Two of the classes were divided into small focus groups of NNSs, and their classroom interactions were investigated for a total of six months. Quantitative data were collected through the administration of pre- and post-surveys and questionnaires; qualitative data were collected as audio and video recordings of interactions during tasks, student artifacts, observational field notes, and student introspections. In particular, three input-providing language tasks were administered to highlight students' sequential, interrelated conversational moves, as well as their lexical development resulting from scaffolding. The study's main findings are as follows: First, it suggests that TBLT is a useful pedagogical construct for the FL classroom. Specifically, it uncovered the effective engagement of students' language development through their strategic use of the tasks. Significantly, contrary to what has been suggested in the interactional literature, NNS/NNS scaffolding facilitated L2 lexical development by providing a context in which students asked questions, responded, facilitated comprehension, and elicited peer and teacher feedback. This study contributes to our knowledge of FL pedagogy, draws implications for practice, and extends classroom-based research into the investigation of using task-based methodology in an FL setting.
https://ubir.buffalo.edu/xmlui/handle/10477/50592
During a training course on geopolitics, just a few days after the Paris terrorist attacks, I was asked whether there are "geopolitical advantages and disadvantages" for countries and, if so, which are the disadvantages for Romania, considering the current challenges Europe is facing. My reply was that there are no advantages and disadvantages – there are only geopolitical imperatives. When analyzing those, we look at the geography shaping the human society in a particular location and the way borders were set throughout history. We focus on the nation states and on the fundamentals shaping their behavior: politics, economics and security. Events that may influence any of the three fundamentals are capable of having an impact on states' behavior. Geography explains the opportunities and the limitations of a country, as well as its priorities and needs. States make the most of their geography in times of peace – when external threats are at a minimum. This is when coherent governing, focused on realistic goals, is important. When there is no major international conflict, the political game supports, in theory, the consolidation of the national economy. During peacetime, countries influence one another by establishing economic dependency links. Security should be an important beneficiary of the economic advances: national strategy should ensure that economic development also builds increasing independence and innovation, through technological progress that is also invested in the security field. However, while technological progress is at a high during peacetime, defense and security budgets get cut, because there is a perception that peace will remain for the long term. The tragic events in Paris highlight that perception: France, like other Western countries, thought peace was here to stay, and cuts in the defense budget therefore seemed normal. As the attacks occurred, the perception changed radically. French President Hollande has said repeatedly during the last days that France is at war. He did not only mean that France is getting more involved in Syria (which it is), but also brought forward the need for increased security spending in France: "the security pact prevails over the stability pact". This is not true only for France, but for all the EU member states. The terrorist attacks as well as the refugee crisis are both good arguments for countries to increase their security spending. In its latest release, the EU Commission already noted that the budgetary impact of the exceptional inflow of refugees would be taken into account when assessing possible deviations from the deficit rules for 2015-2016. This is how spending plans in Italy, Lithuania or Austria, initially criticized by the Commission for violating the bloc's rules, may actually get Brussels' nod. And thus, the current crisis could become the reason for increased spending in the EU, something that will certainly affect European economics. The Paris attacks have also raised a very important political question relating to the future of the EU: how will the Schengen Agreement be affected? In the short term, we've seen France, Germany, Sweden and Slovenia re-establishing border controls. We've also seen interesting discussions, originating in the Netherlands, about the establishment of a mini-Schengen area between the Netherlands, Belgium, Germany and Austria. While it is only an idea, it highlights an ongoing regionalization trend.
Nationalism was on the rise before the attacks – in their aftermath we could see the rise accelerating. The policy changes that may follow in asylum regulation in Germany will likely be followed by political changes, with Merkel having to admit some of her mistakes in front of the voters. In light of the recent developments, Turkey's relationship with the EU will become even more important, considering all the leverage that goes to Ankara post-Paris attacks. While military reactions towards Syria are still expected from nation states, they will remain limited: airstrikes will likely intensify in the short term, and Italy and the UK will join in alongside France. But no troops will be sent to Syria. Germany will continue to be against any form of military intervention in Syria and will push for a diplomatic solution. All in all, European countries could be more open to accommodating Russia on Syria. This may open the door for cooperation on other issues, like Ukraine. This has been one of the Russian goals from the very beginning – but it is still unclear how successful they'll be without giving in to the implementation of the Minsk agreement, as the Europeans have been demanding. This is a time when geopolitical risk is rising – a time of change and of reset for alliances that have slowly been forming within the borders of the West and East.
Author: Antonia Colibasanu
https://antoniacolibasanu.blogactiv.eu/2015/11/19/the-paris-pivot-geopolitical-imperatives-in-europe/
Course Objective
At the end of the course the student is able:
- To understand what institutions are and how institutions impact the behaviour of economic actors.
- To understand the institutional aspects of microeconomics in general, i.e. how effective institutions can stimulate the functioning of markets and decrease market failures such as market power, imperfect information, externalities and public goods.
- To situate the New Institutional School within the broader context of the history of economic thought.
- To think conceptually, in terms of theory, analyzing questions from different perspectives and identifying links between seemingly different problems.
Course Content
All economic activity takes place within a framework of institutions that constrain individual behavior and thereby affect resource allocation, income distribution and economic growth. This course introduces recent approaches in the field of 'New Institutional Economics'. The course consists of three parts. We begin with studying methods and fundamental concepts (what are institutions, property rights, transaction costs, agency costs, information costs, power, etc.). Next, we analyze the development of the institutional environment, or 'rules of the game', that guide individual behavior. These are both formal, explicit rules (like constitutions, laws and property rights) and informal, implicit rules (like social conventions and norms). We conclude the course by studying specific institutional arrangements with applications to individuals, markets, firms and the State (e.g., marriage, money, trust, speculation and herd behavior, morality and corruption).
Teaching Methods
Lectures + Tutorials
Method of Assessment
- Exam (50%)
- Paper (50%)
- Seminar assignments (pass/fail)
Literature
A selection of chapters from Samuel Bowles (2006), Microeconomics: Behavior, Institutions and Evolution, and a reader containing a selection of seminal papers.
Target Audience
Second year PPE students
Additional Information
Please note that participation in the seminars is mandatory.
Custom Course Registration
There is a slightly different enrollment procedure for this module. The standard procedure of the Faculty of Humanities has students sign up for (i) the module, (ii) the form of tuition (lecture and/or preferred seminar group), and (iii) the exam. However, for this module the instructor will assign the students to the seminar groups. Therefore, students should sign up for (i) the module, (ii) the lecture and (iii) the exam, but not for the seminar groups.
Recommended Background Knowledge
Mandatory courses of the PPE specialization Track 3: Economics
General Information
Course Code: W_JSM_218
Credits: 6 EC
Period: P5
Course Level: 200
Language of Tuition: English
Faculty: Faculty of Humanities
Course Coordinator: dr. R.I. Luttens
Examiner: dr. R.I. Luttens
Teaching Staff: dr. R.I. Luttens
Practical Information
You need to register for this course yourself. Last-minute registration is available for this course.
Teaching Methods: Lecture, Seminar*
*You cannot select a group yourself for this teaching method; you will be placed in a group.
Target audiences: This course is also available as:
https://studiegids.vu.nl/en/2019-2020/courses/W_JSM_218
I play the guitar and have dabbled with the piano and this is a question that’s always bugged me. On my guitar it is trivial to switch keys, especially if I use a capo. Why is the piano designed so that the white keys are the key of C? This makes it much harder for neophytes like me to play in different keys. Wouldn’t it have been more useful to have a piano with all the keys the same? The white keys are easy to hit, because they’re wider than the black keys, and stick out in front. The black keys are easy to hit, because they stick up above the white keys. Make all the keys the same, and something’s got to give. Besides, with all of the white keys exactly forming one of the major scales, it at least makes songs in that key easy to play. If all the keys were the same, then you’d be perpetually hitting accidental accidentals even in C. There are some pianos that have a “crank” to shift the entire piano changing the keys (meaning “C” may become “D” or “F#”), and if I had to guess there’s probably electronic keyboards that do the same nowadays for cheaper. I think you’re getting a little confused though, it’s in the key of C because it means the button corresponding to “C” written on a page sounds as a C. String instruments in general are in the key of C. The instruments that aren’t in C are wind instruments with fingerings, because the fingering corresponding to a written C don’t sound as a C, the reason for this is so players (especially brass) can easily switch instruments and use the same fingerings they’re accustomed to without having to retrain which fingerings mean what note. Even if the keys were broken up differently, it’d still be in C. The only way it’d be in something else is if you had another piano with the same division of keys, but each key sounded a whole step lower but the music was written the same (Key of Bb). The answer as to why the white keys correspond with the notes in the C scale is because it makes it a hell of a lot easier to distinguish octaves, and just navigate in general, could you imagine finding the note “D” just by looking at an undivided piano? Never mind not hitting the wrong note during a concert because you can’t “feel” your intervals. Muscle memory will help to an extent, but this is also why string instruments have dots and (in the case of the guitar) frets, to tell where intervals are. It’s just not practical for an instrument with such a range to do that. C just happened to be the key with no sharps or flats and sharps and flats seemed like the best way to distinguish keys. Also, since the black keys are raised it makes them easier to hit when you don’t want to hit white keys. I’d also assume some keys and intervals were more common when it was invented so it was structured to make those easier to finger, but don’t quote me on that one. The more interesting questions I suppose are “why didn’t we just letter from A - L and not have sharps and flats to distinguish strangely like this” or “why is C the one with no sharps? Why not A?” But those are pretty tangential. My guess: Try playing a chord that includes two notes an octave apart (e.g., middle C and high C.) Assuming you have normal-sized hands, it’s not too hard. Now, imagine if your hand had to span 13 white keys instead of 8. These are not satisfying answers! =P Any easy solution would be to have markings on the keys but leave all the keys the same size. On my guitar there are dots on the fret board to denote, for example, the 5th and 12th fret. 
If the size of the keys is a problem then make them a bit smaller. Another compromise would be to have alternating black and white keys; then you'd only have to memorize two (piano) key patterns to cover all the musical keys. As it currently is, you have to memorize a different pattern for each musical key. It's like the design of the piano is made purposefully baffling. Imagine if your frets were twice as big as they are now, and you had to play some really gnarly chords. Even if you had visual markings on the keys, playing large chords would be impossible unless you had gigantic hands. Further, making all the keys the same means you have to look at the keyboard to see where you are. A person sight-reading, or a blind guy, needs tactile feedback to know where they are on the keyboard. Yeah, just to play an octave in one hand on piano, you'd need what is, on the currently sized keys, about an octave and a half reach. 7/12 of the tones are white keys; 8/13 if you count the octave. I don't play piano, which makes it extra sickening that Stevie Wonder et al are so uber good at it. Maybe they practice so much that they can't lose their place. But having some black keys as guideposts must help as well. You don't have to memorise a different pattern for each key, it's exactly the same. Different collection of notes of course, but that's a given. But it's the exact same pattern for every key. Intervals as follows, 0 being the root note: 0, 2, 2, 1, 2, 2, 2, 1. This is the same pattern whatever root note you use. Now because of the nature of the acoustic piano, the way they're built, a key had to be chosen for the white notes. C is a popular key, especially for nursery rhymes, and it corresponds with the way musical notation works (having no sharps or flats until you add them to change the key). Or…was music notation designed around the piano? In any case, a key had to be chosen for the white notes. Electric keyboards have the function to change the key so you can play only in the white notes. But why not ask the same question about the guitar? Why is E, A, D, G, B, E so common? Why those notes? The simple answer is that it makes it easier to play the most chords. Same with the piano: it's the best layout for ease of playing in any key. It looks illogical at first glance though: why not have the notes go A, B, C, D, E, F (and maybe add a seventh string, G)? Well, if you try to tune your guitar that way, you'll find chords incredibly difficult to play. It's the same principle on the piano really. Once you get the hang of it, a key is a key is a key, be it white or black, so the piano is not really in any particular key. There's a civil rights joke in here… somewhere. The question of how the piano keyboard came to be is actually a fairly good one, but unfortunately, its genesis isn't quite clear. The piano is of course a fairly recent instrument, and the real question isn't about it being in the key of C, but rather why it is based on a diatonic scale (rather than a chromatic one), so the question should more properly be: why is the organ keyboard the way it is? The organ appears to go back to at least Roman times. We know that Roman organs were controlled by sliders that blocked and opened the pipes. In the early medieval period, these sliders were connected to lever-like keys that were pushed with the hand or fist. 13th century art depicts organs with a very limited range (a little over an octave) and a sequence of identical keys.
The limited range of the instrument was due to the fact that wind from the bellows could not be efficiently distributed to many pipes. These organs were likely only able to play the notes of the diatonic scale. With a limited number of pipes available, this was the best arrangement for playing the music of the times. Eventually, though, technical advancements allowed the number of pipes to grow. At the same time, changing musical tastes led to accidentals being added to the existing diatonic scale. These chromatic keyboards had a second row of keys placed above the natural keys. However, as you can see here, the layout of the accidentals wasn't necessarily the same as on a modern keyboard. Eventually, the current layout seems to have been fixed in the 15th century. The size of the keys wasn't normalized until much later. Here's a very nice video of Bernard Foccroulle playing a 1688 organ; notice how small the keys are. Why weren't the new keys added alongside the existing ones? A good reason might be that this would have been confusing for organists trained with diatonic keyboards. Another explanation might be that these new notes were accidentals, and the diatonic scale was still the foundation of the music. Note that the notion of "musical key" hadn't been invented yet, so transposition was not an issue. In the 19th and 20th centuries, some composers, notably Schoenberg, complained about the limitations of the piano keyboard layout. It makes playing in certain keys much easier than in others. It also gives the diatonic scale a central role that some thought archaic. Some have also argued that the layout is not ergonomically optimal. There were attempts at coming up with a better keyboard. The most famous one was the Janko keyboard invented in 1882. It's designed so that the fingering stays exactly the same no matter what key you play in. It never caught on, however, for the same reason that the Dvorak keyboard and spelling reform never caught on either: too many people with too much time invested in the old ways. Classical piano player chiming in here. I've found that key signatures with more sharps or flats are much easier to play in than the key of C. Actually, the more sharps or flats, the easier it is to play. When I play predominantly on black notes, it is much, much, much easier to feel my way around the keyboard. Anything halfway challenging that is played on only white keys takes much more practice to be able to play with no mistakes. As others have said, playing only on white keys gives you no tactile feedback. The only thing that is slightly easier in the key of C is sight-reading. On a hunch, I went looking for an electronic version of the Janko keyboard (of which I'd not heard before). Found one called a Chromatone. Kinda fun to watch. Skip the first video. Great post, jovan. Thanks! There are a few Chromatone videos on YouTube, but I'd like to see and hear a real Janko piano. So far, I've only seen photographs. There's no law that says the guitar has to be tuned to those. Slide guitar is tuned open…either E B E G# B E or D A D F# A D or others. Though not a slide song, here's Cat Stevens' "If I Laugh." It uses D A D F# A D. I bet it can't be as easily played with standard tuning. Here's Steve Miller on acoustic/The Joker: he's fingering chords in the key of G…but it's in the concert key of F. He's tuned down a step. That gives that nice rattle. It would be a lot harder to get those little riffs if he were in F. Anyway, you probably could tune the piano up a step, down a half step, etc.
Maybe that would alter the tone, giving the rattle you hear in Steve's guitar when playing strings that are too loose. You could also re-design the "harp" inside. Ebony… and… Ivory… live together in perfect… Dammit, WarmNPrickly, now I'm gonna have that song in my head! Something to think about: what key are you playing in if you play only the black keys? An interesting video: Pentatonic Spirituals on the Black Keys. There are transposing keyboards. There was a famous songwriter, can't think of it right now (maybe Hoagy Carmichael), who always composed on the black keys. It's a hilarious question to me, because I took decades of piano lessons and now have taken up the guitar, which makes absolutely totally no sense whatsoever.
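To make the interval-pattern point raised earlier in the thread concrete (the major scale is always 2, 2, 1, 2, 2, 2, 1 semitone steps from the root, whatever the root is), here is a purely illustrative sketch in R; the chromatic vector and the major_scale helper are made up for this example and are not from the thread.

chromatic <- c("C","C#","D","D#","E","F","F#","G","G#","A","A#","B")
major_scale <- function(root) {
  # cumulative whole/half-step pattern from the root (root must use the sharp names above)
  steps <- cumsum(c(0, 2, 2, 1, 2, 2, 2, 1))
  chromatic[(match(root, chromatic) - 1 + steps) %% 12 + 1]
}
major_scale("C")  # "C" "D" "E" "F" "G" "A" "B" "C"
major_scale("G")  # "G" "A" "B" "C" "D" "E" "F#" "G"

Whichever root you choose, the step pattern is identical; what changes is only where those steps land on the white and black keys.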
https://boards.straightdope.com/t/why-is-the-piano-in-the-key-of-c/484783
Q: Is it possible for certain people to perceive colors differently? What if someone perceives a color as 'red' when it is actually 'green'? Since different people have preferences for different colors, and colors are perhaps constructed in the mind, is it possible that the entire space of colors can be experienced quite differently for some people? A: Yes, this scenario is possible, occurring with certain cases of brain lesions in specific areas of the visual cortex: the fusiform, lingual and posterior parahippocampal gyri. These areas are analogous to what is referred to in primates as V4, or the 4th visual cortex, and are known to be involved (at least partly) in the perception of color (though see this commentary). (Figure of the relevant cortical areas omitted; image courtesy of Wikipedia.) In this study, subjects with brain lesions were asked to identify colors in a blue ambient light (meant to mimic the outdoors) and under a reddish light (meant to mimic indoor lighting). ... the three patients...with abnormal matches on the blue, green and red chips made different matches than control subjects for pink, violet, turquoise and green targets... Their matches for these chips were shifted towards orange-red for the pink and violet targets and towards green for the turquoise target. One patient displayed a shift toward turquois[e] for the green target. So, the level at which this "signal" is getting crossed can only be inferred from the level of the lesion, but presumably the transduction of color by the cones and the processing of the colors in the P and K cells of the lateral geniculate are all still intact. Clarke, S., Walsh, V., Schoppig, A., Assal, G., & Cowey, A. (1998). Colour constancy impairments in patients with lesions of the prestriate cortex. Exp Brain Res 123:154-158. A: As @Gray mentioned, the philosophical problem you are interested in is known as the inverted spectrum. Unfortunately, @Gray's claim about no empirical difference is not exactly true. As @ChuckSherrington pointed out, we can have differences in color perception due to brain lesions, but this is cheating in a way. We don't have to go this far; we already have differences in color perception between neurotypical people based on their culture/language. More dramatically, we can observe this difference within a single individual! As I explained in a previous answer: physics does not have colour, it just has a continuous spectrum of wavelengths. Even when you look at the sensitivity of the 3 types of cones in the retina, it is not discrete, but continuous. The categories of colours (i.e. "that's red", "that's blue") are produced by perception, and these discrete-ish categories form the basis of colour qualia. Scientists can study these categories by asking participants if various stimuli feel like the same colour. The arbitrary boundaries of the categories people draw between colours are language dependent (Regier & Kay, 2009)! In other words, we have support for the Whorf hypothesis: language affects your subjective conscious experience. But the buck doesn't stop there. Gilbert et al. (2006) showed that the Whorf hypothesis is supported in the right visual field but not the left. In other words, when I present colours in one part of your visual field, you experience them one way, and when I present them to the other then you experience them in a fundamentally different way.
Thus different colours can be experienced differently by different neurotypical people, and in fact they can be experienced differently by the same person based on which visual field the stimulus is presented in. Further, this difference is empirically measurable! Of course this doesn't resolve the inverted spectrum problem completely, but that is to be expected since philosophy always has a way to run away from science. A: I would point you towards the debate on qualia in cognitive science. It has been argued by some philosophers, such as David Chalmers, that there are internal qualitative states separate from their physical realization. http://en.wikipedia.org/wiki/Inverted_spectrum With the exception of color-blindness and other differences in visual circuitry such as tetrachromacy, there would be no empirical difference in the response of such a person with differences in their qualitative experience of color. So it's an interesting question whether such differences in qualia exist or are even meaningful to talk about.
The last decade has seen a surge of interest in historical fiction. Led by Hilary Mantel’s Wolf Hall and Bring Up the Bodies—novels that chronicle the rise to power of Thomas Cromwell (1485–1540) in the court of King Henry VIII—these stories have dominated bestseller charts and shortlists for literary prizes. Meanwhile, public appetite for their adaptations on stage and screen continues to grow. Many historians and literary scholars welcome this trend, contending that historical fiction gives readers new ways of understanding historical experience and encourages them to engage with history in more critical ways. Others argue that, in an era of fake news, we should keep facts separate from fiction. A conference I’m convening at The Huntington takes the recent popularity of the historical novel as a starting point to explore relationships between various calibrations and understandings of history and fiction. Titled “Fictive Histories/Historical Fictions,” it takes place on May 12 and 13 in Rothenberg Hall. The conference delves into the connection between history and fiction from many different angles. We will look at the boundaries that might exist between them, and in what ways they overlap, considering the intrinsic ethical and political implications. We will also examine whether the recent success of historical fiction can be viewed as a new development, or rather, should be seen as a return to (or inflection of) an older literary tradition. Another angle will be to study whether this success poses any danger to academic history and literary historicism. Or might it rather offer opportunities to bring critical and creative approaches together to develop new lines of thought and practice? Finally, we will look at how creative and critical manifestations of history respond to cultural and political imperatives of the 21st century. The papers of Hilary Mantel, who has been at the forefront of the renaissance of historical fiction, are housed at The Huntington, which began acquiring her papers in 2001 and continues to receive additional material. The collection contains more than 1,300 items, including literary manuscripts, correspondence, photographs, and ephemera. Reflecting these strengths of The Huntington’s collections, the focus of the conference will be the interplay between fictional histories and historical fiction written in or about Britain—and Mantel’s writing in particular. This topic will be embedded and addressed within wider international, methodological, and generic contexts. The creative-critical focus of the conference will be replicated in its line-up of speakers and in the forms of their presentations: historians and literary scholars will speak alongside novelists. (Indeed, many of our speakers perform more than one of these roles simultaneously and will reflect on that experience). The first day will focus on calibrating new understandings of the relationship between the creative and the critical, and the second will focus in a more granular fashion on the ways in which historical fiction, past and present, frames and articulates these relationships. Hilary Mantel, whose work inspired the conference, will be delivering two plenary sessions over the course of three days. On Thursday, May 11, at 7:30 p.m., she will deliver the Ridge Lecture, “I Met A Man Who Wasn’t There,” in Rothenberg Hall. (The lecture is already sold out, but you may watch it in real time on Livestream.) 
As she works to the conclusion of the Cromwell trilogy that began with Wolf Hall, Mantel will describe her 10-year effort to pin to the page her compelling and elusive subject. On Saturday, May 13, at 4:15 p.m., she will reflect further on her own work and the themes of the conference in a conversation with Mary Robertson, a Tudor historian and former curator of British manuscripts at The Huntington, to whom Mantel dedicated Wolf Hall. (This event is restricted to conference attendees.) You can read more about the conference program and registration on The Huntington’s website. In conjunction with the conference, The Huntington is displaying two items of special interest for readers of historical fiction in the East Foyer of the Library’s Main Exhibition Hall through Monday, May 8. One page of Hilary Mantel’s notes for Wolf Hall is on view. (Another page of notes may be viewed nearby in the “Library Today” gallery.) Also on display is a first edition of The Scottish Chiefs (1810), by Jane Porter, a best-selling British novelist of the early 19th century. The Scottish Chiefs, which tells of the exploits of William Wallace (1270–1305), a leader of the Wars of Scottish Independence, is one of the earliest examples of the historical novel. The Huntington holds the archive of Jane Porter’s papers. Sophie Coulombeau is lecturer in English Literature at Cardiff University.
http://huntingtonblogs.org/2017/05/fictive-histories-and-historical-fictions/
If a country's Gross Domestic Product increases each year, but so does the percentage of its people deprived of basic education, health care, and other opportunities, is that country really making progress? If we rely on conventional economic indicators, can we ever grasp how the world's billions of individuals are really managing? More than a century after Hartley Withers's "The Meaning of Money" and 80 years after Keynes's "Treatise on Money", the fundamentals of how banks create money still need explaining, and this book meets that need with clear exposition and expert marshalling of the relevant facts. Economics, Culture and Social Theory examines how culture has been neglected in economic theorising and considers how economics could benefit by incorporating ideas from social and cultural theory. Experimental economists are leaving the reservation. They are recruiting subjects in the field rather than in the classroom, using field goods rather than induced valuations, and using field context rather than abstract terminology in instructions. In this book, the author critically examines a number of socialist proposals that have been put forward since the end of the Cold War. It is shown that although these proposals have many merits, their inability effectively to incorporate the benefits of information technology into their models has limited their ability to solve the problem of socialist construction. The final section of the book proposes an entirely new model of socialist development, based on a "needs profile" that makes it possible to convert the needs of large numbers of people into data that can be used as a guide for resource allocation. This analysis makes it possible to rethink and carefully specify the conditions necessary for the abolition of capital and consequently the requirements for socialist revolution and, ultimately, communist society. Thomas Piketty's Capital in the Twenty-First Century is the most widely discussed work of economics in recent history, selling millions of copies in dozens of languages. But are its analyses of inequality and economic growth on target? Where should researchers go from here in exploring the ideas Piketty pushed to the forefront of global conversation? A cast of economists and other social scientists tackle these questions in dialogue with Piketty, in what is sure to be a much-debated book in its own right. Work defines who we are. It determines our status and dictates how, where and with whom we spend most of our time. It mediates our self-worth and molds our values. But are we hard-wired to work as hard as we do? Did our Stone Age ancestors also live … Those who control the world's commanding economic heights, buttressed by the theories of mainstream economists, presume that capitalism is a self-contained and self-generating system. Adam Smith and Karl Marx recognized that the best way to understand the economy is to study the most advanced practice of production. Today that practice is no longer conventional manufacturing: it is the radically innovative vanguard known as the knowledge economy. In this book, the author, Intan Suwandi, engages with the question of imperialism through the specific channel of Global Value Chains.
A Theory. From bestselling writer David Graeber, "a master of opening up thought and stimulating debate" (Slate), a powerful argument against the rise of meaningless, unfulfilling jobs and their consequences. Does your job make a meaningful contribution to the world? In the spring of 2013 David Graeber asked this question … In this new book Smith returns to Solow's classic productivity paradox, which essentially states that we can see automation everywhere, like the spheres of leisure, sociality and politics, but not in the productivity statistics. He examines why labor-saving automation in the service age in the Global North has … Framing borders as an instrument of capital accumulation, imperial domination and labor control, Walia argues that what is often described as a migrant crisis in Western nations is the outcome of the actual crisis of capitalism, conquest and climate change. This book shows the displacement of workers in the global … Aim of this intensive workshop is 1.) to introduce the participants to the macroeconomic workings of the climate crisis as the background of sustainable finance; 2.) to introduce financial assets with ESG (Environmental, Social and Governance) criteria attached to them and their markets and important institutional players; 3.) to provide a critical perspective on the current setup of sustainable finance; 4.) and to work on in-depth case studies illustrating the workings of ESG-finance markets, their emitters and traders, as well as their macroeconomic implications. Popular anger against the financial system has never been higher, yet the practical workings of the system remain opaque to many people. The Heretic's Guide to Global Finance aims to bridge the gap between protest slogans and practical proposals for reform. Aim of this intensive workshop is to understand the macroeconomic workings of climate change as the background of sustainable finance; to analyse financial assets with ESG (Environmental, Social and Governance) criteria attached to them and their markets and important institutional players; to develop a critical perspective on the current setup of sustainable finance; and to synthesise this knowledge by applying it to in-depth case studies. The documentary proceeds along the lines of Karl Marx' biography, inquiring into his workings as a journalist, social scientist, revolutionary and historian and his travels through Europe. In chronological order, historical events such as the 1848 revolution or the Paris Commune, as well as concepts such as dialectics, the labour theory of value or the reform-revolution debate, are revisited. The documentary is narrated by John Kenneth Galbraith and by an actor, who plays Marx and recites quotes from his writings. First, historical instances of colonialism such as the crusades are revisited. Then a lengthy account of the colonial experience of the Spanish Kingdom in South America and of the British Empire in India is given. The Indian case is illustrated with large amounts of archival materials from a colonial administrator. There the workings of the colonial bureaucracy and law and its (positive) achievements, as well as the ignorance and arrogance of the external rulers, are demonstrated. After narrating the Indian independence to some depth, some recent colonial wars (Algeria, Vietnam, Congo, Angola) are briefly examined. In the end, the impact of colonialism on current, i.e. 1970s, (economic) international relations is discussed. The general tenor is that colonialism is a dysfunctional system.
Still, agency is mostly placed with the empire rather than with the ruled. Adam Smith's The Wealth of Nations provided the first, most influential and lasting explanation of the workings of modern economics. But with his focus on "the market" as the best mechanism for producing and distributing the necessities of life, Smith's concepts only told part of the story, leading to flawed economic models that devalue activities that fall outside of the market's parameters of buying and selling. One hundred years ago the idea of 'the economy' didn't exist. Now, improving the economy has come to be seen as perhaps the most important task facing modern societies. Politics and policymaking are conducted in the language of economics, and economic logic shapes how political issues are thought about and addressed. As seen with the United Nations' significant promotion of the Sustainable Development Goals (SDGs) in the past few years, the issue of global development is of growing concern to many international organizations. As humanity continues to become more interconnected through globalization, the inequalities and injustices experienced by inhabitants of impacted countries become increasingly clear. While this issue can be observed in the papers of different types (e.g., different schools of thought) of economists throughout the world, the work of behavioral and complexity economists offers a unique, collaborative perspective on how to frame decisions for individuals in a way that can positively reverberate throughout society and throughout time. Nathan Tankus created this series to introduce people outside of the inner financial circles of professionals, journalists and policymakers to the basic mechanisms and dynamics of monetary policy. First published in 1983. A collection of papers directed at those outside the field of Economics, to open up discussions around the scientific worth of Economics. Some economic events are so major and unsettling that they "change everything." Such is the case with the financial crisis that started in the summer of 2007 and is still a drag on the world economy. Yet enough time has now elapsed for economists to consider questions that run deeper than the usual focus on the immediate causes and consequences of the crisis. In this clear and accessible book, an eminent political scientist offers a jargon-free introduction to the market system for all readers, with or without a background in economics. A Study of Capitalist Rule: this book aims at presenting and assessing imperialism as a theoretical concept. It aims to provide a comprehensive evaluation, focusing specifically on the tension between Marx's theoretical system of the Critique of Political Economy and the theories of capitalist expansion and domination. For over … The recent financial meltdown and the resulting global recession have rekindled debates regarding the nature of contemporary capitalism. This book is intended as a textbook for a course in behavioural economics for advanced undergraduate and graduate students who have already learned basic economics. The book will also be useful for introducing behavioural economics to researchers. Unlike some general audience books that discuss behavioural economics, this book does not take the position of negating traditional economics completely. This book is a collection of Steve Keen's influential papers published over the last fifteen years.
The topics covered include methodology, microeconomics, and the monetary approach to macroeconomics that Keen - along with many other non-mainstream economists - has been developing. The Austrian tradition in economic thought had a profound influence on the development of post-war economics including neoclassical orthodoxy, game theory, public choice, behavioral economics, experimental economics and complexity economics. This collection of previously published and new papers is a major intervention in the on-going debate about the nature and future of economics. Instead of the present deductivist-formalist orientation of mainstream economics, Lars Syll advocates for the adoption of a more pluralist approach to economics, arguing for more realism and relevance with less insistence on mathematical modeling.
https://www.exploring-economics.org/en/search/?q=Working+Paper&page=7
Q: Getting Factor Means into the dataset after calculation I am trying to create a normalization value for a variable I am working with based on individual conference means and SDs. I found the conference means using the function: confavg=aggregate(base$AVG, by=list(base$confName), FUN=mean) And so after getting the means for the 31 conferences, I want to go back and for each individual player put these means in so I can easily calculate a normalization factor based on the conference mean. I have tried to create large ifelse or if statements where confavg is the conference average. ifelse((base$confName=="America East Conference"),confavg[1,2]->base$CAVG,0->base$CAVG) but nothing works. Ideally I would want to take every player and say: Normalization = (player average - conference average)/conference standard deviation How should I go about doing that? edit: Here is some sample data: AVG = c(.350,.400,.320,.220,.100,.250,.400,.450) Conf = c("SEC","ACC","SEC","B12","P12","ACC","B12","P12") Conf=as.factor(Conf) sampleconfavg=aggregate(AVG, by=list(Conf), FUN=mean) sampleconfsd=aggregate(AVG, by=list(Conf), FUN=sd) So each player would have their average - the conference average / sd of conference so for the first guy it would be: (.350 - .335) / 0.0212132 = 0.7071069 but I am hoping to build a function that does it for all people in my dataset. Thank you! edit2: Alright the answer below is amazing but I am running into (hopefully) one last problem. I want to basically do this process to three variables like: base3=do.call(rbind, by(base3, base3$confName, FUN=function(x) { x$ScaledAVG <- scale(x$AVG); x})) base3=do.call(rbind, by(base3, base3$confName, FUN=function(x) { x$ScaledOBP <- scale(x$OBP); x})) base3=do.call(rbind, by(base3, base3$confName, FUN=function(x) { x$ScaledK.AB <- scale(x$K.AB); x})) Which works but then when I search the datafile like: base3[((base3$ScaledAVG>2)&(base3$ScaledOBP>2)&(base3$ScaledK.AB<.20)),] it resets the Scaled K.AB value and doesn't use it as part of the parameters of the search. A: Here is an example to scale iris$Sepal.Length, within groups of iris$Species: scaled.iris <- do.call(rbind, by(iris, iris$Species, FUN=function(x) { x$Scaled.Sepal.Length <- scale(x$Sepal.Length); x } ) ) head(scaled.iris) ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species Scaled.Sepal.Length ## setosa.1 5.1 3.5 1.4 0.2 setosa 0.26667447 ## setosa.2 4.9 3.0 1.4 0.2 setosa -0.30071802 ## setosa.3 4.7 3.2 1.3 0.2 setosa -0.86811050 ## setosa.4 4.6 3.1 1.5 0.2 setosa -1.15180675 ## setosa.5 5.0 3.6 1.4 0.2 setosa -0.01702177 ## setosa.6 5.4 3.9 1.7 0.4 setosa 1.11776320 Edit: Using your sample data (Conf and AVG only): d <- data.frame(Conf, AVG) dd <- do.call(rbind, by(d, d$Conf, FUN=function(x) { x$Scaled <- scale(x$AVG); x})) # Remove generated row names rownames(dd) <- NULL dd ## Conf AVG Scaled ## 1 ACC 0.40 0.7071068 ## 2 ACC 0.25 -0.7071068 ## 3 B12 0.22 -0.7071068 ## 4 B12 0.40 0.7071068 ## 5 P12 0.10 -0.7071068 ## 6 P12 0.45 0.7071068 ## 7 SEC 0.35 0.7071068 ## 8 SEC 0.32 -0.7071068
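For readers hitting the same problem, a compact alternative is base R's ave(), which applies a function within each group and returns a vector already aligned with the original rows, so no do.call(rbind, by(...)) and no row reordering is needed. This is an illustrative sketch rather than part of the original answer; the zscore helper is invented for the example, and the column names OBP and K.AB are simply the ones mentioned in the question's edit.

# Sample data from the question
d <- data.frame(Conf = c("SEC","ACC","SEC","B12","P12","ACC","B12","P12"),
                AVG  = c(.350,.400,.320,.220,.100,.250,.400,.450))

# z-score within each conference: (value - group mean) / group sd
zscore <- function(x) (x - mean(x)) / sd(x)

# ave() returns the per-group results in the original row order
d$ScaledAVG <- ave(d$AVG, d$Conf, FUN = zscore)

# The same pattern extends to several columns on the full data set,
# e.g. for base3 with columns AVG, OBP and K.AB:
# for (col in c("AVG", "OBP", "K.AB")) {
#   base3[[paste0("Scaled", col)]] <- ave(base3[[col]], base3$confName, FUN = zscore)
# }

Because ave() never reorders or rebuilds the data frame, a filter such as base3[(base3$ScaledAVG > 2) & (base3$ScaledOBP > 2) & (base3$ScaledK.AB < .20), ] can be applied directly afterwards.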
- Mass spectrometry (MS) is a very sensitive technique which can be used to analyse and characterise compounds to give information about their mass, charge, …
- Mass spectrometry-based differential proteomics is a comprehensive analysis of protein expression that involves comparing distinct proteomes, such as cells, …
- Mass Spec Tips is a collection of tips relating to the operation and service of mass spectrometers.
- Teledyne ISCO's PeakTrak® software provides single-point control of all mass spectrometer parameters; the user can select from three predetermined …
- Mass spectrometry is an analytical technique used to measure the mass-to-charge ratio of ions; it is most generally used to find the composition of a physical …
- Mass Spectrometry and Purification Techniques discusses the latest research innovations and important developments in this field.
- Mass spectrometry is a powerful analytical technique used to quantify known materials, to identify unknown compounds within a sample, and to elucidate the …
- Compare and learn about mass spectrometers on Labcompare.
- This overview outlines the role of mass spectrometry (MS) in the field of proteomics, reviews MS methodology and instrumentation, and touches on sample …
- Accelerator mass spectrometry (AMS) dating is an advanced technique used to measure the carbon-14 content of materials; it involves accelerating the ions to …
- Student and professor in lab with mass spectrometer: the mass spectrometer is complementary to the center's existing analytical capability and will be integrated …
- The mass spectrometry unit consists of one SCIEX 5500 QTRAP triple quadrupole tandem mass spectrometer, and one Agilent high-resolution quadrupole-time …
- This page describes how a mass spectrum is produced using a mass spectrometer.
- Ambient mass spectrometry methods address the need for rapid analysis with minimal sample preparation; in our laboratory, we use two main approaches to …
- 2015-2016: Join us at our NCSU MSF literature series, a platform for discussion of important and interesting topics in the mass spectrometry literature; click for …
- The mass spectrometry facility in the chemistry department provides support for the analysis of a wide range of molecules using mass spectrometry-based …
- Mass spectrometry (MS) is an analytical technique that ionizes chemical species and sorts the ions based on their mass-to-charge ratio; in simpler terms, a mass …
- Rev Sci Instrum. 2017 Nov;88(11):113307. doi: 10.1063/1.4986043. The nanopore mass spectrometer. Bush J(1), Maulbetsch W(1), Lepoitevin M(1), Wiener …
- A novel membrane inlet mass spectrometer method to measure NH4 for isotope-enrichment experiments in aquatic ecosystems. Guoyu Yin, Lijun Hou, Min Liu …
- Welcome to the University of Waterloo Mass Spectrometry Facility (UWMSF). As a shared facility, we provide instrumentation and expertise to the University of …
- A simple description of how a mass spectrometer works.
- Mass spectrometry definition: an instrumental method for identifying the chemical constitution of a substance by means of the separation of gaseous ions …
- Selected ion flow tube mass spectrometry (SIFT-MS) is a form of direct mass spectrometry that analyses volatile organic compounds (VOCs) in air with typical …
http://laessayhbrx.jayfindlingjfinnindustries.us/mass-spectrometer.html
Wed 06-02-2019 22:57 PM ABU DHABI, 6th February, 2019 (WAM) — President His Highness Sheikh Khalifa bin Zayed Al Nahyan has bestowed the First Class Order of Zayed II upon Monsignor Yoannis Gaid, the Personal Secretary of His Holiness Pope Francis, Head of the Catholic Church, in appreciation of his efforts leading to the success of peace initiatives and the spread of the culture of peaceful coexistence among followers of different religions. His Highness Sheikh Mohamed bin Zayed Al Nahyan, Crown Prince of Abu Dhabi and Deputy Supreme Commander of the UAE Armed Forces, presented the Order to Monsignor Gaid during the February 3-5 visit of Pope Francis to the UAE. His Highness Sheikh Mohamed bin Zayed appreciated the role played by the Pope's personal secretary in promoting the culture and values of fraternity, harmony and coexistence among people, highlighting the importance of shedding light on the role of men of peace, tolerance and fraternity. H.H. also called for honouring them for their efforts in this respect. For his part, Monsignor Gaid expressed his delight at being honoured with this Order and applauded the UAE's role in spreading the human values of different religions, promoting goodness and cooperation, as well as assisting and accepting the other.
https://totaluae.com/news/president-khalifa-confers-order-of-zayed-ii-on-popes-personal-secretary/
One of the more exciting innovations in scholarly research in recent years has been the introduction of scholarly collaboration networks (SCNs) – platforms that host content and facilitate article sharing and collaboration among researchers. At Elsevier, we found the power and potential of collaboration to advance research so promising that we acquired Mendeley, one of the largest and most innovative SCNs, in April 2013. Sharing and collaboration have always been at the foundation of research and discovery, but the Internet has changed how researchers draft and interact with journal articles using the latest technology, platforms and tools. With this proliferation of online sharing has come a good deal of confusion, as publishers have not yet modernized their policies to deal specifically with sharing on platforms like SCNs. Different platforms have different approaches: some support private sharing of articles, others support public sharing, and some support both. Sometimes the platforms encourage authors to share their own articles, and at other times the platforms encourage researchers to share any article they have access to. For this reason, publishers also need to be clear about how the platforms fit within their green open access and licensing approaches. It can be truly challenging for researchers or libraries to navigate this landscape correctly. Given the popularity of SCNs, and the amount of time and resources it takes to operate them, publishers should provide clear guidelines to SCNs on how to host journal articles, and collaborate with them to develop technical solutions that promote and facilitate responsible sharing. Through collaboration, publishers and SCNs can effectively execute on our shared missions to help support researchers. We can make it possible for researchers to seamlessly share and collaborate across different platforms, overcome a wide range of copyright and technical challenges, and improve their experiences throughout the whole scholarly research cycle. The STM Working Group Voluntary Principles are open for consultation until April 10. More information from STM about their announcement can be found here. We would like to encourage researchers, libraries, SCNs and other stakeholders to find out more about the Principles and comment upon them. When finalized, they will mark a substantial step towards providing clear guidance to SCNs, and we will aim to incorporate them into our policy framework. The Principles reflect our belief that publishers and SCNs can – and should – work together to facilitate sharing which benefits researchers, institutions and society as a whole.
Elsevier Connect Contributor
As VP and Head of Global Corporate Relations at Elsevier, Tom Reller (@TomReller) leads a global team of media, social and web communicators for the world's largest provider of scientific, technical and medical (STM) information products and services. Together, they work to build on Elsevier's reputation by promoting the company's numerous contributions to the health and science communities, many of which are brought to life in this online community and information resource: Elsevier Connect. Tom directs strategy, execution and problem-solving for external corporate communications, including media relations, issues management and policy communications, and acts as a central communications counsel and resource for Elsevier senior management.
Additionally, he develops and nurtures external corporate/institutional relationships that broaden Elsevier's influence and generate good will, including partnerships developed through the Elsevier Foundation.
https://www.elsevier.com/connect/elsevier-welcomes-new-stm-principles-to-facilitate-academic-sharing
- Forest management based on ecosystem services and payments for ecosystem services: considerations following the LIFE+ Making Good Natura project (2017) Forests are important for timber production and provide a wide range of ecosystem services (ES), including water provision and regulation, carbon sequestration, erosion control, and recreational services. However, these ...
- Supporting the management of ecosystem services in protected areas: trade-offs between effort and accuracy in evaluation (2017) Integrating ecosystem services (ES) into the management of protected areas, such as European Natura 2000 sites, can improve biodiversity conservation and human well-being; yet, the assessment and application of ES remains ...
- Ecosystem services in Natura 2000 network sites (2016) The natural ecosystems found in protected areas provide a large quantity of ecosystem services (ES) that are essential to the well-being of local populations and tourists, as well as contributing to the local economy. ...
- Participative Spatial Scenario Analysis for Alpine Ecosystems (2017) Land use and land cover patterns are shaped by the interplay of human and ecological processes. Thus, heterogeneous cultural landscapes have developed, delivering multiple ecosystem services. To guarantee human well-being, ...
- Using conjoint analysis to gain deeper insights into aesthetic landscape preferences (2019) Enjoyable landscapes are important resources for recreational activities and the socio-economic development of tourism destinations. A profound understanding of landscape preferences can support landscape management and ...
- Historical trajectories in land use pattern and grassland ecosystem services in two European alpine landscapes (2017) Land use and spatial patterns which reflect social-ecological legacies control ecosystem service (ES) supply. Yet, temporal changes in ES bundles associated with land use change are little studied. We developed original ...
- Future impacts of changing land-use and climate on ecosystem services of mountain grassland and their resilience (2017) Although the ecosystem services provided by mountain grasslands have been demonstrated to be highly vulnerable to environmental and management changes in the past, it remains unclear how they will be affected in the face ...
- Integrating supply, flow and demand to enhance the understanding of interactions among multiple ecosystem services (2019) A comprehensive understanding of the relationships among ecosystem services (ES) is important for landscape management, decision-making and policy development, but interactions among multiple ES remain under-researched. ...
- Operationalising ecosystem services for effective management of protected areas: Experiences and challenges (2017) Protected areas are crucial for biodiversity conservation and the provision of ecosystem services (ES), but management efforts seem not to be sufficient. To increase management effectiveness, the ES framework offers new ...
https://bia.unibz.it/handle/10863/6032/discover?filtertype_0=type&filtertype_1=author&filter_relational_operator_1=equals&filter_relational_operator_0=equals&filter_1=Schirpke+U&filter_0=Article&filtertype=availability&filter_relational_operator=equals&filter=none
Hippocampal functional connectivity-based discrimination between bipolar and major depressive disorders. Psychiatry Res Neuroimaging. 2019 Jan 12;284:53-60 Authors: Fateh AA, Long Z, Duan X, Cui Q, Pang Y, Farooq MU, Nan X, Chen Y, Sheng W, Tang Q, Chen H Abstract Despite the impressive advancements in the neuropathology of mood disorders, patients with bipolar disorder (BD) are often misdiagnosed at initial presentation as having major depressive disorder (MDD). With supporting evidence from neuroimaging studies, abnormal functional connectivity (FC) of the hippocampus has been associated with various mood disorders, including BD and MDD. However, the features of the hippocampal FC underlying MDD and BD have not been directly compared. This study used hippocampal resting-state FC (rsFC) analyses to distinguish between these two clinical conditions. Resting-state functional magnetic resonance imaging (fMRI) data were collected from 30 patients with BD, 29 patients with MDD and 30 healthy controls (HCs). One-way ANOVA was employed to assess potential differences in hippocampal FC across all subjects. BD patients exhibited increased FC of the bilateral anterior/posterior hippocampus with lingual gyrus and inferior frontal gyrus (IFG) relative to MDD patients. In comparison with HCs, patients with BD and MDD had an increased FC between the right anterior hippocampus and lingual gyrus and a decreased FC between the right posterior hippocampus and right IFG. The results revealed a distinct hippocampal FC in MDD patients compared with that observed in BD patients. These findings may assist investigators in attempting to distinguish mood disorders by using fMRI data. PMID: 30684896 [PubMed - as supplied by publisher] Increased functional segregation of brain network associated with symptomatology and sustained attention in chronic post-traumatic stress disorder. J Affect Disord. 2019 Jan 17;247:183-191 Authors: Zhu H, Li Y, Yuan M, Ren Z, Yuan C, Meng Y, Wang J, Deng W, Qiu C, Huang X, Gong Q, Lui S, Zhang W Abstract BACKGROUND: Traditional regional or voxel-based analyses only focus on specific brain regions or connectivity rather than the whole brain's functional organization. Using resting state functional magnetic resonance imaging (rs-fMRI), we aimed to explore the altered topological metrics, clinical symptoms and cognitive function in chronic post-traumatic stress disorder (PTSD) in order to identify the brain network mechanisms underlying these clinical and cognitive symptoms. METHODS: Forty patients with unmedicated chronic PTSD and forty-two matched trauma-exposed healthy controls (TEHCs) underwent rs-fMRI, and the topological organization of the whole-brain network was calculated using graph theory. The Rapid Visual Information Processing (RVP) task and Wechsler Memory Scale-IV (WMS-IV) were used to evaluate the subjects' sustained attention and memory capacity. All clinical and cognitive measures and topological parameters of the PTSD patients and TEHCs were compared, and the relationships between altered network metrics and symptom severity were explored. RESULTS: Compared with the TEHCs, the patients showed increases in the normalized clustering coefficient, small-worldness, normalized local efficiency and efficiency-based small-worldness. The left middle occipital gyrus showed increases in nodal global efficiency and nodal degree that were negatively correlated with the severity of PTSD symptoms. 
The altered connections in PTSD only involved the default mode network (DMN) and the occipital network. LIMITATIONS: Comorbid conditions were included, and the current cross-sectional study cannot establish causality. CONCLUSIONS: Patients with chronic PTSD showed increased functional brain network segregation, mainly in the occipital cortex, which could be a protective or compensatory mechanism to alleviate clinical symptoms. PMID: 30684892 [PubMed - as supplied by publisher] Active information storage in Parkinson's disease: a resting state fMRI study over the sensorimotor cortex. Brain Imaging Behav. 2019 Jan 25;: Authors: Puche Sarmiento AC, Bocanegra García Y, Ochoa Gómez JF Abstract Parkinson's disease (PD), the second most frequent neurodegenerative disease, significantly affects quality of life through a combination of motor and cognitive disturbances. Although it is traditionally associated with basal ganglia dysfunction, cortical alterations are also involved in disease symptoms. Our objective is to evaluate the alterations in brain dynamics in de novo and recently treated PD subjects using a nonlinear method known as Active Information Storage. In the current research, Active Information Storage (AIS) was used to study the complex dynamics in motor cortex spontaneous activity captured using resting state functional Magnetic Resonance Imaging (rs-fMRI) at an early stage in non-medicated and recently medicated PD subjects. Supplementary to AIS, the fractional Amplitude of Low Frequency Fluctuation (fALFF), a better-established technique for the analysis of rs-fMRI signals, was also evaluated. Compared to healthy subjects, the AIS values were significantly reduced in PD patients over the analyzed motor cortex regions; differences were also found, to a lesser extent, using the fALFF measure. Correlations between AIS and fALFF values showed that the measures seem to capture similar neuronal phenomena in rs-fMRI data. The higher sensitivity of AIS in detecting group differences, which were not captured by traditional linear approaches, suggests that this measure is a promising tool for the analysis of rs-fMRI neural data in PD. PMID: 30684153 [PubMed - as supplied by publisher] Hyperactivity/restlessness is associated with increased functional connectivity in adults with ADHD: a dimensional analysis of resting state fMRI. BMC Psychiatry. 2019 Jan 25;19(1):43 Authors: Sörös P, Hoxhaj E, Borel P, Sadohara C, Feige B, Matthies S, Müller HHO, Bachmann K, Schulze M, Philipsen A Abstract BACKGROUND: Adult attention-deficit/hyperactivity disorder (ADHD) is a serious and frequent psychiatric disorder of multifactorial pathogenesis. Several lines of evidence support the idea that ADHD is, in its core, a disorder of dysfunctional brain connectivity within and between several neurofunctional networks. The primary aim of this study was to investigate associations between the functional connectivity within resting state brain networks and the individual severity of core ADHD symptoms (inattention, hyperactivity, and impulsivity). METHODS: Resting state functional magnetic resonance imaging (rs-fMRI) data of 38 methylphenidate-naïve adults with childhood-onset ADHD (20 women, mean age 40.5 years) were analyzed using independent component analysis (FSL's MELODIC) and FSL's dual regression technique. For motion correction, standard volume-realignment followed by independent component analysis-based automatic removal of motion artifacts (FSL's ICA-AROMA) was employed. 
To identify well-established brain networks, the independent components found in the ADHD group were correlated with brain networks previously found in healthy participants (Smith et al. PNAS 2009;106:13040-5). To investigate associations between functional connectivity and individual symptom severity, sex, and age, linear regressions were performed. RESULTS: Decomposition of resting state brain activity of adults with ADHD resulted in similar resting state networks as previously described for healthy adults. No significant differences in functional connectivity were seen between women and men. Advanced age was associated with decreased functional connectivity in parts of the bilateral cingulate and paracingulate cortex within the executive control network. More severe hyperactivity was associated with increased functional connectivity in the left putamen, right caudate nucleus, right central operculum and a portion of the right postcentral gyrus within the auditory/sensorimotor network. CONCLUSIONS: The present study supports and extends our knowledge on the involvement of the striatum in the pathophysiology of ADHD, in particular, in the pathogenesis of hyperactivity. Our results emphasize the usefulness of dimensional analyses in the study of ADHD, a highly heterogeneous disorder. TRIAL REGISTRATION: ISRCTN12722296 ( https://doi.org/10.1186/ISRCTN12722296 ). PMID: 30683074 [PubMed - in process] Long-term Chinese calligraphic handwriting training has a positive effect on brain network efficiency. PLoS One. 2019;14(1):e0210962 Authors: Chen W, He Y, Chen C, Zhu M, Bi S, Liu J, Xia M, Lin Q, Wang Y, Wang W Abstract As a visual art form, Chinese calligraphic handwriting (CCH) has been found to correlate with certain brain activity and to induce functional connectivity reorganization of the brain. This study investigated the effect of long-term CCH training on brain functional plasticity as assessed with network measures. With the resting-state fMRI data from 31 participants with at least five years of CCH training and 40 controls, we constructed brain functional networks, examined group differences at both the whole brain and modular levels, and correlated the topological characteristics with calligraphy skills. We found that, compared to the control group, the CCH group showed shorter characteristic path lengths and higher local efficiency in certain brain areas in the frontal and parietal cortices, limbic system, basal ganglia, and thalamus. Moreover, these network measures in the cingulate cortex, caudate nucleus, and thalamus were associated with CCH performance (i.e., copying and creating skills). These results suggest that long-term CCH training has a positive effect on the topological characteristics of brain networks. PMID: 30682084 [PubMed - in process] Absence of dentate nucleus resting-state functional connectivity changes in nonneurological patients with gadolinium-related hyperintensity on T1 -weighted images. J Magn Reson Imaging. 2019 Jan 25;: Authors: Mallio CA, Piervincenzi C, Gianolio E, Cirimele V, Papparella LG, Marano M, Quintiliani L, Aime S, Carducci F, Parizel PM, Quattrocchi CC Abstract BACKGROUND: The dentate nuclei of the cerebellum are the areas where gadolinium predominantly accumulates. It is not yet known whether gadolinium deposition affects brain functions. 
PURPOSE/HYPOTHESIS: To assess whether gadolinium-dependent high signal intensity of the cerebellum on T1-weighted images of nonneurological adult patients with Crohn's disease is associated with modifications of resting-state functional connectivity (RSFC) of the cerebellum and dentate nucleus. STUDY TYPE: Observational, cross-sectional. POPULATION: Fifteen patients affected by Crohn's disease were compared with 16 healthy age- and gender-matched control subjects. All participants underwent neurological, neurocognitive-psychological assessment, and blood sampling. FIELD STRENGTH/SEQUENCE: 1.5-T magnet blood oxygenation level-dependent (BOLD) functional MRI. ASSESSMENT: High signal intensity on T1-weighted images, cerebellum functional connectivity, neurocognitive performance, and blood circulating gadolinium levels. STATISTICAL TESTS: An unpaired two-sample t-test (age and sex were nuisance variables) was used to investigate between-group differences in cerebellar and dentate nucleus functional connectivity. Z-statistical images were set using clusters determined by Z > 2.3 and a familywise error (FWE)-corrected cluster significance threshold of P = 0.05. RESULTS: Dentate nuclei RSFC was not different (P = n.s.) between patients with gadolinium-dependent high signal intensity on T1-weighted images and controls. The pre- and postcentral gyri bilaterally and the right supplementary motor cortex showed a decrease in RSFC with the cerebellar hemispheres (P < 0.05 FWE-corrected), and this decrease was related to disease duration but not to cumulative gadodiamide doses (P = n.s.). DATA CONCLUSION: Crohn's disease patients with gadolinium-dependent hyperintense dentate nuclei on unenhanced T1-weighted images do not show dentate nucleus RSFC changes. LEVEL OF EVIDENCE: 2 Technical Efficacy Stage: 5. PMID: 30681245 [PubMed - as supplied by publisher] Transient states of network connectivity are atypical in autism: A dynamic functional connectivity study. Hum Brain Mapp. 2019 Jan 25;: Authors: Mash LE, Linke AC, Olson LA, Fishman I, Liu TT, Müller RA Abstract There is ample evidence of atypical functional connectivity (FC) in autism spectrum disorders (ASDs). However, transient relationships between neural networks cannot be captured by conventional static FC analyses. Dynamic FC (dFC) approaches have been used to identify repeating, transient connectivity patterns ("states"), revealing spatiotemporal network properties not observable in static FC. Recent studies have found atypical dFC in ASDs, but questions remain about the nature of group differences in transient connectivity, and the degree to which states persist or change over time. This study aimed to: (a) describe and relate static and dynamic FC in typical development and ASDs, (b) describe group differences in transient states and compare them with static FC patterns, and (c) examine temporal stability and flexibility between identified states. Resting-state functional magnetic resonance imaging (fMRI) data were collected from 62 ASD and 57 typically developing (TD) children and adolescents. Whole-brain, data-driven regions of interest were derived from group independent component analysis. Sliding window analysis and k-means clustering were used to explore dFC and identify transient states. Across all regions, static overconnectivity and increased variability over time in ASDs predominated. 
Furthermore, significant patterns of group differences emerged in two transient states that were not observed in the static FC matrix, with group differences in one state primarily involving sensory and motor networks, and in the other involving higher-order cognition networks. Default mode network segregation was significantly reduced in ASDs in both states. Results highlight that dynamic approaches may reveal more nuanced transient patterns of atypical FC in ASDs. PMID: 30681228 [PubMed - as supplied by publisher] Reward network connectivity "at rest" is associated with reward sensitivity in healthy adults: A resting-state fMRI study. Cogn Affect Behav Neurosci. 2019 Jan 24;: Authors: Adrián-Ventura J, Costumero V, Parcet MA, Ávila C Abstract The behavioral approach system (BAS), based on reinforcement sensitivity theory (RST), is a neurobehavioral system responsible for detecting and promoting motivated behaviors towards appetitive stimuli. Anatomically, the frontostriatal system has been proposed as the core of the BAS, mainly the ventral tegmental area and the ventral striatum and their dopaminergic connections with medial prefrontal structures. The RST also proposes the personality trait of reward sensitivity as a measurable construct of stable individual differences in BAS activity. However, the relationship between this trait and brain connectivity "at rest" has been poorly studied, mainly because previous investigations have focused on studying brain activity under reward-related contingency paradigms. Here, we analyzed the influence of reward sensitivity on the resting-state functional connectivity (rs-FC) between BAS-related areas by correlating the BOLD time series with the scores on the Sensitivity to Reward (SR) scale in a sample of 89 healthy young adults. Rs-FC between regions of interest were all significant. Results also revealed a positive association between SR scores and the rs-FC between the VTA and the ventromedial prefrontal cortex, and between the latter structure and the anterior cingulate cortex. These results suggest that reward sensitivity could be associated with different resting-state activity in the mesocortical pathway. PMID: 30680664 [PubMed - as supplied by publisher] The effects of lutein and zeaxanthin on resting state functional connectivity in older Caucasian adults: a randomized controlled trial. Brain Imaging Behav. 2019 Jan 24;: Authors: Lindbergh CA, Lv J, Zhao Y, Mewborn CM, Puente AN, Terry DP, Renzi-Hammond LM, Hammond BR, Liu T, Miller LS Abstract The carotenoids lutein (L) and zeaxanthin (Z) accumulate in retinal regions of the eye and have long been shown to benefit visual health. A growing literature suggests cognitive benefits as well, particularly in older adults. The present randomized controlled trial sought to investigate the effects of L and Z on brain function using resting state functional magnetic resonance imaging (fMRI). It was hypothesized that L and Z supplementation would (1) improve intra-network integrity of default mode network (DMN) and (2) reduce inter-network connectivity between DMN and other resting state networks. 48 community-dwelling older adults (mean age = 72 years) were randomly assigned to receive a daily L (10 mg) and Z (2 mg) supplement or a placebo for 1 year. Resting state fMRI data were acquired at baseline and post-intervention. 
A dictionary learning and sparse coding computational framework, based on machine learning principles, was used to investigate intervention-related changes in functional connectivity. DMN integrity was evaluated by calculating spatial overlap rate with a well-established DMN template provided in the neuroscience literature. Inter-network connectivity was evaluated via time series correlations between DMN and nine other resting state networks. Contrary to expectation, results indicated that L and Z significantly increased rather than decreased inter-network connectivity (Cohen's d = 0.89). A significant intra-network effect on DMN integrity was not observed. Rather than restoring what has been described in the available literature as a "youth-like" pattern of intrinsic brain activity, L and Z may facilitate the aging brain's capacity for compensation by enhancing integration between networks that tend to be functionally segregated earlier in the lifespan. PMID: 30680611 [PubMed - as supplied by publisher] Neurometabolic and functional connectivity basis of prosocial behavior in early adolescence. Sci Rep. 2019 Jan 24;9(1):732 Authors: Okada N, Yahata N, Koshiyama D, Morita K, Sawada K, Kanata S, Fujikawa S, Sugimoto N, Toriyama R, Masaoka M, Koike S, Araki T, Kano Y, Endo K, Yamasaki S, Ando S, Nishida A, Hiraiwa-Hasegawa M, Edden RAE, Barker PB, Sawa A, Kasai K Abstract Human prosocial behavior (PB) emerges in childhood and matures during adolescence. Previous task-related functional magnetic resonance imaging (fMRI) studies have reported involvement of the medial prefrontal cortex including the anterior cingulate cortex (ACC) in social cognition in adolescence. However, neurometabolic and functional connectivity (FC) basis of PB in early adolescence remains unclear. Here, we measured GABA levels in the ACC and FC in a subsample (aged 10.5-13.4 years) of a large-scale population-based cohort with MR spectroscopy (MEGA-PRESS) and resting-state fMRI. PB was negatively correlated with GABA levels in the ACC (N = 221), and positively correlated with right ACC-seeded FC with the right precentral gyrus and the bilateral middle and posterior cingulate gyrus (N = 187). Furthermore, GABA concentrations and this FC were negatively correlated, and the FC mediated the association between GABA levels and PB (N = 171). Our results from a minimally biased, large-scale sample provide new insights into the neurometabolic and neurofunctional correlates of prosocial development during early adolescence. PMID: 30679738 [PubMed - in process] Repetitive Transcranial Electrical Stimulation Induces Quantified Changes in Resting Cerebral Perfusion Measured from Arterial Spin Labeling. Neural Plast. 2018;2018:5769861 Authors: Sherwood MS, Madaris AT, Mullenger CR, McKinley RA Abstract The use of transcranial electrical stimulation (TES) as a method to augment neural activity has increased in popularity in the last decade and a half. The specific application of TES to the left prefrontal cortex has been shown to produce broad cognitive effects; however, the neural mechanisms underlying these effects remain unknown. In this work, we evaluated the effect of repetitive TES on cerebral perfusion. Stimulation was applied to the left prefrontal cortex on three consecutive days, and resting cerebral perfusion was quantified before and after stimulation using arterial spin labeling. 
Perfusion was found to decrease significantly more in a matched sham stimulation group than in a group receiving active stimulation across many areas of the brain. These changes were found to originate in the locus coeruleus and were broadly distributed in the neocortex. The changes in the neocortex may be a direct result of the stimulation or an indirect result via the changes in the noradrenergic system produced from the altered activity of the locus coeruleus. These findings indicate that anodal left prefrontal stimulation alters the activity of the locus coeruleus, and this altered activity may excite the noradrenergic system producing the broad behavioral effects that have been reported. PMID: 30254668 [PubMed - indexed for MEDLINE] A domain-general brain network underlying emotional and cognitive interference processing: evidence from coordinate-based and functional connectivity meta-analyses. Brain Struct Funct. 2018 Nov;223(8):3813-3840 Authors: Chen T, Becker B, Camilleri J, Wang L, Yu S, Eickhoff SB, Feng C Abstract The inability to control or inhibit emotional distractors characterizes a range of psychiatric disorders. Despite the use of a variety of task paradigms to determine the mechanisms underlying the control of emotional interference, a precise characterization of the brain regions and networks that support emotional interference processing remains elusive. Here, we performed coordinate-based and functional connectivity meta-analyses to determine the brain networks underlying emotional interference. Paradigms addressing interference processing in the cognitive or emotional domain were included in the meta-analyses, particularly the Stroop, Flanker, and Simon tasks. Our results revealed a consistent involvement of the bilateral dorsal anterior cingulate cortex, anterior insula, left inferior frontal gyrus, and superior parietal lobule during emotional interference. Follow-up conjunction analyses identified correspondence in these regions between emotional and cognitive interference processing. Finally, the patterns of functional connectivity of these regions were examined using resting-state functional connectivity and meta-analytic connectivity modeling. These regions were strongly connected as a distributed system, primarily mapping onto fronto-parietal control, ventral attention, and dorsal attention networks. Together, the present findings indicate that a domain-general neural system is engaged across multiple types of interference processing and that regulating emotional and cognitive interference depends on interactions between large-scale distributed brain networks. PMID: 30083997 [PubMed - indexed for MEDLINE] Local connectivity of the resting brain connectome in patients with low back-related leg pain: A multiscale frequency-related Kendall's coefficient of concordance and coherence-regional homogeneity study. Neuroimage Clin. 2019 Jan 14;21:101661 Authors: Zhou F, Wu L, Guo L, Zhang Y, Zeng X Abstract Increasing evidence has suggested that central plasticity plays a crucial role in the development and maintenance of (chronic) nonspecific low back pain. However, it is unclear how local or short-distance functional interactions contribute to persisting low back-related leg pain (LBLP) due to a specific condition (i.e., lumbar disc herniation). In particular, the multiscale nature of local connectivity properties in various brain regions is still unclear. 
Here, we used voxelwise Kendall's coefficient of concordance (KCC) and coherence (Cohe) regional homogeneity (ReHo) in the typical (0.01-0.1 Hz) and five specific frequency (slow-6 to slow-2) bands to analyze individual whole-brain resting-state functional magnetic resonance imaging scans in 25 persistent LBLP patients (duration: 36.7 ± 9.6 months) and 26 healthy control subjects. Between-group differences demonstrated significant alterations in the KCC-ReHo and Cohe-ReHo of the right cerebellum posterior lobe, brainstem, left medial prefrontal cortex and bilateral precuneus in LBLP patients in the typical and five specific frequency bands, respectively, along with interactions between disease status and the five specific frequency bands in several regions of the pain matrix and the default-mode network (P < .01, Gaussian random field theory correction). The altered ReHo in the five specific frequency bands was correlated with the duration of pain and two-point discrimination, which were assessed using partial correlational analysis. These results linked the course of disease to the local connectivity properties in specific frequency bands in persisting LBLP. In future studies exploring local connectome association in pain conditions, integrated frequency bands and analytical methods should be considered. PMID: 30677731 [PubMed - as supplied by publisher] Extraction of time-varying spatio-temporal networks using parameter-tuned constrained IVA. IEEE Trans Med Imaging. 2019 Jan 23;: Authors: Bhinge S, Mowakeaa R, Calhoun VD, Adali T Abstract Dynamic functional connectivity (dFC) analysis is an effective way to capture the networks that are functionally associated and continuously changing over the scanning period. However, these methods mostly analyze the dynamic associations across the activation patterns of the spatial networks while assuming that the spatial networks are stationary. Hence, a model that allows for the variability in both domains and reduces the assumptions imposed on the data provides an effective way for extracting spatio-temporal networks. Independent vector analysis is a joint blind source separation technique that allows for estimation of spatial and temporal features while successfully preserving variability. However, its performance degrades as the number of datasets increases. Hence, we develop an effective two-stage method to extract time-varying spatial and temporal features using IVA, mitigating the problems associated with a higher number of datasets while preserving the variability across subjects and time. The first stage extracts reference signals using group independent component analysis (GICA); these are then used in a parameter-tuned constrained IVA (pt-cIVA) framework to estimate time-varying representations of these signals by preserving the variability through tuning the constraint parameter. This approach effectively captures variability across time from large-scale resting-state fMRI data acquired from healthy controls and patients with schizophrenia and identifies more functionally relevant connections that are significantly different between healthy controls and patients with schizophrenia, compared with the widely used GICA method alone. PMID: 30676948 [PubMed - as supplied by publisher] BST1 rs4698412 allelic variant increases the risk of gait or balance deficits in patients with Parkinson's disease. CNS Neurosci Ther. 
2019 Jan 24;: Authors: Shen YT, Wang JW, Wang M, Zhi Y, Li JY, Yuan YS, Wang XX, Zhang H, Zhang KZ Abstract AIMS: We aimed to explore the effects of the bone marrow stromal cell antigen-1 (BST1) rs4698412 allelic variant on brain activation and associated clinical symptoms in Parkinson's disease (PD). METHODS: A total of 49 PD patients and 47 healthy control (HC) subjects were recruited for clinical evaluations, blood sample collection for genotyping, and resting-state functional MRI (rs-fMRI) scans. Based on the BST1 rs4698412 allelic variant (G → A), participants were further divided into 18 PD-GG, 31 PD-GA/AA, 20 HC-GG, and 27 HC-GA/AA carriers, indicating PD or HC subjects carrying the ancestral or risk allele at that locus. Two-way analysis of covariance (ANCOVA) was applied to investigate main effects and interactions between PD and the BST1 rs4698412 allelic variant on brain function via amplitude of low-frequency fluctuations (ALFF). Spearman's correlations were then utilized to detect associations between interactive brain regions and clinical symptoms. RESULTS: Compared to HC subjects, PD patients exhibited increased ALFF values in left cerebellum_8 and cerebellum_9. A significant interaction was found in the right lingual gyrus, where ALFF values were lowest and were negatively associated with Timed Up and Go (TUG) test time only in the PD-GA/AA subgroup. CONCLUSION: BST1 rs4698412-modulated lingual gyrus functional alterations could be related to gait and balance dysfunction in PD. PMID: 30676692 [PubMed - as supplied by publisher] Physical characteristics, not psychological state or trait characteristics, predict motion during resting state fMRI. Sci Rep. 2019 Jan 23;9(1):419 Authors: Ekhtiari H, Kuplicki R, Yeh HW, Paulus MP Abstract Head motion (HM) during fMRI acquisition can significantly affect measures of brain activity or connectivity even after correction with preprocessing methods. Moreover, any systematic relationship between HM and variables of interest can introduce systematic bias. There is a large and growing interest in identifying neural biomarkers for psychiatric disorders using resting state fMRI (rsfMRI). However, the relationship between HM and different psychiatric symptom domains is not well understood. The aim of this investigation was to determine whether psychiatric symptoms and other characteristics of the individual predict HM during rsfMRI. A sample of n = 464 participants (174 male) from the Tulsa1000, a naturalistic longitudinal study recruiting subjects with different levels of severity in mood/anxiety/substance use disorders based on the dimensional NIMH Research Domain Criteria framework, was used for this study. Based on a machine learning (ML) pipeline with nested cross-validation to avoid overfitting, the stacked model with 15 anthropometric (like body mass index, BMI) and demographic (age and sex) variables identified BMI and weight as the most important variables and explained 10.9 percent of the HM variance (95% CI: 9.9-11.8). In comparison, ML models with 105 self-report measures for state and trait psychological characteristics identified nicotine and alcohol use variables as well as impulsivity inhibitory control variables but explained only 5 percent of HM variance (95% CI: 3.5-6.4). A combined ML model using all 120 variables did not perform significantly better than the model using only 15 physical variables (combined model 95% confidence interval: 10.2-12.4). 
Taken together, after considering physical variables, state or trait psychological characteristics do not provide additional power to predict motion during rsfMRI. PMID: 30674933 [PubMed - in process] Prognostication of chronic disorders of consciousness using brain functional networks and clinical characteristics. Elife. 2018 08 14;7: Authors: Song M, Yang Y, He J, Yang Z, Yu S, Xie Q, Xia X, Dang Y, Zhang Q, Wu X, Cui Y, Hou B, Yu R, Xu R, Jiang T Abstract Disorders of consciousness are a heterogeneous mixture of different diseases or injuries. Although some indicators and models have been proposed for prognostication, any single method when used alone carries a high risk of false prediction. This study aimed to develop a multidomain prognostic model that combines resting state functional MRI with three clinical characteristics to predict one-year outcomes at the single-subject level. The model discriminated between patients who would later recover consciousness and those who would not with an accuracy of around 88% on three datasets from two medical centers. It was also able to identify the prognostic importance of different predictors, including brain functions and clinical characteristics. To our knowledge, this is the first reported implementation of a multidomain prognostic model that is based on resting state functional MRI and clinical characteristics in chronic disorders of consciousness, which we suggest is accurate, robust, and interpretable. PMID: 30106378 [PubMed - indexed for MEDLINE] Correlation between intrinsic brain activity and thyroid-stimulating hormone level in unmedicated bipolar II depression. Neuroendocrinology. 2019 Jan 23;: Authors: Zhong S, Chen G, Zhao L, Jia Y, Chen F, Qi Z, Huang L, Wang Y Abstract Background/aims: Abnormalities in the amplitude of low-frequency fluctuations (ALFF) and in hormone levels of the hypothalamus-pituitary-thyroid (HPT) axis have been reported in patients with bipolar disorder (BD). However, the associations between abnormal ALFF and serum thyroid hormone levels remain unknown. METHOD: Ninety patients with unmedicated BD II depression and 100 healthy controls (HCs) underwent resting-state functional magnetic resonance imaging (rs-fMRI), and then routine band (0.01-0.1Hz), slow 5 band (0.01-0.027Hz) and slow 4 band (0.027-0.073Hz) ALFF analyses were performed. Additionally, serum thyroid hormone levels, including free tri-iodothyronine (FT3), total tri-iodothyronine (TT3), free thyroxin (FT4), total thyroxin (TT4) and thyroid-stimulating hormone (TSH), were measured. Then the correlation between abnormal serum thyroid hormone levels and ALFF values in patients with BD II depression was calculated. RESULTS: Compared with the HCs, the patients with BD II depression showed decreased ALFF in the bilateral precuneus (PCu)/posterior cingulate cortex (PCC) in routine and slow-4 frequency bands, decreased ALFF in the right PCu and increased ALFF in the right middle occipital gyrus (MOG) in the slow-5 frequency band. Additionally, the patients with BD II depression showed a lower TSH level than HCs, and the TSH level was positively correlated with ALFF values in the bilateral PCu/PCC in the routine frequency band. CONCLUSIONS: These findings suggest that the patients with BD II depression display intrinsic activity abnormalities mainly in the PCu/PCC and MOG, which are associated with specific frequency bands. Moreover, altered intrinsic activity in the PCu/PCC may be related to TSH levels in bipolar II depression. 
PMID: 30673659 [PubMed - as supplied by publisher] The Cerebellar Predictions for Social Interactions: Theory of Mind Abilities in Patients With Degenerative Cerebellar Atrophy. Front Cell Neurosci. 2018;12:510 Authors: Clausi S, Olivito G, Lupo M, Siciliano L, Bozzali M, Leggio M Abstract Recent studies have focused on the role of the cerebellum in the social domain, including in Theory of Mind (ToM). ToM, or the "mentalizing" process, is the ability to attribute mental states, such as emotion, intentions and beliefs, to others to explain and predict their behavior. It is a fundamental aspect of social cognition and crucial for social interactions, together with more automatic mechanisms, such as emotion contagion. Social cognition requires complex interactions between limbic, associative areas and subcortical structures, including the cerebellum. It has been hypothesized that the typical cerebellar role in adaptive control and predictive coding could also be extended to social behavior. The present study aimed to investigate the social cognition abilities of patients with degenerative cerebellar atrophy to understand whether the cerebellum acts in specific ToM components playing a role as predictive structure. To this aim, an ad hoc social cognition battery was administered to 27 patients with degenerative cerebellar pathology and 27 healthy controls. In addition, 3D T1-weighted and resting-state fMRI scans were collected to characterize the structural and functional changes in cerebello-cortical loops. The results evidenced that the patients were impaired in lower-level processes of immediate perception as well as in the more complex conceptual level of mentalization. Furthermore, they presented a pattern of GM reduction in cerebellar portions that are involved in the social domain such as crus I-II, lobule IX and lobule VIIIa. These areas showed decreased functional connectivity with projection cerebral areas involved in specific aspects of social cognition. These findings boost the idea that the cerebellar modulatory function on the cortical projection areas subtends the social cognition process at different levels. Particularly, regarding the lower-level processes, the cerebellum may act by implicitly matching the external information (i.e., expression of the eyes) with the respective internal representation to guarantee an immediate judgment about the mental state of others. Otherwise, at a more complex conceptual level, the cerebellum seems to be involved in the construction of internal models of mental processes during social interactions in which the prediction of sequential events plays a role, allowing us to anticipate the other person's behavior. PMID: 30670949 [PubMed] Towards fast and reliable simultaneous EEG-fMRI analysis of epilepsy with automatic spike detection. Clin Neurophysiol. 2018 Dec 17;130(3):368-378 Authors: Omidvarnia A, Kowalczyk MA, Pedersen M, Jackson GD Abstract OBJECTIVE: The process of manually marking up epileptic spikes for simultaneous electroencephalogram (EEG) and resting state functional MRI (rsfMRI) analysis in epilepsy studies is a tedious and subjective task for a human expert. The aim of this study was to evaluate whether automatic EEG spike detection can facilitate EEG-rsfMRI analysis, and to assess its potential as a clinical tool in epilepsy. 
METHODS: We implemented a fast algorithm for detection of uniform interictal epileptiform discharges (IEDs) in one-hour scalp EEG recordings of 19 refractory focal epilepsy datasets (from 16 patients) who underwent a simultaneous EEG-rsfMRI recording. Our method was based on matched filtering of an IED template (derived from human markup) used to automatically detect other 'similar' EEG events. We compared simultaneous EEG-rsfMRI results between automatic IED detection and standard analysis with human EEG markup only. RESULTS: In contrast to human markup, automatic IED detection takes a much shorter time to detect IEDs and export an output text file containing spike timings. In 13/19 focal epilepsy datasets, statistical EEG-rsfMRI maps based on automatic spike detection method were comparable with human markup, and in 6/19 focal epilepsy cases automatic spike detection revealed additional brain regions not seen with human EEG markup. Additional events detected by our automated method independently revealed similar patterns of activation to a human markup. Overall, automatic IED detection provides greater statistical power in EEG-rsfMRI analysis compared to human markup in a short timeframe. CONCLUSIONS: Automatic spike detection is a simple and fast method that can reproduce comparable and, in some cases, even superior results compared to the common practice of manual EEG markup in EEG-rsfMRI analysis of epilepsy. SIGNIFICANCE: Our study shows that IED detection algorithms can be effectively used in epilepsy clinical settings. This work further helps in translating EEG-rsfMRI research into a fast, reliable and easy-to-use clinical tool for epileptologists.
http://www.rfmri.org/NewRFMRIStudies?page=7
Article Peer Reviewed Early time dynamics of laser-ablated silicon using ultrafast grazing incidence X-ray scattering Hull, C Raj, S Lam, R Katayama, T Pascal, T Drisdell, WS Saykally, R Schwartz, CP et al. Recent Work (2019) Controlling the morphology of laser-derived nanomaterials is dependent on developing a better understanding of the particle nucleation dynamics in the ablation plume. Here, we utilize the femtosecond-length pulses from an x-ray free electron laser to perform time-resolved grazing incidence x-ray scattering measurements on a laser-produced silicon plasma plume. At 20 ps we observe a dramatic increase in the scattering amplitude at small scattering vectors, which we attribute to incipient formation of liquid silicon droplets. These results demonstrate the utility of XFELs as a tool for characterizing the formation dynamics of nanomaterials in laser-produced plasma plumes on ultrafast timescales. Article Peer Reviewed A Hybrid Catalyst-Bonded Membrane Device for Electrochemical Carbon Monoxide Reduction at Different Relative Humidities Sullivan, I Han, L Lee, SH Lin, M Larson, DM Drisdell, WS Xiang, C et al. Recent Work (2019) A hybrid catalyst-bonded membrane device using gaseous reactants for a carbon monoxide reduction (COR) reaction in the cathode chamber, an aqueous electrolyte for an oxygen evolution reaction (OER) in the anode chamber, and an anion exchange membrane (AEM) for product separation was modeled, constructed, and tested. The Cu electrocatalyst was electrodeposited onto gas diffusion layers (GDLs) and was directly bonded to the AEM by mechanical pressing in the hybrid device. The impacts of relative humidity at the cathode inlet on the selectivity and activity of COR were investigated by computational modeling and experimental methods. At a relative humidity of 30%, the Cu-based catalyst in the hybrid device exhibited a total operating current density of 87 mA cm-2 at -2.0 V vs the Ag/AgCl reference electrode, a Faradaic efficiency (FE) for C2H4 generation of 32.6%, and an FE for a liquid-based carbon product of 42.6%. Significant improvements in the partial current densities for COR were observed in relation to planar electrodes or flooded gas diffusion electrodes (GDEs). In addition, a custom test bed was constructed to characterize the oxidation states of the Cu catalysts in real time along with product analysis through the backside of the GDLs via operando X-ray absorption (XAS) measurements. Article Peer Reviewed Electronic Structure, Optoelectronic Properties, and Photoelectrochemical Characteristics of γ-Cu3V2O8 Thin Films Jiang, CM Farmand, M Wu, CH Liu, YS Guo, J Drisdell, WS Cooper, JK Sharp, ID et al. Recent Work (2017) Thin films of n-type γ-Cu3V2O8 are prepared with high phase purity via reactive co-sputtering deposition. Complementary X-ray spectroscopic methods are used to reveal that the valence band maximum consists of O 2p states, while the conduction band minimum is primarily composed of Cu 3d states. Therefore, γ-Cu3V2O8 is classified as a charge transfer insulator, in which the 1.80 eV indirect band gap corresponds to the O 2p → Cu 3d transition. Through photoelectrochemical measurements, the surface of γ-Cu3V2O8 photoanodes is found to display intrinsic activity for catalyzing water oxidation that is stable with time. 
The combination of a small optical band gap, suitable valence band energy, and excellent photoelectrochemical stability suggests that γ-Cu3V2O8 could be a promising photoanode material. However, it is found that the charge extraction efficiency from these semiconductor photoanodes is strongly limited by a short (20-40 nm) hole diffusion length. Characterization of the electronic structure and transport properties of γ-Cu3V2O8 photoanodes suggests strategies for improving energy conversion efficiency and provides fundamental insights that can be used for understanding and evaluating function in a broader class of emerging ternary metal oxides. Article Peer Reviewed Correlating Oxidation State and Surface Area to Activity from Operando Studies of Copper CO Electroreduction Catalysts in a Gas-Fed Device Lee, SH Sullivan, I Larson, DM Liu, G Toma, FM Xiang, C Drisdell, WS et al. Recent Work (2020) The rational design of high-performance electrocatalysts requires a detailed understanding of dynamic changes in catalyst properties, including oxidation states, surface area, and morphology under realistic working conditions. Oxide-derived Cu catalysts exhibit a remarkable selectivity toward multicarbon products for the electrochemical CO reduction reaction (CORR), but the exact role of the oxide remains elusive for explaining the performance enhancements. Here, we used operando X-ray absorption spectroscopy (XAS) coupled with simultaneous measurements of the catalyst activity and selectivity by gas chromatography (GC) to study the relationship between oxidation states of Cu-based catalysts and the activity for ethylene (C2H4) production in a CO gas-fed cell. By utilizing a custom-built XAS cell, oxidation states of Cu catalysts can be probed in device-relevant settings and under high current densities (>80 mA cm-2) for the CORR. By employing an electrochemical oxidation process, we found that the Cu oxidation states and specific ion species do not correlate with C2H4 production. The difference in the CORR activity is also investigated in relation to electrochemical surface area (ECSA) changes. While the hydrogen evolution reaction (HER) activity is positively correlated to the ECSA changes, the increased C2H4 activity is not proportional to the ECSA. Ex situ characterization from microscopic techniques suggests that the changes in the C2H4 activity and selectivity may arise from a morphological transformation that evolves into a more active structure. These comprehensive results give rise to the development of a cell regeneration method that can restore the performance of the Cu catalyst without cell disassembly. Our study establishes a basis for the rational design of highly active electrocatalysts for broad-range reactions in a gas-fed device. 1 supplemental PDF Article Peer Reviewed An Operando Investigation of (Ni-Fe-Co-Ce)Ox System as Highly Efficient Electrocatalyst for Oxygen Evolution Reaction Favaro, M Drisdell, WS Marcus, MA Gregoire, JM Crumlin, EJ Haber, JA Yano, J et al. Recent Work (2017) The oxygen evolution reaction (OER) is a critical component of industrial processes such as electrowinning of metals and the chlor-alkali process. It also plays a central role in the development of the renewable energy field for the generation of solar fuels by providing both the protons and electrons needed to generate fuels such as H2 or reduced hydrocarbons from CO2. 
To improve these processes, it is necessary to expand the fundamental understanding of catalytically active species at low overpotential, which will further the development of electrocatalysts with high activity and durability. In this context, performing experimental investigations of the electrocatalysts under realistic working regimes (i.e., under operando conditions) is of crucial importance. Here, we study a highly active quinary transition-metal-oxide-based OER electrocatalyst by means of operando ambient-pressure X-ray photoelectron spectroscopy and X-ray absorption spectroscopy performed at the solid/liquid interface. We observe that the catalyst undergoes a clear chemical-structural evolution as a function of the applied potential with Ni, Fe, and Co oxyhydroxides comprising the active catalytic species. While CeO2 is redox inactive under catalytic conditions, its influence on the redox processes of the transition metals boosts the catalytic activity at low overpotentials, introducing an important design principle for the optimization of electrocatalysts and tailoring of high-performance materials. Article Peer Reviewed Bimetal-Organic Framework Self-Adjusted Synthesis of Support-Free Nonprecious Electrocatalysts for Efficient Oxygen Reduction You, B Jiang, N Sheng, M Drisdell, WS Yano, J Sun, Y et al. Recent Work (2015) The development of low-cost catalysts with oxygen reduction reaction (ORR) activity superior to that of Pt for fuel cells is highly desirable but remains challenging. Herein, we report a bimetal-organic framework (bi-MOF) self-adjusted synthesis of support-free porous Co-N-C nanopolyhedron electrocatalysts by pyrolysis of a Zn/Co bi-MOF without any post-treatments. The presence of initial Zn forms a spatial isolation of Co that suppresses its sintering during pyrolysis, and Zn evaporation also promotes the surface area of the resultant catalysts. The composition, morphology, and hence ORR activity of Co-N-C could be tuned by the Zn/Co ratio. The optimal Co-N-C exhibited remarkable ORR activity with a half-wave potential of 0.871 V versus the reversible hydrogen electrode (RHE) (30 mV more positive than that of commercial 20 wt % Pt/C) and a kinetic current density of 39.3 mA cm-2 at 0.80 V versus RHE (3.1 times that of Pt/C) in 0.1 M KOH, and excellent stability and methanol tolerance. It also demonstrated ORR activity comparable to and stability much higher than those of Pt/C in acidic and neutral electrolytes. Various characterization techniques, including X-ray absorption spectroscopy, revealed that the superior activity and strong stability of Co-N-C originated from the intense interaction between Co and N, the high content of ORR active pyridinic and pyrrolic N, and the large specific surface area. Article Peer Reviewed Determining Atomic-Scale Structure and Composition of Organo-Lead Halide Perovskites by Combining High-Resolution X-ray Absorption Spectroscopy and First-Principles Calculations Drisdell, WS Leppert, L Sutter-Fella, CM Liang, Y Li, Y Ngo, QP Wan, LF Gul, S Kroll, T Sokaras, D Javey, A Yano, J Neaton, JB Toma, FM Prendergast, D Sharp, ID et al. Recent Work (2017) © 2017 American Chemical Society. 
We combine high-energy resolution fluorescence detection (HERFD) X-ray absorption spectroscopy (XAS) measurements with first-principles density functional theory (DFT) calculations to provide a molecular-scale understanding of local structure, and its role in defining optoelectronic properties, in CH3NH3Pb(I1-xBrx)3 perovskites. The spectra probe a ligand field splitting in the unoccupied d states of the material, which lie well above the conduction band minimum and display high sensitivity to halide identity, Pb-halide bond length, and Pb-halide octahedral tilting, especially for apical halide sites. The spectra are also sensitive to the organic cation. We find that the halides in these mixed compositions are randomly distributed, rather than having preferred octahedral sites, and that thermal tilting motions dominate over any preferred structural distortions as a function of halide composition. These findings demonstrate the utility of the combined HERFD XAS and DFT approach for determining structural details in these materials and connecting them to optoelectronic properties observed by other characterization methods. Article Peer Reviewed Soft X-Ray Second Harmonic Generation as an Interfacial Probe. Lam, RK Raj, SL Pascal, TA Pemmaraju, CD Foglia, L Simoncig, A Fabris, N Miotti, P Hull, CJ Rizzuto, AM Smith, JW Mincigrucci, R Masciovecchio, C Gessini, A Allaria, E De Ninno, G Diviacco, B Roussel, E Spampinati, S Penco, G Di Mitri, S Trovò, M Danailov, M Christensen, ST Sokaras, D Weng, T-C Coreno, M Poletto, L Drisdell, WS Prendergast, D Giannessi, L Principi, E Nordlund, D Saykally, RJ Schwartz, CP et al. UC Berkeley Previously Published Works (2018) Nonlinear optical processes at soft x-ray wavelengths have remained largely unexplored due to the lack of available light sources with the requisite intensity and coherence. Here we report the observation of soft x-ray second harmonic generation near the carbon K edge (∼284 eV) in graphite thin films generated by high intensity, coherent soft x-ray pulses at the FERMI free electron laser. Our experimental results and accompanying first-principles theoretical analysis highlight the effect of resonant enhancement above the carbon K edge and show the technique to be interfacially sensitive in a centrosymmetric sample with second harmonic intensity arising primarily from the first atomic layer at the open surface. This technique and the associated theoretical framework demonstrate the ability to selectively probe interfaces, including those that are buried, with elemental specificity, providing a new tool for a range of scientific problems. Article Peer Reviewed Two-photon absorption of soft X-ray free electron laser radiation by graphite near the carbon K-absorption edge Lam, RK Raj, SL Pascal, TA Pemmaraju, CD Foglia, L Simoncig, A Fabris, N Miotti, P Hull, CJ Rizzuto, AM Smith, JW Mincigrucci, R Masciovecchio, C Gessini, A De Ninno, G Diviacco, B Roussel, E Spampinati, S Penco, G Di Mitri, S Trovò, M Danailov, MB Christensen, ST Sokaras, D Weng, TC Coreno, M Poletto, L Drisdell, WS Prendergast, D Giannessi, L Principi, E Nordlund, D Saykally, RJ Schwartz, CP et al.
https://escholarship.org/search/?q=author%3A%22Drisdell%2C%20WS%22
Q: lisp parsing for 'not'

(defun simplify (x)
  (if (and (not (null x)) (listp x))
      (if (and (equal '(car x) '(cadr x)) (equal '(car x) 'not))
          (simplify (cddr x))
          (cons (car x) (simplify (cdr x))))
      'nil))

This lisp function is meant to take an expression as an argument, then remove superfluous 'not's from it and return it. It checks whether the argument is a non-empty list and returns nil if it isn't (base case). If it is non-empty, I want to check whether car(x) = car(cdr(x)) = 'not. If they aren't detected to be a pair of 'not's, then it should recurse and build on a list to return. If they are both detected to be 'not, then it should still recurse but also skip both car(x) and car(cdr(x)). Right now all this code does is return an expression identical to the argument, so I assume the problem is that my condition in the nested if statement isn't being triggered. How can I check whether car(x) and cadr(x) are both 'not?

A: "when you assume..." Actually, the test is semi-ok (but you'll end up taking (car nil) if x is (not)). The problem is the recursion. Try it on paper: (simplify '(and (not (not y)) (or x (not (not z))))). (car x) is not not, so: (cons (car x) (simplify (cdr x))). Now x is '((not (not y)) (or x (not (not z)))), so (car x) is (not (not y)), which is not equal to not. Recurse again: now x is ((or x (not (not z)))) and (car x) is (or x (not (not z))). But you probably get the picture. Hint: (map simplify x), and fix your termination condition to return x if x is an atom.
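Following the hint in that answer, here is a minimal sketch of what a working version could look like, assuming the input is an ordinary nested s-expression such as (and (not (not y)) (or x (not (not z)))). The key fix is quoting only the symbol not, rather than the literal lists '(car x) and '(cadr x), when testing for a double negation; the recursion and termination condition follow the answer's hint. This is an illustrative rewrite, not the only possible one.

(defun simplify (x)
  ;; Termination condition from the hint: atoms (including nil) come back unchanged.
  (cond ((atom x) x)
        ;; (not (not E)) collapses to (simplify E).
        ((and (eq (car x) 'not)
              (consp (cadr x))
              (eq (car (cadr x)) 'not))
         (simplify (cadr (cadr x))))
        ;; Otherwise simplify every subform, so nested double NOTs are found too.
        (t (mapcar #'simplify x))))

With this sketch, (simplify '(and (not (not y)) (or x (not (not z))))) evaluates to (AND Y (OR X Z)), and (simplify '(not (not (not y)))) evaluates to (NOT Y).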
Q: Can Dijkstra's algorithm be applied as-is to an undirected graph?

I am wondering why Dijkstra's algorithm can't be applied as it is to undirected graphs. I mean, instead of adding 2 directed edges to make the graph equivalent to a directed one, why wouldn't the algorithm work if applied as it is to an undirected graph? So, if that is the case, can someone please give an example of an undirected graph where applying Dijkstra's algorithm as it is will give a wrong answer?

EDIT: All edge weights are non-negative. I know I can run Dijkstra's algorithm on an undirected graph by replacing each edge with 2 equivalent directed edges. What I am asking is why I can't apply the algorithm as it is without replacing each edge.

EDIT 2: OK, I may be making some huge mistake, but here is what I have in mind. I keep a heap of vertices, with each element of the heap representing the distance of that vertex from the source vertex found so far. At each step, I pull the min from the heap and declare that this distance is actually the shortest path length from the source to that vertex. Then I iterate over its neighbours and check whether their distances can be updated; if they can, I update them in the heap. Now, I don't see how the directions of the edges are relevant to this algorithm at all; I would just do the same thing on an undirected graph. Can someone please correct me if I am making some huge mistake?

A: Dijkstra's algorithm works just fine for undirected graphs. As others have pointed out, if you are calling a library function that expects a directed graph, then you must duplicate each edge; but if you are writing your own code to do it, you can work with the undirected graph directly.
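A minimal sketch of this (the function name and example graph are made up for illustration, not taken from the question): Dijkstra's algorithm run directly on an undirected adjacency list. Each undirected edge simply appears under both of its endpoints, and the algorithm itself is unchanged.

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source on an undirected graph.

    adj: dict mapping vertex -> list of (neighbor, weight) pairs.
    For an undirected graph, each edge {u, v} is listed in both
    adj[u] and adj[v]; no conversion to a directed graph is needed.
    """
    dist = {source: 0}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)  # d is now final for u
        for v, w in adj[u]:
            if v not in visited and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Example: undirected triangle plus a pendant vertex
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("a", 1), ("c", 2), ("d", 6)],
    "c": [("a", 4), ("b", 2)],
    "d": [("b", 6)],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 7}
```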
BRADLEY R. GITZ: The center doesn't hold "Asymmetrical polarization" was an overused term among political scientists during the Obama years, defined as a situation in which one party moved to the radical fringe while the other stuck close to the center. More specifically, a combination of evangelical influence on social issues and Tea Party movement influence on economic ones was thought to have pushed the Republican Party sharply to the right, leaving the Democrats to occupy the sensible, moderate and presumably more electorally advantageous middle (which also happened to be where most Democrat-leaning political scientists saw themselves). There was always a great deal wrong with this view--Barack Obama's policies were far more leftist than centrist (although often disguised by his soothing demeanor) and there were few positions that Republicans took in 2012 that Ronald Reagan would have disagreed with in 1982--but it at least had the virtue of inadvertently capturing a fundamental political truth, which is that electoral defeat and opposition status tend to produce not recalibration but ideological radicalization (just as electoral victory and the holding of power tend to produce ideological moderation). The Democratic Party moved leftward, away from Bill Clinton's successful "New Democrat" triangulation, after the disputed election of 2000, with that movement accelerating in tandem with opposition to the war in Iraq. Republicans then demonized Obama largely to the same extent as Democrats demonized George W. Bush, with Obamacare playing a role therein comparable to the role Iraq had played for Democrats. In any event, by 2016 asymmetrical polarization, to the extent it had ever really existed, had been replaced by the plain old symmetrical kind, with the Democrats further to the left than they had been when Clinton had left office and Republicans further to the right than when Bush had. More important for explaining our current predicament, neither side saw any fundamental reason to try to enhance their electoral appeal by moving back toward the center (as Clinton's New Democrats did in the early 1990s)--the Democrats had won the past two presidential elections and were supremely confident that demography was destiny and that their "coalition of the ascendant" would propel them to still more victories. They (and most everyone else) assumed right up to the night of Nov. 8, 2016, that the Clintons would be moving back into the White House. Republicans, for their part, had exploited anti-Obama sentiment to pick up more than 1,000 elected offices nationwide during his watch, including both chambers of Congress, putting them in a stronger position than at any time since the 1920s. Lest we forget, at least before they were hit by the perfect storm that was Donald Trump, the GOP had also put forth what was widely thought to be a fairly impressive field of contenders for 2016. The result was a two-party system in which each party increasingly fell back on its base over time and, with the ironic exception of Trump, de-emphasized appeals to those who weren't part of it. Lost in the polarization that has only worsened in the past two years has therefore been the likelihood that a majority of Americans are neither hard-core liberal Democrats nor hard-core conservative Republicans and consequently feel neglected by both parties; that they view both Trump and the "Resistance" with equal disdain and are shaking their heads in embarrassment at the ugly spectacle. 
If anything, the Democratic ideological shift has now become the more extreme, with the embrace of a host of positions that were once confined to the radical fringe--abortion on demand without restriction at taxpayer expense, abolition of ICE, Medicare and free college for all, a $15 minimum wage, a spectacularly expensive "Green New Deal," and so on. Occupy Wall Street, Black Lives Matter, #MeToo and what has come to be called "Trump derangement syndrome" have produced an increasingly toxic Democratic mix of unabashed socialism, environmental hysteria, and divisive identity politics. In short, while the Republican problem is Trump, the Democratic problem is now Democrats; more precisely, a party base that demands ever more radical policies (including Trump's impeachment) and which is likely to further radicalize the party as the 2020 primary approaches and the various contenders seek to "outbid" each other for base support. No Democratic candidate will want to run the risk of being outflanked on the left, with the ideological space between their positions at any given moment and what amounts to sheer wing-bat nuttiness sure to shrink. As soon as one Democratic aspirant embraces that 70 percent top marginal tax rate proposed by Alexandria Ocasio-Cortez, as will surely happen, another will go higher, to 80 percent or more. If one candidate pushes reparations for slavery as a means of corralling the often decisive black primary vote, as will also surely happen, the rest will have to embrace it as well. Donald Trump has made the Republican Party as ideologically incoherent as he is. But he has also made the Democratic Party both more ideologically coherent and radioactive. As the Democrats are about to painfully discover, the problem with being "woke" is that you can never be "woke" enough. So you must remain a constantly leftward moving target, with no stopping point. ------------v------------ Freelance columnist Bradley R. Gitz, who lives and teaches in Batesville, received his Ph.D. in political science from the University of Illinois.
Under section 235(b)(1) of the Immigration and Nationality Act (“INA”),[1] U.S. immigration officers may order certain non-citizens[2] who are arriving in the United States to be removed on an expedited basis, without any appeal or meaningful judicial review. This “expedited removal” process can lead to unfair results, but the conventional wisdom has been that it is impossible to challenge such results effectively because of the broad bars on review. Recent cases in other areas of the law, however, suggest that it may be possible to challenge some expedited removal orders and the related bar on judicial review as unconstitutional, especially in the case of certain nonimmigrants who return from a relatively brief trip abroad after having spent substantial time in the United States. The expedited removal process[3], which was created by the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (“IIRIRA”),[4] applies to “arriving aliens”[5] seeking admission at a U.S. port of entry who are said by U.S. Customs and Border Protection (“CBP”) to be inadmissible under INA § 212(a)(6)(C), regarding fraud and false claims to U.S. citizenship, or INA § 212(a)(7), regarding lack of proper documentation.[6] (It also can be applied to certain groups of people who have entered the U.S. unlawfully and been here relatively briefly,[7] but that is beyond the scope of this article.) The statute explains that in such a case, “the officer shall order the alien removed from the United States without further hearing or review unless the alien indicates either an intention to apply for asylum under [INA § 208] or a fear of persecution.”[8] Even an asylum claimant will only escape expedited removal (and instead be placed in ordinary removal proceedings under INA § 240) if an asylum officer or immigration judge finds him or her to have a “credible fear” of persecution or torture.[9] Although expedited removal is restricted by statute to cases of fraud or lack of documentation, those with facially valid nonimmigrant visas are sometimes subjected to this process if CBP does not believe that they intend to comply with the conditions of their nonimmigrant admission, but instead believes them to be intending immigrants. As the Court of Appeals for the Seventh Circuit explained in Khan v. Holder, The troubling reality of the expedited removal procedure is that a CBP officer can create the [INA § 212(a)(7), 8 U.S.C.] § 1182(a)(7) charge by deciding to convert the person’s status from a non-immigrant with valid papers to an intending immigrant without the proper papers, and then that same officer, free from the risk of judicial oversight, can confirm his or her suspicions of the person’s intentions and find the person guilty of that charge.[10] In a January 2010 web article[11] and two related postings on the Insightful Immigration Blog in January and February 2010,[12] for example, Cyrus D. 
Mehta described expedited removal orders issued against a number of H-1B nonimmigrants by CBP at Newark airport, based not on any allegation that the nonimmigrants lacked genuine visa stamps but on CBP’s objections to the nature of their employment (which CBP believed was not in compliance with H-1B status).[13] One who is removed under these expedited procedures can then be subjected to a five-year bar on re-entry to the United States,[14] although CBP can sometimes be convinced as a matter of its discretion to rescind the order after the fact and convert it retroactively to a voluntary decision to withdraw the application for admission, which carries no such bar on re-entry.[15] According to INA § 242(e), judicial review of an INA § 235 expedited removal order “is available in habeas corpus proceedings, but shall be limited to determinations of—(A) whether the petitioner is an alien, (B) whether the petitioner was ordered removed under [§ 235(b)(1)], and (C) whether the petitioner can prove . . . [he] is an alien lawfully admitted for permanent residence” or has been granted refugee status under INA § 207 or asylum under INA § 208.[16] This statutory habeas review includes “no review of whether the alien is actually inadmissible or entitled to any relief from removal.”[17] There is also a provision for challenges to the validity of the system, or written policies and procedures issued under it, to be brought in the U.S. District Court for the District of Columbia within 60 days of the challenged policy or procedure first being implemented.[18] Outside of these limited means of review, INA § 242(a)(2)(A) purports to bar any other federal court jurisdiction over INA § 235 expedited-removal orders.[19] A number of Courts of Appeals, including the Seventh Circuit in Khan, have honored these jurisdictional limitations and refused to review the substantive inadmissibility determinations made by CBP in its expedited removal orders.[20] Some recent cases in other contexts may, however, suggest a potential method to overcome these jurisdictional limitations and demonstrate why the expedited removal procedure is unconstitutional, especially with respect to certain nonimmigrants who are returning to the United States from a relatively brief trip abroad after having spent time here. The first piece of the puzzle is the Supreme Court’s 2008 decision in Boumediene v. Bush.[21] That case stemmed from the Bush Administration’s attempt to detain at Guantanamo Bay certain noncitizens said to be “enemy combatants”. Although Congress had attempted in the Military Commissions Act (“MCA”) to preclude the Guantanamo detainees from challenging their detention by a petition for a writ of habeas corpus, the Court held that this was “an unconstitutional suspension of the writ.”[22] “If the privilege of habeas corpus is to be denied to the detainees now before us,” the Court said, “Congress must act in accordance with the requirements of the Suspension Clause.”[23] That clause of the U.S. 
Constitution allows suspension of the writ of habeas corpus only “when in Cases of Rebellion or Invasion the public Safety may require it.”[24] Because Congress had not even purported to exercise its Suspension Clause power in enacting the MCA, and the alternate review procedures it had provided were not a constitutionally adequate substitute for habeas review, the Guantanamo detainees were entitled to seek the constitutionally protected writ of habeas corpus.[25] Under Boumediene, it appears that an alien detained during the expedited removal process has a right to the writ of habeas corpus as preserved by the Constitution, since Congress did not exercise its Suspension Clause power in enacting INA § 242. The question then becomes whether the statutory habeas review provided by INA § 242(e) is sufficient to fulfill this constitutional right, or whether the restrictions of INA § 242(e), in combination with the bar on other review of INA § 242(a)(2)(A), violate Boumediene. The Supreme Court held in Boumediene that “the privilege of habeas corpus entitles the prisoner to a meaningful opportunity to demonstrate that he is being held pursuant to ‘the erroneous application or interpretation’ of relevant law.”[26] This is in substantial tension with the provision of INA § 242(e) that habeas review of an expedited removal order should involve “no review of whether the alien is actually inadmissible or entitled to any relief from removal.”[27] And since the Boumediene Court actually drew this portion of its holding from an immigration case, INS v. St. Cyr[28] (which had provided for such legal review as a statutory matter and avoided the constitutional issue on the ground that Congress had in the relevant context not clearly barred habeas review[29]), the holding certainly appears applicable in the immigration context. The Boumediene Court also held that “the necessary scope of habeas review in part depends upon the rigor of any earlier proceedings” and that “[w]here a person is detained by executive order, rather than, say, after being tried and convicted in a court, the need for collateral review is most pressing.”[30] Admittedly, “[t]he intended duration of the detention and the reasons for it bear upon the precise scope of the inquiry.”[31] But the brief and summary nature of the administrative expedited removal process, with a lack of legal representation for the person being removed or any real opportunity for that person to present evidence, calls to mind the Boumediene Court’s observation that “where the underlying detention proceedings lack the necessary adversarial character, the detainee cannot be held responsible for all deficiencies in the record.”[32] Thus, it appears that a habeas court should be entitled under Boumediene to consider new evidence, as well, on review of an expedited removal order. The next logical question is whether there is something about the rights of arriving aliens which might make it less constitutionally unreasonable for Congress to have barred them from meaningful administrative or judicial review. There is a long line of authority – although it may arguably have been undercut by Boumediene – to the effect that “an alien seeking initial admission to the United States requests a privilege and has no constitutional rights regarding his application, for the power to admit or exclude aliens is a sovereign prerogative.”[33] However, as the Supreme Court recognized in the 1982 case of Landon v. 
Plasencia, “once an alien gains admission to our country and begins to develop the ties that go with permanent residence his constitutional status changes accordingly.”[34] Such an alien has a right to due process of law under the Fifth Amendment to the U.S. Constitution.[35] A January 2011 decision of the Court of Appeals for the Second Circuit, Galluzzo v. Holder,[36] recognizes that due process rights under Plasencia[37] extend beyond those who are admitted as Lawful Permanent Residents (“LPRs”)—that is, who have “green cards”. The petitioner in Galluzzo sought review of an Order of Removal that had been issued against him under INA § 217 without a hearing, based on his admission to the United States under the Visa Waiver Program (“VWP”) and the government’s allegation that he had waived his rights to a hearing under that program.[38] Based primarily on Plasencia and its above-quoted language regarding the rights of an alien who begins to develop the ties that go with permanent residence, the Court of Appeals held that “in the absence of a waiver, Galluzzo has a constitutional right to a pre-removal hearing.”[39] This was so despite the fact that the permanence of Galluzzo’s residence had not been authorized by the government: he “concede[d] that he entered the United States on a ninety-day tourist visa issued through the VWP,” and he had “stayed well beyond the permitted ninety days.”[40] We therefore know from Galluzzo that the due process right to a hearing under Plasencia extends beyond an LPR such as Ms. Plasencia. To see the relevance of this principle to certain arriving aliens and their expedited-removal cases, we must return to Plasencia itself. Ms. Plasencia had left the United States for “a few days” and was in that sense a “returning resident alien” seeking to be let back into the United States, rather than one continuously present in the United States whom the government sought to deport.[41] She was placed in exclusion proceedings, the pre-1997 equivalent of removal proceedings for arriving aliens.[42] The Supreme Court held that it was proper under the statute for the then-INS[43] to have placed Ms. Plasencia in exclusion proceedings rather than deportation proceedings in which she would receive more statutory rights, but that it was possible her constitutional right to due process of law as a returning resident alien had been violated by some aspects of the exclusion proceedings. “If the exclusion hearing is to ensure fairness,” the Court held, “it must provide Plasencia an opportunity to present her case effectively, though at the same time it cannot impose an undue burden on the government.”[44] Acknowledging Ms. Plasencia’s concern about the potentially inadequate advance notice provided her of the charges against her, and other aspects of the exclusion proceedings, the Court held that “the other factors relevant to due process analysis-the risk of erroneous deprivation, the efficacy of additional procedural safeguards, and the government’s interest in providing no further procedures-have not been adequately presented to permit us to assess the sufficiency of the hearing.”[45] The Court therefore remanded for further exploration of those issues.[46] Like Ms. Plasencia’s due process right, the constitutional right of a resident nonimmigrant to due process of law before removal should not be affected by a relatively brief departure abroad.
While the Plasencia Court spoke of a “returning resident alien,”[47] it did not indicate that such a resident alien necessarily need be a Lawful Permanent Resident (LPR) in the statutory sense, as opposed to a resident of the United States having some other status. There are some references to a “permanent resident alien” in the Plasencia decision as well,[48] but as the Second Circuit implicitly recognized in Galluzzo when it relied on Plasencia to establish the rights of a VWP overstay, an alien who has “beg[un] to develop the ties that go with permanent residence”[49] need not be an alien who has been specifically declared an LPR by statute or regulation. Indeed, given that the case law standing for the proposition that a resident alien is entitled to due process, as cited by the Plasencia Court, predates the enactment of the Immigration and Nationality Act in 1952 and its creation of the statutory rules governing lawful permanent residence as that term is currently used,[50] the constitutional rule explicated in Plasencia logically cannot be dependent on the current statutory status of an LPR. Moreover, if one who has unlawfully overstayed a brief nonimmigrant admission has due process rights under Plasencia, as the Second Circuit held in Galluzzo, then a returning nonimmigrant whose lengthy presence in the United States has been fully authorized by law should have at least as much right to due process, whether or not that lengthy presence was authorized to be “permanent” in a formal sense. If returning resident nonimmigrants who have been absent for a relatively brief period of time can claim due process rights under Plasencia, as it appears from Galluzzo they should be able to, then expedited removal procedures as they currently exist are likely to be deemed a more clearly deficient process than the exclusion proceeding at issue in Plasencia. As suggested by the Court of Appeals for the Seventh Circuit in Khan, the expedited removal process “is fraught with risk of arbitrary, mistaken, or discriminatory behavior” [51] because there is effectively no review of the decision by CBP officials who serve, one might say, as judge, jury and executioner. “When the Constitution requires a hearing,” the Supreme Court has said, “it requires a fair one, one before a tribunal which meets at least prevailing standards of impartiality.”[52] Additional procedural safeguards could be easily provided by allowing representation by counsel at an expeditious hearing before a neutral adjudicator such as an immigration judge—as was often available in exclusion proceedings before IIRIRA created the expedited removal process. Of course, it would likely be futile to seek admission to the United States and claim due process rights on the theory that one was returning to a prior unlawful residence in the U.S., along the lines of the residence conceded by the overstayed petitioner in Galluzzo. Anyone who lacked an immigrant visa, as those subjected to expedited removal generally do, and who sought to return to a prior unlawful residence as, say, a tourist in B-2 nonimmigrant status (in violation of the statutory requirement that such a nonimmigrant “hav[e] a residence in a foreign country which he has no intention of abandoning”[53]), would likely establish by his own assertions the propriety of his removal under INA § 212(a)(7) as an intending immigrant lacking proper documentation.
Thus, there would be no prejudice flowing from the challenged process and no cognizable constitutional claim.[54] But not all nonimmigrants are forbidden by law as a condition of their status to abandon their foreign residence and take up residence in the United States, and so it is possible for some nonimmigrants to be returning residents under Plasencia who seek to return to a residence in the United States that it is perfectly lawful for them to have. Although many well-known nonimmigrant categories such as that of B-2 tourist, B-1 business visitor, or F-1 student require that the nonimmigrant have a residence abroad that he or she lacks the intention of abandoning,[55] several nonimmigrant categories do not.[56] H-1B workers in a specialty occupation, L-1A international transferee managers and executives, and L-1B international transferee workers with specialized knowledge, for example, may have what is known as “dual intent”: they are exempt from the presumption of immigrant status under INA § 214(b), and are authorized by regulation to renew their nonimmigrant status while simultaneously pursuing permanent residence.[57] Moreover, although there are ordinarily time limits on total stays in H-1B and L status (6 years in H-1B status,[58] 5 years in L-1B status,[59] or 7 years in L-1A status,[60] with time in any of the three statuses counted against the total[61]), the American Competitiveness in the 21st Century Act (“AC21”) allows for extensions of time in H-1B status beyond the 6-year limit.[62] As long as it has been more than a year since the filing of an application for labor certification, or an I-140 petition, that has not been finally denied, an H-1B nonimmigrant can have her status (and ability to obtain a visa) extended for one year at a time under section 106(a) of AC21; once an I-140 petition is approved but LPR status cannot be sought due to a lack of an available immigrant visa number, the status of the H-1B nonimmigrant (and her ability to obtain an H-1B visa) can be extended for three years at a time under section 104(c) of AC21.[63] In some categories, the wait for an immigrant visa number can last many years: in the subquota for natives of India within the Employment-Based Third Preference for workers filling a job requiring a bachelor’s degree, for example, only an application for labor certification filed before March 15, 2002 will make available an immigrant visa number as of March 2011.[64] In fact, because these cutoff dates are not guaranteed to move forward in anything approaching real time, some individuals within the State Department’s Visa Office suggested in 2009 that the wait for some categories could actually be measured in “decades” and perhaps total 40 years.[65] Even assuming constant forward movement of cutoff dates in real time, however, someone for whom a labor certification was first filed after approximately five years of H-1B time could, pursuant to AC21, end up maintaining H-1B status for a total of roughly fourteen years. E-1 treaty traders, E-2 treaty investors, and O-1 aliens of extraordinary ability, meanwhile, also are not required to maintain a foreign residence which they lack the intention to abandon,[66] although their statuses do not allow pure dual intent and they remain subject to INA § 214(b).
By regulation, “[t]he approval of a permanent labor certification or the filing of a preference petition for an alien shall not be a basis for denying an O–1 petition, a request to extend such a petition, or the alien’s application for admission, change of status, or extension of stay.”[67] Similarly, the regulation governing E-1 and E-2 status, although it states that “[a]n alien classified under section 101(a)(15)(E) of the Act shall maintain an intention to depart the United States upon the expiration or termination of E–1 or E–2 status”, nonetheless provides that “an application for initial admission, change of status, or extension of stay in E classification may not be denied solely on the basis of an approved request for permanent labor certification or a filed or approved immigrant visa preference petition.”[68] That is, an E-1, E-2, or O-1 nonimmigrant may proceed towards LPR status so long as he or she intends to depart if refused a further extension of nonimmigrant status, even if the nonimmigrant’s intent is to depart in order to re-enter as soon as possible with an immigrant visa and take up LPR status. Moreover, there is no statutory or regulatory limit on the number of extensions of stay available to an E-1, E-2, or O-1 nonimmigrant, so such a nonimmigrant can lawfully remain resident in the United States for decades at a time. Thus, nonimmigrants in categories such as E-1, E-2, H-1B, L-1 and O-1, who are returning to the United States after lawfully residing here in nonimmigrant status for many years, can plausibly claim that they are returning residents with constitutional rights under Plasencia, and simultaneously maintain the validity of the nonimmigrant status which they seek admission to resume. If subjected to expedited removal based, for example, on an erroneous allegation by CBP that their past and intended future activities in the United States are inconsistent with their status,[69] such returning nonimmigrants should be able to petition for a writ of habeas corpus under Boumediene and thereby challenge the expedited removal procedure as a deprivation of their liberty without due process of law according to Plasencia and Galluzzo. The error-prone and arbitrary expedited removal process, while it may be constitutional as applied to initial entrants with no prior ties to the United States, is not a constitutionally adequate manner in which to deny a previously admitted lawful nonimmigrant resident of the United States the right to return to what may have been their home for many years. The ideal time to file a habeas petition under the theory outlined in this article would be while the petitioner was detained by CBP pending execution of the expedited removal order. Whether such a challenge might be possible following execution of an expedited removal order is a subject for further analysis, but it would at least be substantially more difficult. Classically, a constitutionally protected habeas petition would as a general matter require the petitioner to be in custody at the time the petition was filed, and a petitioner who has already been removed is not in custody, at least in the simplest and most straightforward sense of that term. CBP often allows those subject to expedited removal proceedings to contact a friend while they are detained, but discourages or prevents them from contacting attorneys, presumably on the basis that an applicant for admission lacks the right to legal representation during initial inspection. 
(The chain of logic between the lack of right to representation and a prohibition on speaking to an attorney strikes this author as a bit strained, but that is an issue for another day.) Therefore, it may be wise for any nonimmigrant who anticipates potential difficulties upon arrival to ensure that the friend or friends whom they would likely attempt to call if detained is in possession of the contact information for an appropriate immigration attorney. If concerned that CBP might not allow any communication, or that a single attempt to call while detained by CBP might not reach anyone, a more cautious alternative would be to make a plan to check in with such a friend by phone immediately after one’s flight lands, before proceeding into the immigration inspection area and the perhaps broader area in which cellphone use is prohibited, and advise that an appropriate immigration attorney should be contacted if the arriving nonimmigrant is not heard from again within a preset amount of time. [2] More precisely, only one who is not a U.S. national can be removed from the U.S., but there are few U.S. nationals who are not U.S. citizens—primarily people from American Samoa. See INA §§ 101(a)(29), 308. References to non-citizens in this article should be read to exclude noncitizen nationals. [3] In this article, references to expedited removal refer solely to the process created by INA § 235(b)(1). There is a different process created by INA § 238(b) for expedited removal of aliens who have been convicted of an aggravated felony, see INA § 237(a)(2)(A)(iii), and either have not been lawfully admitted for permanent residence or have been admitted only on a conditional basis. [5] For a definition of the term “arriving alien”, see 8 C.F.R. § 1.1(q). For purposes of expedited removal, certain persons otherwise deemed arriving aliens who used an advance parole to enter are not considered as such: the regulation states that “an arriving alien who was paroled into the United States before April 1, 1997, or who was paroled into the United States on or after April 1, 1997, pursuant to a grant of advance parole which the alien applied for and obtained in the United States prior to the alien’s departure from and return to the United States, will not be treated, solely by reason of that grant of parole, as an arriving alien under section 235(b)(1)(A)(i) of the Act.” [7] By statute, this authority can also be used, if the Attorney General chooses to issue regulations so providing, against any alien “who has not been admitted or paroled into the United States, and who has not affirmatively shown, to the satisfaction of an immigration officer, that the alien has been physically present in the United States continuously for the 2-year period immediately prior to the date of determination of inadmissibility,” except for “an alien who is a native or citizen of a country in the Western Hemisphere with whose government the United States does not have full diplomatic relations” – that is, Cuba – “and who arrives by aircraft at a port of entry.” INA § 235(b)(1)(A)(iii)(II), (b)(1)(F). Existing regulations empower the “Commissioner” (of the former INS), and now the Secretary of Homeland Security, to exercise the Attorney General’s discretionary authority to designate subclasses of aliens within this broader statutory class who are subject to expedited removal. See 8 C.F.R. § 235.3(b)(1)(ii). One such designation, issued in 2002 at 67 Fed. Reg. 
68924 and online at http://www.uscis.gov/ilink/docView/FR/HTML/FR/0-0-0-1/0-0-0-79324/0-0-0-79342/0-0-0-80383.html, designates as subject to expedited removal “aliens who arrive in the United States by sea, either by boat or other means, who are not admitted or paroled, and who have not been physically present in the United States continuously for the two-year period prior to a determination of inadmissibility by a Service officer.” Another designation, issued in August 2004 at 69 Fed. Reg. 48877 and online at http://www.uscis.gov/ilink/docView/FR/HTML/FR/0-0-0-1/0-0-0-94157/0-0-0-94177/0-0-0-94493.html, designates as subject to expedited removal “Aliens determined to be inadmissible under sections 212(a)(6)(C) or (7) of the Immigration and Nationality Act who are present in the U.S. without having been admitted or paroled following inspection by an immigration officer at a designated port-of-entry, who are encountered by an immigration officer within 100 air miles of the U.S. international land border, and who have not established to the satisfaction of an immigration officer that they have been physically present in the U.S. continuously for the fourteen-day (14-day) period immediately prior to the date of encounter.” DHS indicated in the latter designation that it “plans under this designation as a matter of prosecutorial discretion to apply expedited removal only to (1) third-country nationals and (2) to Mexican and Canadian nationals with histories of criminal or immigration violations, such as smugglers or aliens who have made numerous illegal entries.” [13] As explained in the above-referenced January 12 blog post, “Some H-1Bs have been removed because they were working at client work sites, and the position of the Customs and Border Protection officer was that the H-1B petition should have been filed by the client and not by the IT consulting company.” [14] INA § 212(a)(9)(A)(i), 8 U.S.C. § 1182(a)(9)(A)(i). This bar does not apply if DHS (formerly the Attorney General) consents to the alien’s application for readmission, see INA § 212(a)(9)(A)(iii). Such consent is generally sought on Form I-212, although it appears from the statute that in the case of a nonimmigrant, a waiver of inadmissibility under INA § 212(d)(3) should also be a possible means to resolve this issue. [18] INA § 242(e)(3), 8 U.S.C. § 1252(e)(3). The American Immigration Lawyers Association brought such a challenge when the system was first implemented, but the case was dismissed and the dismissal affirmed by the Court of Appeals for the D.C. Circuit. American Immigration Lawyers Ass’n v. Reno, 199 F.3d 1352 (C.A.D.C. 2000); American Immigration Lawyers Ass’n v. Reno, 18 F. Supp. 2d 38 (D.D.C. 1998). [19] INA § 242(a)(2)(A), 8 U.S.C. § 1252(a)(2)(A), provides in part that “Notwithstanding any other provision of law (statutory or nonstatutory), including section 2241 of title 28, or any other habeas corpus provision, and sections 1361 and 1651 of such title, no court shall have jurisdiction to review— (i) except as provided in subsection (e) of this section, any individual determination or to entertain any other cause or claim arising from or relating to the implementation or operation of an order of removal pursuant to section 1225(b)(1) of this title,” and goes on to restate the bar on review to various other aspects of § 235(b)(1). [37] Because Michael Landon, an INS District Director, was a government official, it appears better practice to cite this case in short by reference to Ms.
Plasencia, as per Rule 10.9(a)(i) of the Bluebook. [43] The Immigration and Naturalization Service was replaced by components of the Department of Homeland Security (what are now CBP, ICE, that is, Immigration and Customs Enforcement, and USCIS, that is, U.S. Citizenship and Immigration Services) in 2003. [46] On remand, the Court of Appeals for the Ninth Circuit remanded back to the district court. Plasencia v. District Director, INS, 719 F.2d 1425 (9th Cir. 1983). The final disposition of the case is not apparent in the Westlaw database of cases. [54] See Galluzzo, 2011 WL 222343 at *3. The presence or absence of prejudice in Galluzzo was complicated by the potential availability of relief, such as an application for adjustment of status to permanent resident under INA § 245, that cannot be obtained in most instances by an arriving alien who has not been admitted or paroled into the United States. [57] See 8 C.F.R. §§ 204.2(h)(16), 245.2(a)(4)(C); see also INA § 214(h) (stating, somewhat superfluously, that seeking permanent residence in the United States “shall not constitute evidence of an intention to abandon a foreign residence for purposes of obtaining a visa as a nonimmigrant described in subparagraph (H)(i)(b) or (c), (L), or (V) of section 101(a)(15) or otherwise obtaining or maintaining the status of a nonimmigrant described in such subparagraph, if the alien had obtained a change of status under section 248 to a classification as such a nonimmigrant before the alien’s most recent departure from the United States.”)

David Isaacson, “Can Some Returning Nonimmigrants Challenge an Expedited Removal Order in Court? How Recent Case Law May Provide a Window of Opportunity” (Feb. 24, 2011).
---
abstract: 'Calibrating the photometric redshifts of $\gtrsim10^{9}$ galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the *self-organizing map*, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey selected to approximate the anticipated *Euclid* weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected *Euclid* filters. Mapping this multicolor distribution lets us determine where – in galaxy color space – redshifts from current spectroscopic surveys exist and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color-redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the *Euclid* survey, similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and *WFIRST*.'
author:
- 'Daniel Masters, Peter Capak, Daniel Stern, Olivier Ilbert, Mara Salvato, Samuel Schmidt, Giuseppe Longo, Jason Rhodes, Stephane Paltani, Bahram Mobasher, Henk Hoekstra, Hendrik Hildebrandt, Jean Coupon, Charles Steinhardt, Josh Speagle, Andreas Faisst, Adam Kalinich, Mark Brodwin, Massimo Brescia, Stefano Cavuoti'
bibliography:
- 'biblio.bib'
title: 'Mapping the Galaxy Color-Redshift Relation: Optimal Photometric Redshift Calibration Strategies for Cosmology Surveys'
---

Introduction
============

Upcoming large-scale surveys such as LSST, *Euclid* and *WFIRST* will measure the three-dimensional cosmological weak lensing shear field from broadband imaging of billions of galaxies. Weak lensing is widely considered to be one of the most promising probes of the growth of dark matter structure, as it is sensitive to gravitation alone and requires minimal assumptions about the coupling of dark matter and baryons [@Bartelmann01; @Weinberg13]. Moreover, weak lensing tomography is sensitive to the dark energy equation of state through its impact on the growth of structure with time [@Hu99]. However, it is observationally demanding: in addition to requiring accurately measured shapes for the weak lensing sample, robust redshift estimates to the galaxies are needed in order to reconstruct the three-dimensional matter distribution. Because it is infeasible to obtain spectroscopic redshifts (spec-z’s) for the huge numbers of faint galaxies these studies will detect, photometric redshift (photo-z) estimates derived from imaging in some number of broad filters will be required for nearly all galaxies in the weak lensing samples.
Photo-z estimation has become an indispensable tool in extragalactic astronomy, as the pace of galaxy detection in imaging surveys far outstrips the rate at which follow-up spectroscopy can be performed. While photo-z techniques have grown in sophistication in recent years, the requirements for cosmology present novel challenges. In particular, cosmological parameters derived from weak lensing are sensitive to small, systematic errors in the photo-z estimates [@Ma06; @Huterer06]. Such biases are generally much smaller than the random scatter in photo-z estimates [@Dahlen13], and are of little consequence for galaxy evolution studies; however, they can easily dominate all other uncertainties in weak lensing experiments [@Newman15]. In addition to weak lensing cosmology, accurate and well-characterized photo-z’s will be crucial to other cosmological experiments. For example, baryon acoustic oscillation (BAO) experiments that rely on redshifts measured from faint near-infrared grism spectra will often have to resort to photo-z’s in order to determine the correct redshift assignment for galaxies with only a single detected line. Well-characterized photo-z estimates will be needed to correctly account for any errors thus introduced. There are two key requirements placed on the photo-z estimates for weak lensing cosmology. First, redshift estimates for individual objects must have sufficient precision to correct for intrinsic galaxy shape alignments as well as other potential systematics arising from physically associated galaxies that may affect the interpretation of the shear signal. While not trivial, meeting the requirement on the precision of individual photo-z estimates ($\sigz < 0.05(1+z)$ for *Euclid*, [@Laureijs11]) should be achievable [@Hildebrandt10]. The second, more difficult, requirement is that the overall redshift distributions $N(z)$ of galaxies in $\sim$10–20 tomographic bins used for the shear analysis must be known with high accuracy. Specifically, the mean redshift $\meanz$ of the $N(z)$ distribution must be constrained to better than $2\times10^{-3}(1+z)$ in order to interpret the amplitude of the lensing signal and achieve acceptable error levels on the cosmological parameter estimates [@Huterer06; @Amara07; @Laureijs11]. Small biases in the photo-z estimates, or a relatively small number of objects with catastrophically incorrect photo-z’s, can cause unacceptably large errors in the estimated $N(z)$ distribution. Photo-z estimates alone are not sufficient to meet this requirement, and spectroscopic calibration samples will be needed to ensure low bias in the $N(z)$ estimates. The significant difficulties associated with this requirement are summarized by @Newman15. The most straightforward approach to constrain $N(z)$ is to measure it directly by random spectroscopic sampling of galaxies in each tomographic redshift bin [@Abdalla08]. The total number of spectra needed to meet the requirement is then set by the central limit theorem. For upcoming “Stage IV” cosmology surveys (LSST, *Euclid*, and *WFIRST*) it is estimated that direct measurement of $N(z)$ for the tomographic bins would require total spectroscopic samples of $\sim$30,000–100,000 galaxies, fully representative in flux, color, and spatial distribution of the galaxies used to measure the weak lensing shear field (e.g., [@Ma08], [@Hearin12]). 
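As a rough, purely illustrative order-of-magnitude check (not a figure taken from the studies cited above): the central limit theorem implies that estimating the mean redshift of a bin from $N_{spec}$ randomly sampled spectra carries an uncertainty of $\sigma/\sqrt{N_{spec}}$, where $\sigma$ is the spread of true redshifts within the bin, so a requirement of $2\times10^{-3}(1+z)$ on the mean implies $$N_{spec} \gtrsim \left[\frac{\sigma}{2\times10^{-3}(1+z)}\right]^{2} \approx 625 \quad \mathrm{per\ bin\ for}\ \sigma \sim 0.05(1+z),$$ before accounting for catastrophic outliers, incomplete spectroscopic success, and the need to characterize more than just the mean of each bin, all of which drive the full published estimates higher.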
Moreover, the spectroscopic redshifts would need to have a very high success rate ($\gtrsim$99.5%), with no subpopulation of galaxies systematically missed in the redshift survey. @Newman15 note that current deep redshift surveys fail to obtain secure redshifts for $\sim$30–60% of the targeted galaxies; given the depths of the planned dark energy surveys, this “direct” method of calibrating the redshifts seems to be unfeasible. Because of the difficulty of direct spectroscopic calibration, @Newman15 argue that the most realistic method of meeting the requirements on $N(z)$ for the dark energy experiments may be some form of spatial cross-correlation of photometric samples with a reference spectroscopic sample, with the idea that the power in the cross-correlation will be highest when the samples match in redshift [@Newman08; @Schmidt13; @Rahman15]. This approach shows significant promise, but is not without uncertainties and potential systematics. For example, it requires assumptions regarding the growth of structure and galaxy bias with redshift, which may be covariant with the cosmological inferences drawn from the weak lensing analysis itself. Further work may clarify these issues and show that the technique is indeed viable for upcoming cosmological surveys. However, it seems safe to say that this method cannot *solely* be relied on for the weak lensing missions, particularly as at least two approaches will be needed: one to calibrate $N(z)$ for the tomographic bins, and another to test and validate the calibration. In light of these arguments, it is clear that targeted spectroscopic training and calibration samples will have to be obtained to achieve the accuracy in the $\meanz$ estimates of tomographic bins required by the weak lensing missions. Moreover, careful optimization of these efforts will be required to make the problem tractable. Here we present a technique, based on the simple but powerful *self-organizing map* [@Kohonen82; @Kohonen90], to map the empirical distribution of galaxies in the multidimensional color space defined by a photometric survey. Importantly, this technique provides us with a completely data-driven understanding of what constitutes a representative photometric galaxy sample. We can thereby evaluate whether a spectroscopic sample used for training and calibration spans the full photometric parameter space; if it does not, there will be regions where the photo-z results are untested and untrained. Machine learning–based photo-z algorithms, in particular, depend critically on representative spectroscopic training sets, and their performance will be degraded in regions of color space without spectroscopic coverage [@Collister04; @Hoyle15]. We show that the empirical color mapping described here can be used to optimize the training and calibration effort by focusing spectroscopic effort on regions of galaxy parameter space that are currently poorly explored, as well as regions with a less certain mapping to redshift. Alternatively, we can use the technique to identify and discard specific regions of color space for which spectroscopy will prove to be too expensive, or for which the redshift uncertainty is too large. In effect, the method lets us systematize our understanding of the mapping from color to redshift. By doing so, the number of spectroscopic redshifts needed to calibrate $N(z)$ for the weak lensing tomographic bins can be minimized. This approach will also naturally produce a “gold standard” training sample for machine learning algorithms. 
The technique we adopt also provides insight into the nature of catastrophic photo-z failures by illustrating regions of color space in which the mapping between color and redshift becomes degenerate. This is possible because the self-organized map is topological, with nearby regions representing similar objects, and widely separated regions representing dissimilar ones. In addition, template-fitting photo-z codes can potentially be refined with the map, particularly through the development of data-based priors and by using the empirical color mapping to test and refine the galaxy template sets used for fitting.

Here our focus is on the *Euclid* survey, one of the three Stage IV dark energy surveys planned for the next decade, the other two being LSST and *WFIRST*. *Euclid* will consist of a 1.2 meter space telescope operating at L2, which will be used to measure accurate shapes of galaxies out to $z$$\sim$2 over $\sim$15,000 deg$^2$ with a single, broad (*riz*) filter. These observations will reach an AB magnitude of $\simeq$24.5 (10$\sigma$). In addition to these observations, a near-infrared camera on *Euclid* will obtain *Y*, *J*, and *H* band photometry to AB magnitude $\simeq$24 (5$\sigma$), which, together with complementary ground-based optical data, will be used for photo-z determination. The mission will also constrain cosmological parameters using BAO and redshift space distortions (RSD), using redshifts obtained with a low-resolution grism on the near-infrared camera. A more detailed description of the survey can be found in @Laureijs11. For this work, we assume that *Euclid* will obtain $ugrizYJH$ photometry for photo-z estimation. We select galaxies from the COSMOS survey [@Scoville07] that closely approximate the *Euclid* weak lensing sample, with photometry in similar bands and at similar depths as the planned *Euclid* survey. While our focus is on *Euclid*, the method we present is general and directly applicable to other weak lensing surveys facing the same calibration problem.

This paper is organized as follows. In §2 we give an overview of the methodology used to map the galaxy multicolor space. In §3 we discuss the galaxy sample from the COSMOS survey used to approximate the anticipated *Euclid* weak lensing sample. In §4 we describe the self-organizing map algorithm and its implementation for this application. In §5 we discuss the map in detail, including what it reveals about the current extent of spectroscopic coverage in galaxy multicolor space. In §6 we address the problem of determining the spectroscopic sample needed to meet the weak lensing requirement, and in §7 we conclude with a discussion.

Overview: Quantifying the Empirical Distribution of Galaxies in Color Space
===========================================================================

Galaxies with imaging in a set of $N$ filters will follow some distribution in the multidimensional space (of dimension $N-1$) defined by the unique colors measured by the filters. These colors together determine the shape of the low-resolution spectral energy distribution (SED) measured by the filters. Henceforth, we will call the position a galaxy occupies in color space simply its color, or $\vec{C}$. For example, the *Euclid* survey is expected to have eight bands of photometry ($ugrizYJH$), and therefore a galaxy’s position in color space is uniquely determined by seven colors: $u-g, g-r, ..., J-H$.
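As a purely illustrative aside (not part of the original analysis; the magnitude values and band ordering below are hypothetical), forming this color vector from catalog magnitudes is simply a differencing of adjacent bands:

```python
import numpy as np

# Hypothetical magnitude table: one row per galaxy, one column per band,
# in an assumed Euclid-like filter order u, g, r, i, z, Y, J, H.
mags = np.array([
    [24.1, 23.8, 23.2, 22.9, 22.7, 22.5, 22.3, 22.1],
    [25.0, 24.6, 24.4, 24.3, 24.2, 24.0, 23.9, 23.9],
])

# Adjacent-band colors u-g, g-r, ..., J-H: N bands give N-1 colors.
colors = mags[:, :-1] - mags[:, 1:]
print(colors.shape)  # (2, 7)
```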
Galaxy color is the primary driver of photometric redshift estimates: template-based methods predict $\vec{C}$ for different template/redshift/reddening combinations and assign redshifts to galaxies based on where the models best fit the observed photometry, while machine learning methods assume the existence of a mapping from $\vec{C}$ to redshift, and attempt to discover it using spectroscopic training samples. Our goal here is to empirically map the distribution of galaxies in the color space defined by the anticipated *Euclid* broadband filters. We refer to this distribution as $\rho{(\vec{C})}$. Once we understand how galaxies are distributed in color space, optimal methods of sampling the distribution with spectroscopy can be developed to make an informed calibration of the color-redshift relation.

The general problem of mapping a high-dimensional data distribution arises in many fields. Because the volume of the data space grows exponentially with the number of dimensions, data rapidly becomes sparse as the dimensionality increases. This effect – the so-called “curse of dimensionality” [@Bellman57] – makes normal data sorting strategies impractical. A number of algorithms, collectively referred to as nonlinear dimensionality reduction (NLDR), have been developed to address this problem by projecting high-dimensional data onto a lower-dimensional representation, thus facilitating visualization and analysis of relationships that exist in the data. We adopt the self-organizing map algorithm, described in more detail in §4. As emphasized by @Geach12, self-organized mapping is a powerful, empirical method to understand the multidimensional distributions common in modern astronomical surveys. Two primary motivations for choosing this technique over others are the relative simplicity of the algorithm and the highly visual nature of the resulting map, which facilitates human understanding of the data.

Approximating the *Euclid* Weak Lensing Sample with COSMOS Data
===============================================================

We use multiwaveband data from the COSMOS survey [@Capak07] to provide a close approximation to the expected *Euclid* weak lensing data. Photo-z estimates for the *Euclid* sample will rely on three near-infrared filters on the telescope ($YJH$), reaching an AB depth of 24 mag (5$\sigma$) for point sources, as well as complementary ground-based imaging in the optical, which we assume will consist of $ugriz$ imaging with LSST (in the northern sky the ground-based imaging data may be restricted to $griz$, affecting the analysis somewhat but not changing the overall conclusions). To provide a close analog to the expected *Euclid* data, we use COSMOS $u$ band imaging from CFHT, $griz$ imaging from Subaru Suprime Cam, and $YJH$ imaging from the UltraVista survey [@McCracken12], spanning a 1.44 deg$^{2}$ patch of COSMOS with highly uniform depth. We apply a flux cut to the average flux measured across the Subaru $r$, $i$ and $z$ bands to match the expected depth limit of the single, broad visible filter *Euclid* will use for the weak lensing shear measurement. The resulting “Euclid analog” sample consists of 131,609 objects from COSMOS.

![image](fig1){width="\linewidth"}

Mapping Galaxy Color Space with the Self-Organizing Map
=======================================================

The self-organizing map (SOM, [@Kohonen82; @Kohonen90]) is a neural network model widely used to map and identify correlations in high-dimensional data.
Its use for some astronomical applications has been explored previously (see, e.g., [@Naim97; @Brett04; @Way12; @Fustes13; @Kind14]). The algorithm uses unsupervised, competitive learning of “neurons” to project high-dimensional data onto a lower-dimensional grid. The SOM algorithm can be thought of as a type of nonlinear principal component analysis, and is also similar in some respects to the k-means clustering algorithm [@MacQueen67]. In contrast to these and other methods, the SOM preserves the topology of the high-dimensional data in the low-dimension representation. Similar objects are thus grouped together on the self-organized map, and clusters that exist in the high-dimensional data space are reflected in the lower-dimensional representation. This feature makes the maps visually understandable and thus useful for identifying correlations that exist in high-dimensional data. More detailed descriptions of the algorithm and its variants can be found in a number of references (see, e.g., [@Vesanto02; @Kind14]). ![image](fig2){width="\linewidth"} \[figure:colors\] The SOM consists of a fixed number of cells arranged on a grid. The grid can be of arbitrary dimension, although two-dimensional grids are most common as they are the easiest to visualize. Each cell in the grid is assigned a weight vector $\vec{w}$ having the same number of dimensions as the training data. This vector can be thought of as pointing to a particular region of the multidimensional parameter space occupied by the data. The weight vectors are initialized prior to training, either randomly or by sampling from the input data. The training of the map is unsupervised, in the sense that the output variable of interest (here, redshift) is not considered. Only the input attributes (galaxy photometry) drive the training. We note that any measured galaxy property (size, magnitude, shape, environment, surface brightness, etc.) could be used in the training. We consider only colors here, as these are the primary drivers of the photo-z estimates, and the quantities most physically tied to redshift. The other properties mentioned can still be used after the map has been created to identify and help break redshift degeneracies within particular regions of galaxy color space. Training proceeds by presenting the map with a random galaxy from the training sample, which the cells “compete” for. The cell whose weight vector most closely resembles the training galaxy is considered the winner, and is called the Best Matching Unit, or BMU. The BMU as well as cells in its neighborhood on the map are then modified to more closely resemble the training galaxy. This pattern is repeated for many training iterations, over which the responsiveness of the map to new data gradually decreases, through what is known as the learning rate function. Additionally, the extent of the neighborhood around the BMU affected by new training data shrinks with iteration number as well, through what is known as the neighborhood function. These effects cause the map to settle to a stable solution by the end of the training iterations. To compute the winning cell for a given training object, a distance metric must be chosen. Most often, the Euclidean distance between the training object $\vec{x}$ and the cell weight vector $\vec{w_{k}}$ is used. 
With data of dimension $m$, this distance is given by: $$d^{2}_{k} = d^{2}_{k}(\vec{x},\vec{w_{k}}) = \sum\limits_{i=1}^{m}(x_{i}-w_{k,i})^{2}$$ However, dimensions with intrinsically larger error than others will be overweighted in this distance metric. To account for this, we instead use the reduced $\chi^{2}$ distance between the training object and the cell weight vector. With $\sigma_{x_{i}}$ representing the uncertainty in the $i^{\mathrm{th}}$ component of $\vec{x}$, this becomes: $$d^{2}_{k} = d^{2}_{k}(\vec{x},\vec{w_{k}}) = \frac{1}{m}\sum\limits_{i=1}^{m}\frac{(x_{i}-w_{k,i})^{2}}{\sigma_{x_{i}}^{2}}$$ The BMU is the cell minimizing the $\chi^{2}$ distance. Once the BMU has been identified, the weight vectors of cells in the map are updated with the relation: $$\vec{w_{k}}(t+1)=\vec{w_{k}}(t)+a(t)H_{b,k}(t)[\vec{x}(t)-\vec{w_{k}}(t)]$$ Here $t$ represents the current timestep in the training. The learning rate function $a(t)$ is a monotonically decreasing function of the timestep (with $a(t)\leq1$), such that the SOM becomes progressively less responsive to new training data. With $N_{iter}$ representing the total number of training iterations, we adopt the following functional form for $a(t)$: $$a(t) = 0.5^{(t/N_{iter})}$$ The term $H_{b,k}(t)$ is the value of the neighborhood function at the current timestep for cell $k$, given that the current BMU is cell $b$. This function is encoded as a normalized Gaussian kernel centered on the BMU: $$H_{b,k}(t) = e^{-D^{2}_{b,k}/\sigma^{2}(t)}$$ Here $D_{b,k}$ is the Euclidean distance on the map separating the $k^{\mathrm{th}}$ cell and the current BMU. The width of the Gaussian neighborhood function is set by $\sigma(t)$ and is given by $$\sigma(t) = \sigma_{s} (1/\sigma_{s})^{(t/N_{iter})}$$ The starting value, $\sigma_{s}$, is large enough that the neighborhood function initially encompasses most of the map. In practice, we set $\sigma_{s}$ equal to the size (in pixels) of the smaller dimension of the rectangular map. The width of the neighborhood function shrinks by the end of training such that only the BMU and cells directly adjacent to it are significantly affected by new data. Optimizing the map for the photo-z problem ------------------------------------------ There is significant flexibility in choosing the parameters of the SOM. Parameters that can be modified include the number of cells, the topology of the map, the number of training iterations, and the form and evolution of the learning rate and neighborhood functions. Perhaps most influential is the number of cells. The representative power of the map increases with more cells; however, if too many cells are used the map will overfit the data, modeling noise that does not reflect the true data distribution. Moreover, there is a significant computational cost to increasing the number of cells. On the other hand, if too few cells are used, individual cells will be forced to represent larger volumes of color space, in which the mapping of color to redshift is less well defined. We explored a range of alternatives prior to settling on the map shown throughout this work. A rectangular map was chosen because this gives any principal component in the data a preferred dimension along which to align. Our general guideline in setting the number of cells was that the map should have sufficient resolution such that the individual cells map cleanly to redshift using standard photo-z codes.
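To make the training procedure concrete, the sketch below implements one possible version of the update defined by Equations (2)–(6). The authors' actual implementation is in C; this Python version is purely illustrative, and the 75×150 grid is an assumed split of the map dimensions rather than a value quoted in the text. Handling of missing colors and magnitude limits, described in the implementation subsection, is omitted here.

```python
import numpy as np

def train_som(colors, color_errs, n_rows=75, n_cols=150, n_iter=2_000_000, seed=0):
    """Train a rectangular SOM on galaxy colors (shape [n_gal, m]).

    Uses the reduced chi^2 BMU search of Eq. (2), the learning-rate function
    a(t) = 0.5**(t/N_iter) of Eq. (4), and the Gaussian neighborhood of Eq. (5)
    whose width shrinks as sigma(t) = sigma_s * (1/sigma_s)**(t/N_iter), Eq. (6).
    """
    rng = np.random.default_rng(seed)
    n_gal, m = colors.shape
    # Cell weight vectors, initialized from a standard normal distribution.
    weights = rng.standard_normal((n_rows, n_cols, m))
    # Grid coordinates of every cell, used for the map distances D_{b,k}.
    rows, cols = np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij")
    sigma_s = min(n_rows, n_cols)  # neighborhood initially spans most of the map

    for t in range(n_iter):
        g = rng.integers(n_gal)          # random training galaxy (with replacement)
        x, xerr = colors[g], color_errs[g]

        # Reduced chi^2 distance between the galaxy and every cell (Eq. 2).
        chi2 = np.sum((x - weights) ** 2 / xerr ** 2, axis=-1) / m
        b = np.unravel_index(np.argmin(chi2), chi2.shape)   # best matching unit

        # Learning rate and neighborhood width at this timestep (Eqs. 4 and 6).
        a_t = 0.5 ** (t / n_iter)
        sigma_t = sigma_s * (1.0 / sigma_s) ** (t / n_iter)

        # Gaussian neighborhood around the BMU on the map grid (Eq. 5).
        D2 = (rows - b[0]) ** 2 + (cols - b[1]) ** 2
        H = np.exp(-D2 / sigma_t ** 2)

        # Move cells toward the training galaxy (Eq. 3).
        weights += a_t * H[..., None] * (x - weights)

    return weights
```

Updating every cell at every iteration, as written here, is the simplest (and slowest) variant; truncating the neighborhood once $\sigma(t)$ becomes small is a common optimization, and is one reason a compiled implementation is preferable for $2\times10^{6}$ iterations.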
With 11,250 cells, the map bins galaxies into volumes, or “voxels”, of color space comparable in size to the photometric error on the data, so that variations within each color cell generally do not produce significant changes in photo-z estimates. As we discuss in §6, the true spread in galaxy redshifts within each color cell is an important quantity to understand for the calibration of $N(z)$. Algorithm implementation ------------------------ We implemented the SOM algorithm in C for computational efficiency. The number of computations required is sizable and scales with both the total number of cells and the number of training iterations. Optimizations are certainly possible, and may be necessary if this algorithm is to be applied to much larger photometric datasets. We initialized the values of the cell weight vectors with random numbers drawn from a standard normal distribution. The number of training iterations used was $2\times10^{6}$, as only minimal improvements in the map were observed for larger numbers of iterations. At each iteration, a random galaxy was selected (with replacement) from the training sample to update the map. We applied the algorithm based on seven galaxy colors: $u-g$, $g-r$, $r-i$, $i-z$, $z-Y$, $Y-J$, and $J-H$, which are analogous to the colors that will be measured by *Euclid* and used for photo-z estimation. The errors in the colors are computed by adding the photometric errors of the individual bands in quadrature. If a training object has a color that is not constrained due to bad photometry in one or both of the relevant bands, we ignore that color in the training iteration. Only the well-measured colors for that object are used both to find the BMU and update the corresponding colors of the cell weight vectors. If a color represents an upper/lower limit, we penalize the $\chi^{2}$ distance for cells that violate the limit when computing the BMU, with a penalty that varies depending on the size of the discrepancy between the limit and the cell color value. ![The SOM colored by the number of galaxies in the overall sample associating with each color cell. The coloration is effectively our estimate of $\rho{(\vec{C})}$, or the density of galaxies as a function of position in color space.[]{data-label="figure:occupation"}](fig3){width="0.95\linewidth"} Assessing map quality --------------------- Ideally, the SOM should be highly representative of the data, in the sense that the SEDs of most galaxies in the sample are well-approximated by some cell in the map. To assess the representativeness of the map we calculate what is known as the average quantization error over the entire training sample of $N$ objects: $$\epsilon_{q} = \frac{1}{N}\sum\limits_{i=1}^N ||\mathbf{x_{i}}-\mathbf{b_{i}}||$$ Here $\mathbf{b_{i}}$ is the best matching cell for the *i*$^{\mathrm{th}}$ training object. We find that the average quantization error is 0.2 for the sample. The quantization error is the average vector distance between an object and its best-matching cell in the map. Therefore, with seven colors used to generate the map, the average offset of a particular color (e.g., $g-r$) of a given galaxy from its corresponding cell in the map is $0.2/\sqrt{7}=0.08$ mag. Note that the map provides a straightforward way of identifying unusual or anomalous sources. Such objects will be poorly represented by the map due to their rarity – in effect, they are unable to train their properties into the SOM.
Simply checking whether an object is well represented by some cell in the map is therefore a way of testing whether it is “normal”, and may be useful for flagging, for example, blended objects, contaminated photometry, or truly rare sources. Analyzing the Color Space Map ============================= Figure \[figure:som\] provides an overview of the SOM generated from COSMOS galaxies, which encodes the 8-band SEDs that appear in the data with non-negligible frequency. Note that the final structure of the map is to some extent random and depends on the initial conditions combined with the order in which training objects are presented, but the overall topological structure will be similar from run to run; this was verified by generating and comparing a number of maps. Figure 2 illustrates the variation of two colors ($u-g$ and $g-r$) across the map, demonstrating how these features help drive the overall structure. In the following analysis we probe the map by analyzing the characteristics of the galaxies that associate best with each cell in color space. ![image](fig4){width="\linewidth"} \[figure:photz\] The distribution of galaxies in color space, $\rho{(\vec{C})}$ -------------------------------------------------------------- In Figure \[figure:occupation\] we show the self-organized map colored by the number of galaxies associating best with each cell. This coloration is effectively our estimate of $\rho{(\vec{C})}$, the density of galaxies as a function of position in color space. An important caveat is that the density estimate derived from the COSMOS survey data is likely to be affected to some degree by cosmic variance (and perhaps, to a lesser extent, by shot noise). The true $\rho{(\vec{C})}$ can ultimately be constrained firmly with the wide-area survey data from LSST, *Euclid*, and *WFIRST*. However, the COSMOS-based $\rho{(\vec{C})}$ should be a close approximation of what the full surveys will find. Photometric redshift estimates across the map --------------------------------------------- Because the cells in the self-organizing map represent galaxy SEDs that appear in the data, we can compute photometric redshifts for them to see how they are distributed in redshift. We used the *Le Phare* template fitting code [@Arnouts99; @Ilbert06] to compute cell photo-z’s. We used the cell weight vectors (converting the colors to photometric magnitudes normalized in $i$-band) as inputs for *Le Phare*, assigning realistic error bars to these model SEDs based on the scatter in the photometry of galaxies associated with each cell. The result of the photo-z fitting is shown on the left side of Figure 4. We also estimate redshifts on the map by computing the median photo-z of the galaxies associated with each cell, using the 30-band photo-z estimates provided by the COSMOS survey [@Ilbert09]. These photo-z estimates take advantage of more photometric information than is contained in the eight *Euclid*-like filters used to generate the map. Nevertheless, as can be seen on the right side of Figure 4, the resulting map is quite smooth, indicating that the eight *Euclid* bands capture much of the relevant information for photo-z estimation contained in the 30-band data. Redshift probability density functions (PDFs) generated by the *Le Phare* template fitting can be used to estimate redshift uncertainty across the map, letting us identify cells that have high redshift variance or multiple redshift solutions, as well as cells with a well-defined mapping to redshift. 
In Figure \[figure:photz\_disp\] we show the photo-z dispersion results from the *Le Phare* code. The dispersion is the modeled uncertainty in the redshift assigned to each cell, based on the spread in the cell’s redshift PDF. Figure \[figure:photz\_disp\] shows that there are well-defined regions in which the modeled uncertainties are much higher, and that these regions tend to cluster around sharp boundaries between low- and high-redshift galaxies. Note that these boundaries are inherent to the data and indicate regions of significant redshift degeneracy. A possible improvement in this analysis is to more rigorously estimate the photometric uncertainty for each cell using a metric for the volume of color space it represents; we defer this more detailed analysis to future work. Current spectroscopic coverage in COSMOS ---------------------------------------- One of the most important results of the mapping is that it lets us *directly* test the representativeness of existing spectroscopic coverage. To do so, we used the master spectroscopic catalog from the COSMOS collaboration (Salvato et al. 2016, in prep). The catalog includes redshifts from VLT VIMOS (zCOSMOS, [@Lilly07]; VUDS, [@LeFevre15]), Keck MOSFIRE (Scoville et al. 2015 in prep; MOSDEF, [@Kriek15]), Keck DEIMOS ([@Kartaltepe10], Hasinger et al. 2015, in prep), Magellan IMACS [@Trump07], Gemini-S [@Balogh14], Subaru FMOS [@Silverman15], as well as a non-negligible fraction of sources provided by a number of smaller programs. It is important to note that the spectroscopic coverage of the COSMOS field is not representative of the typical coverage for surveys. Multiple instruments with different wavelength coverages and resolutions were employed. Moreover, the spectroscopic programs targeted different types of sources: from AGN to flux-limited samples, from group and cluster members to high-redshift candidates, etc., providing exceptional coverage of parameter space. In the left panel of Figure \[figure:specz\], we show the map colored by the median spectroscopic redshift of galaxies associated with each cell, using only galaxies with the highest confidence redshift assignments (corresponding to $\sim$100% certainty). The gray regions on the map correspond to cells of color space for which no galaxies have such high confidence spectroscopic redshifts; 64% of cells fall in this category. In the right panel of Figure \[figure:specz\] we show the same plot, but using all confidence $\gtrsim$95% redshifts in the master catalog. Significantly more of the galaxy color space is covered with spectroscopy when the requirement on the quality of the redshifts is relaxed, with only 51% of color cells remaining gray. However, for calibration purposes very high confidence redshifts will be needed, so that the right-hand panel may be overly optimistic. As can be seen in both panels, large and often continuous regions of galaxy color space remain unexplored with spectroscopy. It should be noted that Figure \[figure:specz\] is entirely data-driven, demonstrating the direct association of observed SED with observed redshift. An interesting possibility suggested by this figure is that the color-redshift relation may be smoother than expected from photo-z variance estimates from template fitting (e.g., Figure \[figure:photz\_disp\]). High intrinsic variance in the color-redshift mapping should result in large cell-to-cell variation in median spec-z, whereas the actual distribution appears to be rather smooth overall.
![The dispersion in the photo-z computed with the *Le Phare* template fitting code as a function of color cell. As can be seen, high dispersion regions predominantly fall in localized areas of color space near the boundary separating high and low redshift galaxies.[]{data-label="figure:photz_disp"}](fig5){width="0.95\linewidth"} ![image](fig6){width="\linewidth"} ![The map colored by the median *i*-band magnitude (AB) of galaxies associating with each cell. The strong variation of magnitude with color is not unexpected, and largely explains the absence of spectra in particular regions of galaxy color space.[]{data-label="figure:magnitude"}](fig7){width="0.95\linewidth"} Magnitude variation across color space -------------------------------------- Not surprisingly, the median galaxy magnitude varies strongly with location in color space, as illustrated in Figure \[figure:magnitude\]. This variation largely determines the regions of color space that have been explored with spectroscopy, with intrinsically fainter galaxies less likely to have been observed. In fact, as we will discuss further in §6.6, the majority of galaxies in unexplored regions of color space are faint, star-forming galaxies at $z\sim0.2-1.5$, which are simply too “uninteresting” (from a galaxy evolution standpoint) to have been targeted in current spectroscopic surveys. Such sources will, however, be critically important for weak lensing cosmology. Toward Optimal Spectroscopic Sampling Strategies for Photo-z Calibration ======================================================================== We have demonstrated that the self-organizing map, when applied to a large photometric dataset, efficiently characterizes the distribution of galaxies in the parameter space relevant for photo-z estimation. We now consider the problem of determining the spectroscopic sample needed to calibrate the $\meanz$ of the tomographic redshift bins to the required level for weak lensing cosmology. We show that allocating spectroscopic efforts using the color space mapping can minimize the spectroscopy needed to reach the requirement on the calibration of $N(z)$. Estimating the spectroscopic sample needed for calibration ---------------------------------------------------------- Obtaining spectroscopic redshifts over the full color space of galaxies is obviously beneficial, but the question arises: precisely how many spectra are needed in different regions of color space in order to meet the dark energy requirement? Here we provide a framework for understanding this question in terms of the color space mapping. First we note that each color cell has some subset of galaxies that associate best with it; let the total number of galaxies associating with the $i^{\mathrm{th}}$ cell be $n_{i}$. We refer to the true redshift probability distribution of these galaxies as $P_{i}(z)$. For the sake of this argument we assume that a tomographic redshift bin for weak lensing will be constructed by selecting all galaxies associating with some subset of the cells in the SOM. Let the total number of cells used in that tomographic bin be $c$. Then the true $N(z)$ distribution for galaxies in the resulting tomographic redshift bin is: $$N(z) = \sum\limits_{i=1}^cn_{i}P_{i}(z)$$ The mean of the $N(z)$ distribution is given by: $$\langle z \rangle = \frac{\int z N(z) dz} { N_{T}}$$ where the integral is taken over all redshifts and $N_{T}$ is the total number of galaxies in the redshift bin. 
Inserting Equation (8) into Equation (9), we find that the mean redshift of the bin can be expressed as $$\begin{split} \langle z \rangle = \frac{1}{N_{T}}\int z [n_{1}P_{1}(z) + ... + n_{c}P_{c}(z)]dz \\ = \frac{1}{N_{T}} [ n_{1}\langle{z_{1}}\rangle + ... + n_{c}\langle{z_{c}}\rangle ] \end{split}$$ Equation (10) is the straightforward result that the mean redshift of the full $N(z)$ distribution is proportional to the sum of the mean redshifts of each color cell, weighted by the number of galaxies per cell. The uncertainty in $\langle z \rangle$ depends on the uncertainty of the mean redshift of each cell, and is expressed as: $$\Delta \langle z \rangle = \frac{1}{N_{T}}\sqrt{\sum\limits_{i=1}^cn_{i}^2\sigma_{\langle z_{i} \rangle}^{2} }$$ Equation (11) shows quantitatively what is intuitively clear, namely that the uncertainty in $\langle z \rangle$ is influenced more strongly by cells with both high uncertainty in their mean redshift and a significant number of galaxies associating with them. This indicates that the largest gain can be realized by sampling more heavily in denser regions of galaxy color space, as well as those regions with higher redshift uncertainty. Conversely, cells with very high redshift dispersion could simply be excluded from the weak lensing sample (although caution would be needed to ensure that no systematic errors are introduced by doing so). If we assume that the $c$ color cells have roughly equal numbers of galaxies and that $\sigma_{\langle z_{i} \rangle}$ is roughly constant across cells, then Equation (11) becomes: $$\Delta \langle z \rangle = \sigma_{\langle z_{i} \rangle} / \sqrt{c}$$ With $\sigma_{\langle z_{i} \rangle} \sim 0.05(1+\langle z \rangle)$, we find $\sim$600 color cells with this level of uncertainty would be needed to reach the *Euclid* calibration requirement for the redshift bin. With one spectrum per cell required to reach this level of uncertainty in $\sigma_{\langle z_{i} \rangle}$, this estimate of the number of spectra needed is in rough agreement with that of @Bordoloi10, and much lower than estimates for direct calibration through random sampling. Note that the mean redshifts $\langle z_{i} \rangle$ for each color cell used in Equation (10) should be based on spectroscopic redshifts, to ensure that the estimates are not systematically biased. The error in a cell’s mean redshift estimate, $\sigma_{\langle z_{i} \rangle}$, will depend on the dispersion in the $P_{i}(z)$ distribution for the cell, and will scale inversely with the square root of the number of spectra obtained to estimate it. The preceding analysis treats the photo-z calibration as a stratified sampling problem, in which the overall statistics of a population are inferred through targeted sampling from relatively homogeneous subpopulations. The gain in statistical precision from using Equation (10) to estimate $\langle z \rangle$ can be attributed to the systematic way in which the full color space is sampled, relative to blind direct sampling. However, stratified sampling will only outperform random sampling in the case that the subpopulations being sampled do, in fact, have lower dispersion than the overall distribution–i.e., in the case that the $P_{i}(z)$ distributions for the color cells have lower redshift dispersion than the $N(z)$ distribution of all the galaxies in a tomographic bin. 
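A short numerical sketch of Equations (11) and (12) follows; the per-cell counts and dispersions used here are invented placeholders, included only to show how the quantities combine (both $\sigma_{\langle z_{i} \rangle}$ and the requirement are expressed per unit $1+z$, so the factor cancels).

```python
import numpy as np

def mean_z_uncertainty(n_per_cell, sigma_zbar_per_cell):
    """Eq. (11): uncertainty on <z> for a bin built from c color cells."""
    n = np.asarray(n_per_cell, dtype=float)
    s = np.asarray(sigma_zbar_per_cell, dtype=float)
    return np.sqrt(np.sum(n**2 * s**2)) / n.sum()

# Equal-occupation, constant-dispersion limit of Eq. (12).
sigma_cell = 0.05      # sigma_<z_i>/(1+z) from a single spectrum per cell
requirement = 0.002    # Euclid requirement on Delta<z>/(1+z)
cells_needed = (sigma_cell / requirement) ** 2
print(f"color cells needed per bin: {cells_needed:.0f}")  # 625, i.e. the ~600 quoted

# Hypothetical bin with unequal cell occupation (placeholder numbers only).
n_i = np.array([900, 400, 250, 120])
sigma_i = np.array([0.04, 0.06, 0.08, 0.05])
print(f"Delta<z> = {mean_z_uncertainty(n_i, sigma_i):.4f}")
```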
![image](fig8){width="0.8\linewidth"} Simulating different sampling strategies ---------------------------------------- Now we attempt to more realistically estimate the spectroscopic coverage needed to achieve the requirement in our knowledge of $\langle z \rangle$. To begin, we assume that the cell redshift PDFs from *Le Phare* are reasonably accurate, and can be taken to represent the true $P_{i}(z)$ distributions for galaxies in each color cell. (This assumption is, of course, far from certain, and simply serves as a first approximation). With the known occupation density of cells of the map (Figure 3), we can then use Equation (8) to generate realistic $N(z)$ distributions for different tomographic bins. For this illustration, we break the map up into photo-z-derived tomographic bins of width $\Delta z = 0.2$ over $0<z<2$ (although $Euclid$ will most likely use somewhat different bins in practice). An example of one of the $N(z)$ distributions modeled in this way is shown in Figure \[figure:Nz\]. The uncertainty in the estimated $\meanz$ of these $N(z)$ distributions can then be tested for different spectroscopic sampling strategies through Monte Carlo simulations, in which spectroscopy is simulated by randomly drawing from the $P_{i}(z)$ distributions. (Alternatively, given our knowledge of the individual $\sigma_{\langle z_{i} \rangle}$ uncertainties, Equation (11) can be used directly. In fact, the results were checked in both ways and found to be in agreement). The results of three possible sampling strategies are given in Table 1. The simplest strategy tested (“Strategy 1”) is to obtain one spectrum per color cell in order to estimate the cell mean redshifts. Equation (10) is then used to compute the overall mean of the tomographic bin. We expect to meet the $Euclid$ requirement, $\Delta \langle z \rangle \leq 0.002(1+\langle z \rangle)$, for 3/10 bins (and come close in the others) with this approach, which would require $\sim$11k spectra in total. The second strategy tested is similar to the first, in that one spectrum per cell is obtained. However, galaxies associated with the 5% of the cells in each bin with the highest redshift uncertainty are rejected from the weak lensing sample, and these cells are ignored in the sampling. This significantly reduces the uncertainty in the $\meanz$ estimates, with 6/10 bins meeting the requirement; moreover, it reduces the total number of spectra needed by 5%. However, it comes at the cost of reducing the number of galaxies in the weak lensing sample. The third strategy is to sample the 5% of the cells with the highest redshift uncertainty with three spectra each in order to estimate their mean redshifts with greater accuracy, again obtaining one spectrum for the other 95% of the cells. This strategy again lowers the uncertainty in the $\meanz$ estimates substantially, but at the cost of increased spectroscopic effort, requiring $\sim$12k spectra in total. The additional spectra needed may also prove to be the more difficult ones to obtain, so the effort needed cannot be assumed to scale linearly with the number of spectra. These examples are simply meant to be illustrative of the possible strategies that can be adopted for the spectroscopic calibration. 
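The Monte Carlo test described above can be sketched as follows, assuming each cell's $P_{i}(z)$ is tabulated on a redshift grid (each row normalized to sum to one); the function and variable names are illustrative and are not the authors' code.

```python
import numpy as np

def simulate_strategy(n_i, P_i, z_grid, n_spec_per_cell, n_trials=1000, seed=0):
    """Scatter on the estimated <z> of a tomographic bin when each color cell's
    mean redshift is estimated from a few simulated spectra.

    n_i             : galaxies per cell, shape (c,)
    P_i             : normalized P_i(z) per cell on z_grid, shape (c, nz)
    n_spec_per_cell : spectra drawn per cell, shape (c,)
    """
    rng = np.random.default_rng(seed)
    c, nz = P_i.shape
    # True mean of the bin from the weighted sum of cell means (Eq. 10).
    true_mean = np.sum(n_i * np.sum(P_i * z_grid, axis=1)) / n_i.sum()

    estimates = np.empty(n_trials)
    for t in range(n_trials):
        zbar_i = np.empty(c)
        for i in range(c):
            # "Spectroscopy": draw redshifts from the cell's P_i(z).
            draws = rng.choice(z_grid, size=n_spec_per_cell[i], p=P_i[i])
            zbar_i[i] = draws.mean()
        estimates[t] = np.sum(n_i * zbar_i) / n_i.sum()     # Eq. (10) again

    return true_mean, estimates.std()   # the scatter plays the role of Delta<z>
```

In this framing, Strategy 2 corresponds to dropping the highest-dispersion cells from `n_i` before the call, and Strategy 3 to raising `n_spec_per_cell` to 3 for those cells.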
More refined strategies are possible – for example, an optimal allocation of spectroscopic effort could be devised that scales the number of spectra in a given region of color space proportionately to the redshift uncertainty in that region, while rejecting limited regions of color space that are both highly uncertain and difficult for spectroscopy. Additional spectroscopy may need to be allocated to the higher redshift bins, for which there tend to be fewer cells overall as well as higher dispersion within cells. Tomographic bins could also be intentionally generated to minimize the uncertainty in $\meanz$. The simpler examples shown here do illustrate that, if we believe the cell $P_{i}(z)$ estimates from template fitting, the $Euclid$ calibration requirement $\Delta \langle z \rangle \leq 0.002(1+\langle z \rangle)$ is achievable with $\sim$10-15k spectra in total (roughly half of which already exist). ### Is filling the map with spectroscopy necessary? The estimate derived above for the number of spectra needed assumes that at least one spectrum per SOM color cell is necessary to estimate the $\langle z_{i} \rangle$ for that cell. However, if a particular region of color space is very well understood and maps smoothly to redshift, sparser spectroscopic sampling in that region together with interpolation across cells might be sufficient. Equivalently, groups of neighboring cells with low redshift uncertainty that map to roughly the same redshift could potentially be merged using a secondary clustering procedure, thus lowering the overall number of cells and the number of spectra required. These considerations suggest that, while the exact number of spectra required to meet the calibration requirement is uncertain, the results presented above are likely to represent upper limits.

| $z_{\rm phot}$ bin | $N_{\rm spec}$ (S1) | $\Delta\langle z \rangle/(1{+}z)$ (S1) | $N_{\rm spec}$ (S2) | % galaxies excluded (S2) | $\Delta\langle z \rangle/(1{+}z)$ (S2) | $N_{\rm spec}$ (S3) | $\Delta\langle z \rangle/(1{+}z)$ (S3) |
|---|---|---|---|---|---|---|---|
| 0.0-0.2 | 659 | 0.0034 | 627 | 4.2 | 0.0024 | 723 | 0.0028 |
| 0.2-0.4 | 1383 | 0.0028 | 1314 | 4.6 | 0.0015 | 1521 | 0.0020 |
| 0.4-0.6 | 2226 | 0.0014 | 2115 | 3.9 | 0.0007 | 2448 | 0.0010 |
| 0.6-0.8 | 2027 | 0.0018 | 1926 | 4.3 | 0.0005 | 2229 | 0.0012 |
| 0.8-1.0 | 1357 | 0.0021 | 1290 | 4.4 | 0.0009 | 1491 | 0.0013 |
| 1.0-1.2 | 1705 | 0.0011 | 1620 | 4.6 | 0.0005 | 1875 | 0.0008 |
| 1.2-1.4 | 559 | 0.0029 | 532 | 4.4 | 0.0015 | 613 | 0.0021 |
| 1.4-1.6 | 391 | 0.0044 | 372 | 3.3 | 0.0021 | 429 | 0.0031 |
| 1.6-1.8 | 268 | 0.0064 | 255 | 2.7 | 0.0050 | 294 | 0.0055 |
| 1.8-2.0 | 164 | 0.0093 | 156 | 2.1 | 0.0085 | 180 | 0.0088 |
| Total \#spectra | 10,739 | | 10,207 | | | 11,803 | |

\[table:sigz\]

Estimating the true uncertainty in the color-redshift mapping ------------------------------------------------------------- ![image](fig9){width="0.95\linewidth"} The analysis above highlights the important role played by the true uncertainty in the mapping from color to redshift for some number of broadband filters. A single spectroscopic redshift gives us an estimate of a cell’s mean redshift with an uncertainty that depends on the true dispersion in $P_{i}(z)$ for the cell. Unfortunately, we cannot know this distribution precisely without heavily sampling the cell with spectroscopy, which is impractical (we can, however, model it with different photo-z codes). Given the importance of the uncertainty in the mapping of color to redshift in different parts of color space, strategies to constrain this uncertainty efficiently should be considered. One possibility is that a limited amount of ancillary photometry can effectively identify the redshift variation within cells.
The reason this could work is that objects with very different redshifts but similar *Euclid* colors are likely to be distinguishable in other bands (e.g., IR or FUV). Moreover, well-defined and distinct magnitude distributions for objects in the same region of color space could indicate and help break a color-redshift degeneracy. Another interesting possibility is that the uncertainty in $P_{i}(z)$ in different parts of color space can be constrained *from the map itself*, as it is filled in with spectroscopy. This is because the cell-to-cell redshifts would be expected to show high variation in parts of color space where the relation has high intrinsic variation, and vary more smoothly in regions where the relation is well-defined. We defer a detailed analysis of this possibility to future work. Effect of photometric error on localization in color space ---------------------------------------------------------- Photo-z uncertainty is due both to the inherent uncertainty in the mapping from some number of broadband colors to redshift, as well as the uncertainty in the colors themselves due to photometric error. It is well-known that photometric redshift performance degrades rapidly at low signal-to-noise for the latter reason. *Euclid* and other dark energy surveys will also observe deep calibration fields, in which the survey depth is $\sim$2 magnitudes deeper than the main survey. These will preferentially be the fields with spectroscopic redshifts used for training and calibration. Because of the photometric depth, the photometric error will be negligible in these fields, and the uncertainty in mapping color to redshift will be due to inherent uncertainty in the relation. Even if the relation between color and redshift is mapped as fully as possible in the deep fields, photometric error in the shallower full survey will introduce uncertainties by allowing galaxies to scatter from one part of color space to another. The errors thus introduced to the tomographic redshift bins can be well characterized using the multiple observations of the deep fields, and folded into the estimates of $\sigma_{\langle z_{i} \rangle}$. The ultimate effect on the $N(z)$ estimates will depend on the S/N cut used for the weak lensing sample. Cosmic variance --------------- One of the primary difficulties with direct measurement of the $N(z)$ distribution for tomographic redshift bins is the need for multiple independent sightlines in order to avoid cosmic variance-induced bias in the $N(z)$ estimates. Systematically measuring the color-redshift relation as described here, however, largely sidesteps the problem posed by cosmic variance. This is because the true $\rho{(\vec{C})}$ distribution can be inferred from the full survey (which will be unaffected by cosmic variance or shot noise), while the calibration of $P(z|\vec{C})$ can be performed on data from a small number of fields, as long as galaxies in those fields span the overall galaxy color space sufficiently. Galaxies in under-sampled regions of color space ------------------------------------------------ From the preceding analysis, a reasonable step toward calibration of the photo-z’s for cosmology is to target the regions of multicolor space currently lacking spectroscopy (the gray regions in Figure \[figure:specz\]). It is therefore important to understand the nature of the galaxies in these regions, in order to predict the spectroscopic effort needed. 
Of the 11,250 cells in the SOM presented here, roughly half currently have no objects with high-confidence spectroscopic redshifts. The distribution of these cells on the map, as well as their photometric redshift estimates, are displayed on the left side of Figure \[figure:no\_specz\]. The right side of Figure \[figure:no\_specz\] shows the overall magnitude and photometric redshift distribution of the unsampled cells of color space. Most unsampled cells represent galaxies fainter than $i=23$ (AB) at redshifts $\mathrm{z}\sim0.2-1.5$, and $\sim$83% of these are classified as star-forming by template fitting. These magnitude, redshift, and galaxy type estimates directly inform our prediction of the spectroscopic effort that will be required to calibrate the unsampled regions of galaxy color space. Generally speaking, these galaxies have not been targeted in existing spectroscopic surveys because they are faint and not considered critical for galaxy evolution studies. However, they are abundant and thus important for weak lensing cosmology. In Appendix A we give a detailed estimate of the observing time that would be needed to fill in the empty parts of color space with a fiducial survey with Keck, making use of LRIS, DEIMOS and MOSFIRE. We find that $\sim$40 nights would be required if we reject the 1% most difficult cells – a large time allocation, but not unprecedented in comparison with other large spectroscopic surveys. This is significantly less than the $\sim$100 nights needed to obtain a truly representative sample without prior knowledge of the color distribution [@Newman15]. For both LSST and $WFIRST$ the calibration sample required is likely to be significantly larger, due to the greater photometric depths of these surveys in comparison with $Euclid$. Therefore, methods to improve the sampling as proposed here will be even more important to make the problem tractable for those surveys. Discussion ========== Statistically well-understood photometric redshift estimates for billions of galaxies will be critical to the success of upcoming Stage IV dark energy surveys. We have demonstrated that self-organized mapping of the multidimensional color distribution of galaxies in a broadband survey such as *Euclid* has significant benefits for redshift calibration. Importantly, this technique lets us identify regions of the photometric parameter space in which the density of galaxies $\rho{(\vec{C})}$ is non-negligible, but spectroscopic redshifts do not currently exist. These unexplored regions will be of primary interest for spectroscopic training and calibration efforts. Applying our SOM-based analysis to the COSMOS field, we show that the regions of galaxy parameter space currently lacking spectroscopic coverage generally correspond to faint (*i*-band magnitude (AB) $\gtrsim$ 23), star-forming galaxies at $z<2$. We estimated the spectroscopy required to fill the color space map with one spectrum per cell (which would come close to or achieve the required precision for calibration) and found that a targeted, $\sim$40 night campaign with Keck (making use of LRIS, DEIMOS and MOSFIRE) would be sufficient (Appendix A). It should be noted that this analysis is specific to the $Euclid$ survey. The calibration needs of both LSST and *WFIRST* are likely to be greater, due to the deeper photometry that will be obtained by those surveys. 
We demonstrated that systematically sampling the color space occupied by galaxies with spectroscopy can efficiently constrain the $N(z)$ distribution of galaxies in tomographic bins. The precise number of spectra needed to meet the bias requirement in $\meanz$ for cosmology depends sensitively on the uncertainty in the color-redshift mapping. Template-based estimates suggest that this uncertainty is rather high in some regions of *Euclid*-like color space. However, the smoothness of the spectroscopic redshift distribution on the map suggests that the template-based uncertainties may be overestimated, which would reduce the total number of spectra needed for calibration. Assuming that the uncertainties in $P(z|\vec{C})$ from template fitting are accurate, we demonstrate that the $Euclid$ requirement on $\Delta \langle z \rangle$ should be achievable with $\sim$10-15k total spectra, about half of which already exist from various spectroscopic surveys that have targeted the COSMOS field. Understanding the true uncertainty in $P(z|\vec{C})$ will likely prove critical to constraining the uncertainty in $\langle z \rangle$ for the tomographic bins, and we suggest that developing efficient ways of constraining this uncertainty should be prioritized. The topological nature of the self-organizing map technique suggests other possible uses. For example, a potentially very useful aspect of the SOM is that it lets us quantify the “normality” of an object by how well-represented it is by some cell in the map. Rare objects, such as AGN, blended sources, or objects with otherwise contaminated photometry could possibly be identified in this way. We also note that the mapping, by empirically constraining the galaxy colors that appear in the data, can be used both to generate consistent priors for template-fitting codes and to test the representativeness of galaxy template sets. These applications will be explored in future work. We thank the anonymous referee for constructive comments that significantly improved this work. We thank Dr. Ranga Ram Chary, Dr. Ciro Donalek, and Dr. Mattias Carrasco-Kind for useful discussions. D.M., P.C., D.S., and J.R. acknowledge support by NASA ROSES grant 12-EUCLID12-0004. J.R. is supported by JPL, run by Caltech for NASA. H.Ho. is supported by the DFG Emmy Noether grant Hi 1495/2-1. S.S. was supported by Department of Energy Grant DESC0009999. Data from the VUDS survey are based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, under Large Program 185.A-0791. This work is based in part on data products made available at the CESAM data center, Laboratoire d’Astrophysique de Marseille. A. Estimating the observing time required for the *Euclid* calibration ====================================================================== Given the $<$0.2% accuracy in $\meanz$ required for the *Euclid* tomographic bins, and following the analysis presented above, a nearly optimal approach would be to obtain one spectrum per SOM cell, while rejecting $\sim$1% of the cells requiring the longest spectroscopic observations. Taking existing spectroscopy into account, a total of $\sim$5k new spectroscopic redshifts would be needed. We estimate that these spectra could be obtained in $\sim$40 nights with Keck, as outlined below.
To quantify the required exposure time, we constructed a fiducial survey on the Keck telescope with the Low Resolution Imaging Spectrograph (LRIS) [@Oke95], the Deep Extragalactic Imaging Multi-Object Spectrograph (DEIMOS) [@Faber03], and the Multi-Object Spectrograph for Infrared Exploration (MOSFIRE) [@McLean12] instruments. This telescope/instrument combination was chosen because the full redshift range of the calibration sample can be optimally probed with these instruments, and their performance in obtaining redshifts for *i*$\sim$24.5 galaxies has been demonstrated in numerous publications (e.g., [@Steidel04; @Newman13; @Kriek15]). For LRIS we follow @Steidel04 and assume the 300 groove mm$^{-1}$ grism blazed at 5000Å on the blue side and the 600 groove mm$^{-1}$ grating blazed at 10,000Å on the red side with the D560 dichroic. With DEIMOS the 600 groove mm$^{-1}$ grating tilted to 7000Å was assumed. MOSFIRE was assumed to be in its default configuration. Sensitivities were estimated using the official exposure time calculators (ETCs) provided by Keck by scaling from a 24th magnitude flat spectrum object. We assume 1$\arcsec$ seeing, a 1$\arcsec$ wide slit, an airmass of 1.3, and we include appropriate slit losses. For all instruments we scaled the SNR to a binning of *R*$\sim$1500, the minimum required resolution for calibration redshifts. The assumed SNRs in a one hour exposure at 24th magnitude (AB) are given in Table \[table:sens\]. We assume that the galaxies in the cells needing spectroscopy have the redshifts, galaxy spectral types, and reddenings derived from template fitting with *Le Phare*. The modeled galaxy spectral types, redshifts, and observed magnitudes were used to determine the required SNR and the instrument such that a $>$99% reliable redshift can be obtained. For star forming galaxies at $z<2.7$ we require SNR = 2 on the continuum because bright rest-frame optical emission lines will be used to determine the redshift. For star forming galaxies at $z\geq2.7$ we require SNR = 3 on the continuum to clearly detect the Lyman break and the rest frame ultraviolet (UV) absorption features with LRIS or DEIMOS (e.g., [@Steidel03]). For galaxies classified as passive, we require SNR = 5 on the continuum (e.g., [@Kriek09; @Onodera12]), while objects intermediate between passive and star-forming were allowed to linearly scale between an SNR of 5 and 2 with increasingly star-forming spectral templates, because the spectral feature strength increases with star formation rate. The magnitude measured in the band closest to the most prominent spectral feature was assumed for the SNR calculation, and the instrument with the highest sensitivity at that feature was assumed. For passive galaxies it was assumed that the 4000Å break must be targeted at $z<2.3$ and the 1216Å Lyman forest break at higher redshifts, with DEIMOS used at $z<1.3$, LRIS at $1.3<z<1.4$, MOSFIRE at $1.4<z<2.3$, LRIS at $2.3<z<3.5$ and DEIMOS at $z>3.5$. For other galaxies, the strongest of H$\alpha$, H$\beta$, O\[III\] and O\[II\] was targeted at $z<2.7$, with DEIMOS at $z<1.5$ and MOSFIRE at $1.5<z<2.7$. The 1216Å Lyman forest break was targeted at higher redshifts, with LRIS at $2.7<z<3.5$ and DEIMOS at $z>3.5$. Objects were then grouped into masks by instrument and exposure time, assuming a multiplexing of 70 for DEIMOS and 20 for LRIS and MOSFIRE, making the assumption that deep observations could be obtained for rare faint objects by observing them in multiple masks.
Assuming 10-hour nights, 10% overheads, that 20% of the objects need to be observed with more than one instrument to confirm the redshift, and 30% losses due to weather, we obtain the estimate of required observing time given in Table \[table:sens\]. An exploratory program in early 2015 used samples from poorly sampled regions of color space as fillers on 2-4 hr Keck DEIMOS slit masks, finding that $>$98% of sources were readily identified from strong \[OII\], \[OIII\], and/or H$\alpha$ emission, while the non-detected sources had photometric redshifts for which no line detection was expected by DEIMOS. We note that an additional $\sim$12 nights would be required to get to 99.8% completeness in color cells, and $\sim$49 (for a total of $\sim$100) more nights to reach 99.9% completeness. This confirms the difficulty in obtaining truly complete samples noted by previous work, as well as the importance of systematically rejecting sources [@Newman15].

| Instrument | Band | S/N in 1 hr at 24 AB mag | Nights required |
|---|---|---|---|
| LRIS | I | 1.5 | 7 |
| DEIMOS | I | 2.0 | 19 |
| MOSFIRE | Y | 0.7 | 4 |
| MOSFIRE | J | 0.6 | 1 |
| MOSFIRE | H | 0.5 | 7 |
| MOSFIRE | K | 0.4 | 1 |

\[table:sens\]

B. Alternate SOM Examples ========================= Figure \[figure:alt\_som\] shows two alternate maps generated with the same COSMOS data, but with different starting conditions and training orders. Note that the overall topological features are the same. The representativeness of these maps (in the sense described in §4.3) is essentially identical to each other and the map shown throughout the paper. However, the positions and orientations of different photometric clusters are random. ![image](fig10){width="\linewidth"}
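As a rough guide to how the per-object integration times behind Table \[table:sens\] scale with source brightness, one can assume background-limited observations (an assumption of this sketch, not a statement from Appendix A), so that S/N grows as the object flux times the square root of the exposure time:

```python
def exposure_hours(snr_required, snr_1hr_at_24, mag_ab):
    """Hours needed to reach snr_required for an object of magnitude mag_ab,
    given the tabulated 1-hour S/N at 24 AB mag for the chosen instrument/band.
    Assumes background-limited scaling: S/N proportional to flux * sqrt(t).
    """
    flux_ratio = 10 ** (-0.4 * (mag_ab - 24.0))   # object flux / 24-mag flux
    return (snr_required / (snr_1hr_at_24 * flux_ratio)) ** 2

# Example: a star-forming galaxy at i = 24.5 needing continuum S/N = 2 with DEIMOS,
# which reaches S/N = 2.0 per hour at 24 AB mag (Table above).
print(f"{exposure_hours(2.0, 2.0, 24.5):.1f} h")   # ~2.5 h
```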
Those who have a relationship of use with design solutions—defined as “users”—play an important role in engineering design projects. User needs form the foundation of engineering design problems, and user requirements outline the functional and physical characteristics that potential solutions must have. Previous literature has shown that access to users substantially influences how designers think about design problems and how well their proposed solutions align with user needs. In addition, other studies have indicated that novice designers vary substantially in how they perceive the role of users and integrate user information into their designs. Few studies, however, have explored in detail 1) the factors which motivate novice designers to incorporate user feedback into design projects, and 2) how novice designers solicit user feedback in authentic design situations. Thus, this study explored how novice design teams interacted with users in practice as part of a capstone design course. Nine students across 3 different design teams participated in this study. Each team was required to develop an assistive device for a specific individual user as part of an on-going multi-semester project. Data included semi-structured interviews with the teams (10 hours) and recordings of meetings that teams conducted with their user or other individuals who knew the user personally (8 hours). Meeting recordings were analyzed to identify different ways that teams interacted with stakeholders. Similar interactions were then thematically grouped into specific behaviors to allow for comparison across teams. These behaviors represent successes and challenges that teams exhibited when building relationships, involving stakeholders in design decisions, exploring stakeholder perspectives and developing mutual understanding. Despite strong similarities in initial project goals across teams, each team demonstrated a different approach to interacting with users and incorporating user feedback into their designs. One team met with their user regularly throughout the semester and consistently sought to build connections and solicit genuine feedback. This team recognized in retrospective interviews that involving their user was vital to the success of their project. Another team met with their user at the beginning of the semester to evaluate the user’s physical capabilities and develop user requirements. This team primarily focused on the technical details of the project and did not meet with their user again until they were ready to validate their final concept. The last team never met with their user, although they did solicit some feedback from their project sponsor. Rather, this team trusted the user requirements developed during the previous semester and evaluated success based upon how well they met these requirements. These cases illustrate three distinct ways that novice designers view the role of users in design projects, as well as how these perspectives translated into design process and outcomes decisions.
https://www.asee.org/public/conferences/140/papers/25209/view
Chandra has 2 liters of a 14% solution of sodium hydroxide in a container. What is the amount and concentration of sodium hydroxide solution she must add to this in order to end up with 7 liters of a 34% solution? - Chemistry A weighed amount of sodium chloride is completely dissolved in a measured volume of 4.00 M ammonia solution at ice temperature, and carbon dioxide is bubbled in. Assume that sodium bicarbonate is formed until the limiting reagent - Chem Solutions of sodium carbonate and silver nitrate react to form solid silver carbonate and a solution of sodium nitrate. A solution containing 3.30 g of sodium carbonate is mixed with one containing 3.00 g of silver nitrate. How - Science When excess carbon dioxide passes into a sodium hydroxide solution it forms sodium carbonate solution calculate the mass of crystals that is produced from 5g of sodium hydroxide in excess water - Chemistry Write the following into a balanced equation. a) When a mixture of copper (II) oxide and carbon is heated, elemental copper forms and carbon monoxide evolves into the atmosphere. b) When a concentrated solution of sodium hydroxide - Math A chemist working on a flu vaccine needs to mix a 10% sodium-iodine solution with a 60% sodium-iodine solution to obtain a 50-milliliter mixture. Write the amount of sodium iodine in the mixture, S, in milliliters, as a function - chemistry ihave two solutions. in the first solution , 1.0 moles of sodium chloride is disslved to make 1.0 liters of solution . in the second one, 1.0 moles of sodium chlorine is added to 1.0 liters of water.is the molarity of each - Chemistry Aqueous sulfurous acid (H2SO3) was made by dissolving .200L of sulfur dioxide gas at 19 degrees C and 745 mmHg in water to yield 500mL of solution. the acid solution required 12mL of sodium hydroxide solution to reach the
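A worked check of the first question above (simple mixture arithmetic, added here for clarity): the added volume must be $7 - 2 = 5$ liters, and conservation of dissolved sodium hydroxide gives

$$2(0.14) + 5c = 7(0.34) \quad\Rightarrow\quad c = \frac{2.38 - 0.28}{5} = 0.42,$$

so 5 liters of a 42% sodium hydroxide solution must be added.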
https://www.jiskha.com/questions/513040/what-is-the-ph-of-0-2-m-solution-of-sodium-propanoate
The debut novel from an award-winning short story writer: a multigenerational saga spanning Lebanon, Iraq, India, the United States, and Kuwait that brings to life the triumphs and failures of three generations of Arab women. In 2013, Sara is a philosophy professor at Kuwait University, having returned to Kuwait from Berkeley in the wake of her mother's sudden death eleven years earlier. Her main companions are her grandmother's talking parrot, Bebe Mitu; the family cook, Aasif; and Maria, her childhood ayah and the one person who has always been there for her. Sara's relationship with Kuwait is complicated; it is a country she always thought she would leave, and a country she recognizes less and less, and yet a certain inertia keeps her there. But when teaching Nietzsche in her Intro to Philosophy course leads to an accusation of blasphemy, which carries with it the threat of execution, Sara realizes she must reconcile her feelings and her place in the world once and for all. Interspersed with Sara's narrative are the stories of her grandmothers: beautiful and stubborn Yasmine, who marries the son of the Pasha of Basra and lives to regret it, and Lulwa, born poor in the old town of Kuwait, swept off her feet to an estate in India by the son of a successful merchant family; and her two mothers: Noura, who dreams of building a life in America and helping to shape its Mid-East policies, and Maria, who leaves her own children behind in Pune to raise Sara and her brother Karim and, in so doing, transforms many lives. Ranging from the 1920s to the near present, An Unlasting Home traces Kuwait's rise from a pearl-diving backwater to its reign as a thriving cosmopolitan city to the aftermath of the Iraqi invasion. At once intimate and sweeping, personal and political, it is an unforgettable epic and a spellbinding family saga. Reviews "The Hidden Light of Objects marks the emergence of an author already confident in her craft and her ability to give voice to the emotions and yearnings of her characters."--New Internationalist "The old world and the new. The strife in the Gulf, once peaceful and reflective. East and West, Arabic and English, the poetry of the heart, the eye of the hawk; all these elements produce the lustrous pearls of Mai Al-Nakib's short stories."--Hanan al-Shaykh, author of Beirut Blues, on The Hidden Light of Objects "The Hidden Light of Objects brings forth both the light and the shadows of the contemporary Middle East in clean-edged prose that startles us, not with sudden violence or polemic, but with the ineluctable force of human desire. Kuwait itself becomes a character, full of contradictions, in this multifaceted set of stories and vignettes. Superb."--Lucy Ferriss, author of The Lost Daughter "An Unlasting Home is an unforgettable story of people making choices for love, family, freedom, and identity against the tidal forces of history in the Arab region. Shimmering with poetic prose, and as pressingly real as the white heat of August in Baghdad, this poignant debut will keep you in its thrall." --Juhea Kim, author of Beasts of a Little Land A spellbinding family history unfolds as a Kuwaiti woman goes on trial for blasphemy in a world gone mad.
Deftly written, structurally brilliant, Mai Al-Nakib's An Unlasting Home is a lasting novel that splits open time, leaps across continents, and creates the sort of characters we carry forward into our hearts and lives. I absolutely loved this book.
https://bookshop.org/books/an-unlasting-home-9798200858866/9780063135093
Resilience – bouncing back from adversity – is a complex, dynamic skill set developed over years of dealing with the hard things in life: losing one’s health, losing a relationship, losing a job, losing one’s purpose. Clients become resilient, and come to know that they are, by learning to meet the inevitable disappointments, difficulties and even disasters of life skilfully, gracefully and effectively, and finding the lessons in the losses and the growth in their sense of self. As the American Psychological Association puts it: ‘Resilience is the process of adapting well in the face of adversity, trauma, tragedy, threats or significant sources of stress.’1 Cultivating or recovering resilience is at the heart of every therapeutic process. Whatever the presenting issues – illness or injury, infertility or infidelity; whatever the presenting symptoms – anxiety, depression, shame, despair; whatever theoretical orientation or modality therapists bring to bear – cognitive-behavioural, psychodynamic or somatic experiencing, clients come to therapy for guidance and support in recovering (or discovering) their strengths, their competencies and their wellbeing. Resilience becomes a mindset – an ongoing approach to the hiccups and the hurricanes inevitable in the human condition. With this mindset, we learn how to perceive what’s happening accurately, bring self-awareness, self-acceptance and self-compassion to our reactions to what’s happening, and consciously discern options and make wise choices. We also learn that we can make those choices, and to deeply trust that we can, no matter what happens. Any adverse change in circumstances, any potentially traumatising event, becomes an opportunity, a cue to practise strengthening resilience. I find this view from Kelly McGonigal, health psychologist at Stanford University, helpful: ‘Resilience is not about being untouched by adversity or unruffled by difficulties. It’s about allowing stress to awaken in you these core human strengths of courage, connection, and growth. Whether you are looking at resilience in over-worked executives or war-torn communities, people in military deployment, immigrants or refugees, people living in poverty, battling cancer, or raising a child with autism, the same themes emerge. ‘People who are resilient allow themselves to be changed by the experience of adversity. They maintain a basic sense of trust in themselves and a connection to something bigger than themselves. They also find ways to make meaning out of suffering. To be resilient is not to avoid difficulty but to play an active role in how difficulty transforms you.’ 2 Resilience is learnable, trainable, because capacities for resilience are innate in the human brain. They are functions of the prefrontal cortex, the centre of executive functioning in the higher brain, that develop as the brain matures from experience. Dan Siegel, psychiatrist at the University of California, Los Angeles, identifies these major functions of the prefrontal cortex as follows:3 - to regulate the body and nervous system - to quell the fear response of the amygdala (the fear centre) - to manage a broad range of emotions - attunement – the felt sense of feelings, one’s own or another’s - empathy – making sense of experience - insight and self-knowing - response flexibility - planning, judgment, decision-making. 
Because of these functions, especially the function of response flexibility, Canadian psychiatrist Gabor Maté calls the prefrontal cortex ‘the CEO of resilience’.4 We can strengthen the functioning of the prefrontal cortex because of the lifelong neuroplasticity of the brain. Any experience at all, positive or negative, causes neurons in the brain to fire. If you repeat the experiences, the neural firing repeats. With enough experience, the brain creates new neural circuitry, new neural pathways, new ways of responding to life events. When we teach clients to choose and repeat the experiences that especially cultivate the capacity of response flexibility, we are teaching them how to strengthen their resilience. In fact, we are transforming any adversity into learning and growth. Richard J Davidson, founder-director of the Center for Investigating Healthy Minds at the University of Wisconsin-Madison, notes:5 ‘The brain is shaped by experience. And, based upon everything we know about the brain in neuroscience, change is not only possible but is actually the rule rather than the exception. It’s really just a question of which influences we’re going to choose for the brain. And because we have a choice about what experiences we want to use to shape our brain, we have a responsibility to choose the experiences that will shape the brain toward the wise and the wholesome.’ One of those wholesome choices would be to cultivate the mindset that we do have choices. Carol Dweck, psychology professor at Columbia University, notes in her research on shifting from a fixed mindset (giving up in the face of failure or setback) to a growth mindset (persevering in the face of a failure or setback):2 ‘The fixed mindset robs people of capacities to cope. The growth mindset fosters curiosity and a passion for learning through effort and experience. People with growth mindsets respond especially well when things are not going well; they tend to stretch themselves, confront obstacles, embrace risk, and stick through the hard times. Rather than being embarrassed or blocked by a sense of deficiency, they can acknowledge what skill or capacity is missing and set to work to cultivate it. They take direct, wise and compassionate action.’ Edith Chen, psychology professor at Northwestern University, noted something similar in her ‘shift-and-persist’ mindset.2 Shifting is accepting that the stressor is real and changing the way you think about it; persisting is maintaining the optimism needed to pursue meaning, even in the face of adversity. My colleague Frankie Perez sums it up as: ‘How you respond to the issue… is the issue.’6 I teach my clients very specific tools for choosing to learn how to transform adversity into learning and growth. Body-based tools Our most basic responses to all of life’s challenges and adversities begin in our bodies, where trauma memories of when we couldn’t cope so well are also stored implicitly. I teach clients tools of breath, touch and movement to regulate their nervous system’s response to stress or danger (the first function of the prefrontal cortex) and return the functioning of the brain to its innate safety zone, its innate range of resilience. In fact, the rapidity and reliability with which we can return the nervous system to baseline, calm and engaged, is an objective measure of resilience. In his book Resilient: how to grow an unshakable core of calm, strength and happiness, Rick Hanson calls this equilibrium the ‘Green Zone’.7 Neurophysiologists call it the zone of safety. 
Psychotherapists call it the range of resilience, as I do here. Trauma therapists call it the window of tolerance. We use empirically based tools to recover the brain’s internal perception that it’s safe to function. That neuroception of safety primes the neuroplasticity of the brain for learning. Movement Any time you move your body and shift your posture, you shift your physiology. Any time you shift your physiology, you shift the activity of your autonomic nervous system and its state of excitement-stress, calm or shut-down collapse. Clients can intentionally use movement to shift their emotions and their mood. Here is an example from my own clinical practice. My client, Marian, was an educational psychologist, acting as a consultant to parents who needed to place their son or daughter in a residential treatment facility for young people with a dual diagnosis. Although very competent and successful, Marian experienced severe anxiety and a fair amount of self-doubt and shame whenever she first met the parents of such children. Would she be good enough? Would they find her adequate and trustworthy? We began practising a form of power posing, based on Amy Cuddy’s research at Harvard Business School.8 Marian would practise how she stood in her own office before meeting the parents. She would stand tall, proud and strong, pulling the energy up from her feet, which were planted firmly on the floor, up through her torso, up through her spine and neck, so she stood erect and empowered. Marian had so much success with that experiment in using movement to shift her emotional state that we changed to beginning the practice with her feeling the anxiety in her body, embodying the self-doubt, and then shifting into the posture of strength and confidence. The shifting all the way through from negative to positive was even more effective in changing Marian’s emotional state. She noticed that, over time, she hardly felt the anxiety and self-doubt any more at all. She could embody the strength and confidence the moment she needed to. Power posing helps clients access an inner, body-based sense of strength and confidence. Shifting from an embodied negative state (anxiety, shame) to a more positive embodied state helps clients learn they can use the movement of their body to change their internal state, turning a dreaded situation into learning and growth. Effects of positive emotions Just as daily living triggers the stress response, so simply living evokes emotions; clients experience some kind of emotion every single moment of their day. Whether they like having these emotions or not, whether they trust them or know what to do with them or not, their feelings constantly filter their perceptions and guide (sometimes misguide) their responses to all of their experiences, and so play an integral role in how well or poorly they bounce back from any adversity. Learning to manage their feelings, their emotions, rather than be hijacked or shut down by them, is essential to flexibility, resilience, to creating wise choices, learning and growth. Data from 25 years of neuroscience research and 25 years of behavioural science research9 are now dovetailing nicely to demonstrate the benefits of cultivating positive emotions – gratitude, kindness, compassion, joy, awe, delight, serenity – to help antidote the innate negativity bias of the human brain and reliably shift the functioning of the brain from negativity, reactivity and contraction to receptivity, openness to learning, and a more optimistic perspective. 
A direct, measurable cause-and-effect outcome of cultivating positive emotions is resilience. Here are two examples from my clinical practice. Both these clients were retired lawyers battling cancer. We were already working with Noah’s lifelong patterns of negativity, his ‘poor me’, complaining and rumination, when he was diagnosed with lung cancer. His mood plunged into despair. It was hard to sit with Noah in that overwhelming darkness, and it became very hard to get any traction in the therapy sessions. Almost out of desperation, yet knowing how powerful a positive emotion practice could be in shifting mood, I suggested we begin each session with a five-minute gratitude free-write.10 Each of us would write down, in silence, whatever we had to be grateful for in that moment. For Noah, being alive, being able to drive his car to his medical appointments, a good friend bringing soup and salad for lunch, a neighbour helping to fix a blocked kitchen drain... In just three sessions of focusing on gratitude, Noah shifted his attitude toward more optimism and hope; he became much more compliant with his treatment protocol. And, in fact, surgery and radiation proved completely effective. The cancer went into remission, and so did much of Noah’s pessimism and rumination. Kate, already a much more people-oriented person, developed a very steady mindful self-compassion practice when diagnosed with breast cancer. ‘This really sucks. My body aches and I’m scared to death. May I be kind to myself in this moment, in any moment, in every moment. May I accept this moment exactly as it is. May I accept myself exactly as I am in this moment.’ That self-acceptance, no matter how discouraged and grumpy she felt, together with the support of loyal and caring friends, proved to be a powerful ally in tolerating the rigours of many rounds of chemotherapy. Kate even created a T-shirt with the logo of a crab for the zodiac sign of cancer: ‘I’m crabby because I have cancer.’ Giving herself permission to be crabby allowed Kate to keep the larger picture in mind; she was undergoing the rigours of treatment to regain her health. She learned to practise mindful self-compassion in order not just to feel better but to do better in battling the disease.11 Conscious awareness and choices We all have unconscious patterns in our language that filter how we perceive our experiences and thus shape how we respond to them. ‘Should’ is one of them. ‘I have to’ is another. Clients can react to these unconscious messages quite unconsciously and automatically. ‘Should’ and ‘have to’ imply obligation, duty, even right or wrong, and the mind contracts. ‘Should’ creates an unconscious expectation or command for performance and sets clients up for criticism if they ‘fail’ to perform. ‘Could’ creates an unconscious perception of possibility and sets clients up for pride in their learning and growth. Here’s another example from my clinical practice. Maude was the sometimes overwhelmed, often exasperated mother of three boys aged five, four and two. The four-year-old was already diagnosed as having special needs. Maude worked hard to be consciously patient and loving with her boys, but she was unconsciously, unfailingly critical of herself – for making the wrong decision, for losing her temper, for not being a ‘good enough’, let alone perfect, mum. She spent much of her day feeling like a failure, not meeting the ‘shoulds’ of being the model parent. 
Maude had to practise becoming more mindfully aware of her thought patterns when they derailed her in the moment. Then she had to learn the simple practice of ‘Change every should to a could’. When she caught herself saying ‘should’, she repeated the phrase ‘Change every should to a could’ and noticed the shifts in her thinking. Changing every ‘should’ to ‘could’ opened up possibilities and choice, and thus strengthened her response flexibility. She also found that shifting from ‘I have to’ to ‘I get to’ similarly shifted her thinking from burden to privilege. There is a critical difference between ‘I have to...’ and ‘I get to take the kids to school every morning this week’. This one practice of consciously shifting her self-talk changed how Maude perceived herself as a mother and strengthened her resilience. As Diana Fosha, developer of Accelerated Experiential Dynamic Psychotherapy, writes:12 ‘The roots of resilience are to be found in the felt sense of being held in the mind and heart of an empathic, attuned and self-possessed other.’ Therapists bring their own resilience mindset to the work of helping clients develop theirs. They need to be able to hold faith in the client’s potential and in the process; to share the knowledge that any adversity provides an opportunity for learning more resilient behaviours; to lead the client in learning the skills that help them cope with the inevitable ups and downs of life more skilfully, in ways that are safe, efficient and effective. Clients become more resilient; they learn that they can become more resilient. Clients experience themselves as more resilient; they experience themselves as someone who can learn and grow. Linda Graham MFT is an experienced psychotherapist and mindful self-compassion teacher in the San Francisco Bay area, USA. She is the author of the award-winning book Bouncing Back: rewiring your brain for maximum resilience and well-being, and a new book, Resilience: powerful practices for bouncing back from disappointment, difficulty, and even disaster. She integrates modern neuroscience, mindfulness and relational psychology in her national and international trainings. Her weekly Resources for Recovering Resilience are archived on her website at www.lindagraham-mft.net References 1. American Psychological Association. The road to resilience. [Online.] www.apa.org/helpcenter/road-resilience 2. McGonigal K. The upside of stress: why stress is good for you and how to get good at it. New York, NY: Avery; 2015. 3. Siegel DJ. The developing mind: how relationships and the brain interact to shape who we are. New York, NY: Guilford Press; 1999. 4. Maté G. In the realm of hungry ghosts: close encounters with addiction. Berkeley, CA: North Atlantic Books; 2010. 5. Cited in Beck C. Project happiness. Common Ground 2012; August: 26. http://commongroundmag.com/main-page.html (accessed 15 October 2019). 6. Personal communication. 7. Hanson R. Resilient: how to grow an unshakable core of calm, strength, and happiness. New York, NY: Harmony Books; 2018. 8. Cuddy A. Your body language may shape who you are. TEDGlobal2012; June, 2012. 9. Frederickson B. Positivity: groundbreaking research reveals how to embrace the hidden strength of positive emotions, overcome negativity, and thrive. New York, NY: Crown Publishers; 2009. 10. Emmons R. Gratitude works!: a 21-Day program for creating emotional prosperity. San Francisco, CA: Jossey Bass; 2013. 11. Neff K, Germer C. 
The mindful self-compassion workbook: a proven way to accept yourself, build inner strength, and thrive. New York, NY: Guilford Press; 2018. 12. Fosha D. The transformative power of affect: a model for accelerated change. New York, NY: Basic Books; 2000.
https://www.bacp.co.uk/bacp-journals/therapy-today/2019/december-2019/transforming-adversity-into-learning-and-growth/
Types of Community Music Music Australia has identified four broad approaches to community music, drawing on the work of researchers Lee Higgins, Gillian Howell and Brydie Leigh-Bartleet: Amateur music: This includes community orchestras, brass bands, and choirs. It provides participants with an experience similar to that of a professional ensemble with scores and parts, led by skilled conductors, and often has auditions. The music performed is often from the Western canon. Communal music: This may involve community groups playing music from diverse cultures and traditions, ukulele groups, community choral groups and jam sessions. It is driven by participation, open access, shared connections, eclectic repertoire, and relaxed views on musicianship and standards. Music of a Community: This can be music of a particular ethnic community, or community of interest, and part of the expression of those people’s identity. It may be open only to members of that community, and knowledge of the group may be held within the community. Music Interventions: Activities that address community needs through music by fostering cultural engagement. Objectives may be to increase civic pride, community well-being, or social cohesion. They can offer learning and participation opportunities to marginalised groups, and address issues of disadvantage or disengagement. The first, second and third approaches are based around organically occurring activities, or the initiative of a committed individual, generally involving group music making. The fourth occurs when a proactive approach is taken with music in a community to achieve a broader social or cultural development outcome. There may be organisational and financial support from a third party, and activities are often free of charge to participants. These categories are not values-based; they are more a tool for understanding differing approaches, and are generally determined by the goals, values and reasons for existence that underpin a group. Groups can span more than one approach. All approaches can be valued expressions of a community’s identity, traditions, beliefs, creativity and interests. Examples of community music include: Examples of Amateur music - Community orchestras - Classical choirs - Brass bands - Pipe bands Examples of Communal music - Culturally diverse groups (e.g. Djembe drum group or multicultural choir) - ‘Gypsy’ and Balkan Orchestras - Ukulele groups - Community jam sessions - Community choirs and informal choral groups Examples of Music of a Community - Ethnic choirs (e.g. Croatian Choir, Korean Choir or Indigenous Choir) - Non-western music groups (e.g. Chinese orchestra) - Church choirs Examples of Music Interventions - Community cultural development projects - Choirs with disadvantaged communities - Music programs for people with a disability - Informal music participation and learning programs Music Australia is currently updating our community music information. You can still access the Music in Communities Network site for further information:
https://musicaustralia.org.au/discover/music-in-community/types-of-community-music/
Going to court is expensive, disruptive and a lengthy process, but equally, it’s not viable to let debts go unpaid or to pay for work that doesn’t meet the brief. That’s why methods of Alternative Dispute Resolution (ADR) are so important in the construction sector, where the scope for disputes to arise is generally higher than is the case in other sectors, given the complexity of the work and the risks of problems arising. In recent blogs, we’ve discussed ADR in general terms as well as focussing in more detail on two of its most common forms – mediation and adjudication, which offer the obvious benefits of reducing the cost and disruption involved in resolving a dispute. This time, we’ll be looking at a slightly less common form of ADR – arbitration. What is arbitration? Arbitration is a more formal process than other forms of ADR, involving hearings that are in many ways similar to litigation proceedings in court. The dispute will be heard by an independent arbitrator, whose ruling will be binding. The arbitrator will normally possess technical expertise in the subject area and the parties can jointly appoint the arbitrator. An arbitrator’s ruling can generally only be appealed on the basis of procedural issues or the misapplication of the law. As with litigation proceedings in court, you will usually be represented at the hearing by a barrister who will put forward arguments on your behalf. However, the crucial difference between arbitration and a court hearing is that arbitration is private, whereas a court case is a matter of public record unless the judge makes an order to the contrary, which would be unusual in a commercial dispute. When might you want to use arbitration? Arbitration can be provided for in a construction contract as the required process for resolving any disputes that arise. In these instances, you will have no choice but to pursue arbitration. It can also be used voluntarily where the parties agree, even if it is not provided for in the contract in question. You might want to do this if the dispute is particularly technical or complex, or if you want to keep it private. There are some practical benefits to arbitration too. You will generally have more say over timescales and schedules than you would in a court hearing, meaning less disruption to your business. Arbitration is often cheaper than court hearings, although it is usually more costly than other forms of ADR. So what should I do? When a construction dispute arises, the best route to resolving it will depend on the specific facts of the case as well as your circumstances, attitude to risk and relationship with the other parties involved. By seeking expert legal advice at an early stage, you can benefit from a strategic approach to dispute resolution based on the needs of your business in order to bring about a pragmatic outcome. Contact us today to find out more.
https://www.construction-legal-services.com/construction-and-engineering-news/construction-arbitration-might-want-use/
Looking to take a modern trek along the streets where Paul Revere took his famous midnight ride? Curious what more there is to do in Boston than visit the "Cheers" bar and catch a Red Sox game? Stacker tapped a Boston native to lead a tour through the Massachusetts capital with 30 must-see stops. The comprehensive list, designed for history buffs and tourists alike, shows off the city's rich past, natural beauty, expansive culture, and world-class cuisine. Visitors can stroll along the Freedom Trail and learn about key events of the American Revolution, before stopping at Paul Revere’s House or visiting old souls at the Granary Burial grounds. The tour also offers great activities for lazy afternoons, whether it's hopping aboard the Swan Boats in the Public Garden's lagoon or strolling through Beacon Hill or South Boston neighborhoods. There is plenty here to keep travelers busy, and a few fun places that may even be new for lifelong Bostonians. Read on for inspiration for your next visit to Bean Town. - 2/ Ingfbruno // Wikimedia Commons Learn about 250 years of history and key events of the American Revolution on the Freedom Trail. The trail is a 2.5-mile red line that ribbons around Boston and leads to 16 historic sites. Highlights include Boston Common, the USS Constitution, and Paul Revere’s House. - 3/ Shutterstock Paul Revere might have taken his midnight ride on April 18, 1775, but visitors can stop by Revere’s House in Boston’s North End any time. The national historic landmark is part of the Paul Revere Memorial Association’s Education and Visitor Center. - 4/ Goodfreephotos Boston’s North End is a multicultural neighborhood best known for its narrow streets, rich history, and plethora of restaurants, cafes, and bakeries. The North End is a popular spot on Boston’s Freedom Trail enjoyed by 3.2 million visitors each year. Lucca is just one of the Italian restaurants lining Hanover Street, the heart of Boston's Little Italy. - 5/ Pixabay Faneuil Hall is a vibrant marketplace where locals and tourists have been enjoying music, restaurants, boutiques, and pubs since its revitalization in 1976. Street performers and musicians entertain along the cobblestone promenade. A popular place to grab a bite to eat is Quincy Market, where visitors can find 18 restaurants and 35 colonnade eateries, including Boston’s famous clam chowder. - 6/ ryan harvey // Flickr Beacon Hill, Boston's stony residential area, is best known for its historical landmarks, antique shops, boutiques, eateries, and bars. Tourists flock to see the charming neighborhood boasting a variety of architectural styles including Federal, Greek Revival, and Victorian. The iconic bar, Cheers, that served as an inspiration for its namesake TV show, is located in Beacon Hill. - 7/ Chiefhuggybear // Wikimedia Commons Boston Public Garden, the first public botanical garden in the United States, is a lovely spot to spend an afternoon. Tourists can take a ride on the famous Swan Boats and stop to see the "Make Way for Ducklings” bronze statues created by Boston artist Nancy Schön. The garden is also home to the "Good Will Hunting” bench where Robin Williams delivered his speech to Matt Damon. - 8/ Adavyd // Wikimedia Commons A stop to the site where the first rumblings of the American Revolution began is on the top of the list for many visitors to Boston. Old North Church is Boston’s oldest surviving church and is found along the Freedom Trail. 
Made famous by Paul Revere’s midnight ride, Old North Church is the most visited historic site in Boston. - 9/ Allan Grey // Flickr Visitors to Boston come to Bunker Hill to stand on the grounds of what many consider the first battle of the American Revolutionary War. Intrepid tourists can climb the 294 stairs to the top of the Bunker Hill Monument and take a moment to reflect on a poignant chapter in American history. - 10/ Pixabay Boston’s Chinatown is the third largest Chinatown in the United States. An easy walk from Boston’s downtown shopping district, Chinatown’s signature gates welcome visitors to a neighborhood full of restaurants offering Chinese favorites simmering with great flavors. - 11/ werkunz1 // Wikimedia Commons Fenway Park is the home of the Boston Red Sox. The oldest surviving stadium in Major League Baseball, Fenway Park has gone through many renovations since it was opened in 1912. In 1947, the left field wall was painted green and earned the moniker the "Green Monster,” which became one of the park’s most iconic features. - 12/ Emmanuel Huybrechts // Wikimedia Commons Once a fort in colonial times, the Fort Point neighborhood is brimming with warehouses that are home to art studios and galleries. Activities include a visit to the Children’s Museum or the Boston Tea Party Ships and Museums. Beer aficionados can also pop by Trillium Brewing Company’s flagship brewery or listen to music at Lucky’s Lounge. - 13/ Envoy Hotel Visitors looking to get a glimpse of Boston’s modern skyline and waterfront can stop by the rooftop at the Envoy Hotel. The panoramic view is open all year long, as LED igloos are installed in the winter. - 14/ John Alzapiedi // Flickr Once an empty waterfront brimming with parking lots, the Seaport District has evolved into one of the city’s hottest neighborhoods. The single largest development project in Boston is home to a growing number of upscale restaurants, shops, and the Institute of Contemporary Art. - 15/ Smart Destinations // Wikimedia Commons The Institute of Contemporary Art’s history stretches back to 1936 when it was founded as the Boston Museum of Modern Art, a sister to New York’s MoMA. It was renamed the Institute of Contemporary Art in 1948. In 2006, a new building was constructed on Boston’s waterfront that expanded the scope and size of ICA’s exhibitions and programs, making it the go-to destination to experience contemporary art in Boston. - 16/ SoWa Sundays // Wikimedia Commons Those looking to discover local flavor should stop by SoWa Art + Design District to find art studios, galleries, boutiques, design showrooms, and eateries. Popular events include SoWa Open Market, SoWa First Fridays, and SoWa Art Walk. - 17/ Robbie Shade // Wikimedia Commons Boston’s Back Bay is one of the city’s most popular neighborhoods featuring boutiques, restaurants, and brownstone homes. Newbury Street is one of the main attractions featuring upscale shops, cafes, and restaurants. Landmarks include Trinity Church, the Prudential Center, and the Boston Public Library. - 18/ Brian Johnson // Wikimedia Commons Boston Public Library, the first large free library in the U.S., is a National Historic Landmark. The library contains more than 1.2 million rare books and documents, as well as paintings, tapestries, and sculptures. John Adams’ personal library is also housed at the library along with original music scores from Wolfgang Amadeus Mozart, Sergei Prokofiev, and many others. 
- 19/ Daderot // Wikimedia Commons The Esplanade is a three-mile stretch of public green along the Charles River. A perfect spot for walking, running or hiking, visitors can enjoy picnics on the green and concerts in front of the iconic Hatch Memorial Shell. - 20/ John Alzapiedi // Flickr The Museum of Fine Arts (MFA) is heralded as one of the most comprehensive art museums in the world, boasting almost 500,000 works of art. Visited by over 1 million people each year, the MFA features numerous collections and exhibits, including one of the largest collections of Claude Monet outside of France. - 21/ Sarah Nichols // Wikimedia Commons Visitors to the Mary Baker Eddy Library will gain a unique perspective of the world at the three-story Boston landmark globe. Tourists can explore the Earth at its center, surrounded by continents and oceans, all from the bridge of the giant, stained glass sphere. Since 1935, more than 10 million people have crossed the 30-foot-long glass bridge. - 22/ Ingfbruno // Wikimedia Commons Granary Burial Ground, Boston’s third oldest cemetery, was built in the 1660s. The cemetery has only 2,300 markers, though it is estimated that more than 5,000 people have made it their final resting stop. Visitors can pay their respects to Paul Revere, John Hancock, and Samuel Adams. Although a large obelisk bears the name of Benjamin Franklin, he is actually buried in Philadelphia. - 23/ Groupe Canam // Wikimedia Commons New England’s largest sports and entertainment arena, TD Garden is best known as the home of the NHL’s Boston Bruins and NBA’s Boston Celtics. More than 3.5 million people a year visit TD Garden to enjoy concerts, sporting events, and family shows. - 24/ Pixabay Visitors can stand in the footprints of George Washington, John Adams, and General Lafayette at the U.S.’ oldest park. Boston Common still remains a stage for free speech and public assembly, though it has evolved from a common ground for cow grazing and public hangings to a popular green space to play ball, ice skate, and enjoy the wonder of nature. - 25/ Jameslwoodward // Wikimedia Commons Southie is often depicted as a rough neighborhood in movies and television shows—think “Good Will Hunting,” “The Departed,” and “Ray Donovan.” But the neighborhood where George Washington’s army set up cannons in 1776 is now a desirable area serving as home to Fort Point, beaches, and parks. - 26/ Daderot // Wikimedia Commons Boston’s Museum of Science is one of the world's largest science centers and New England's most visited cultural institution. The Museum introduces more than 1.4 million visitors a year to STEM (science, technology, engineering, and math) through the world-class, hands-on exhibits and programs. Visitors can enjoy IMAX films, planetarium shows, live presentations, and interactive permanent exhibits. - 27/ Pixabay Trinity Church, located in Boston’s Back Bay, is renowned for its history and architectural splendor. The National Historic Landmark building is considered one of this country’s top 10 buildings by the American Association of Architects. Almost 300 years after it was built, Trinity Church is an active parish with numerous services on Sunday, as well as a few during the week. - 28/ U.S. Navy // Flickr The USS Constitution is the world’s oldest commissioned warship still afloat. It is located inside Boston Historical Park as part of the Charlestown Navy Yard in Charlestown. “Old Ironsides” is free and open to the public.
There is also a museum next door where visitors can learn more about the history of the USS Constitution. - 29/ John Alzapiedi // Flickr The Rose Kennedy Greenway stretches for a mile and a half of contemporary parks that ribbon through the heart of Boston. Highlights include a rooftop garden atop a highway, food trucks, the Trillium Garden, Greenway Carousel, and numerous fountains. Tours are held May through September. - 30/ Robert Linsdell // Flickr Tourists can visit Fort Independence, enjoy a picnic, take in the view, or watch planes take off from Logan International Airport. Those who visit Castle Island on July 4 can witness Old Ironsides take its annual cruise from the Charlestown Naval Yard and enjoy a great seat for the fireworks. - 31/ Chris // Flickr A great way to explore the city of Boston is to board one of the famous Duck Tours. Highlights include Beacon Hill, Boston Common, Newbury Street, Quincy Market, the Prudential Tower, and a splash in the Charles River. Winning Boston sports teams take their victory ride on the World War II-style amphibious vehicles.
https://thestacker.com/stories/2387/photo-guide-boston-birthplace-american-revolution
Static analysis is often performed on instruction code of computer-based software applications to identify issues such as logic errors and security vulnerabilities within an instruction code set. For example, one common type of static analysis, referred to as taint analysis, is used to identify references within a code set that refer to data that come from or are influenced by an external and/or untrusted source (e.g., a malicious user), and are therefore vulnerable to attack. Unfortunately, taint analysis techniques are not perfect and often provide false positive identifications of vulnerabilities that are not really vulnerabilities.
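To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of data flow a taint analysis looks for; the function names and the toy validation check are invented for illustration and are not drawn from any particular analysis tool.

```python
import sqlite3


def lookup_user(conn: sqlite3.Connection, username: str):
    # 'username' arrives from an untrusted source (e.g. an HTTP request),
    # so a taint analyzer marks it as tainted at this point.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    # The tainted value reaches a sensitive sink (SQL execution) unchanged,
    # so the analyzer reports a potential SQL injection vulnerability here.
    return conn.execute(query).fetchall()


def lookup_user_checked(conn: sqlite3.Connection, username: str):
    # The value is validated before use, but an analyzer that does not model
    # this check may still flag the flow below -- one common source of the
    # false positives mentioned above.
    if not username.isalnum():
        raise ValueError("invalid username")
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    print(lookup_user_checked(conn, "alice"))
```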
Written by hackers for hackers, this hands-on book teaches penetration testers how to identify vulnerabilities in apps that use GraphQL, a data query and manipulation language for APIs adopted by major companies like Facebook and GitHub. Web applications are increasingly using the query language GraphQL to share data, but the security of these useful APIs is lagging behind. Authored by the developers of widely used GraphQL security-testing tools, Black Hat GraphQL will teach you how to find and exploit flaws in this technology. Early chapters provide in-depth knowledge of GraphQL and its query language, as well as its potential security pitfalls. Readers will then be guided through setting up a hacking lab for targeting GraphQL applications using specialized GraphQL security tools. They will learn how to conduct offensive security tests against production GraphQL systems by gleaning information from GraphQL implementations during reconnaissance and probing them for vulnerabilities, like injections, information disclosure, and Denial of Service.
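As a rough illustration of the reconnaissance step described in the blurb, the Python sketch below sends GraphQL's standard introspection query to an endpoint and lists the type names it exposes. The endpoint URL is hypothetical, and probing of this kind should only be done against systems you are authorized to test.

```python
import requests  # assumes the 'requests' package is installed

# GraphQL's built-in introspection query, often left enabled in production.
INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    types { name kind }
  }
}
"""


def list_schema_types(endpoint: str) -> list:
    # GraphQL APIs conventionally accept a POST with a JSON body {"query": ...}.
    response = requests.post(endpoint, json={"query": INTROSPECTION_QUERY}, timeout=10)
    response.raise_for_status()
    schema = response.json()["data"]["__schema"]
    # The exposed type names hint at the objects and operations available,
    # which guides later tests for injection or information disclosure.
    return [t["name"] for t in schema["types"]]


if __name__ == "__main__":
    print(list_schema_types("https://example.com/graphql"))  # hypothetical endpoint
```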
https://foxgreat.com/black-hat-graphql-attacking-next-generation-apis/
By Mark Donahue Community service is nothing new to Maura Waldron, MSN. “In my family it was always expected that you would volunteer your time and your efforts to give back to the community,” says Waldron, a nurse in Rush University Medical Center’s general medicine unit. While earning her master’s degree from the Rush University College of Nursing’s Generalist Entry Master’s program, which she completed in 2015, Waldron volunteered at Richard T. Crane Medical Preparatory High School, just a few blocks from Rush University Medical Center. She and her fellow students tapped Crane students to become “peer health ambassadors,” who learned more about the benefits of healthy eating and exercise in weekly meetings to spread the word to their peers. Other schoolwide health events included a 5K run, a food drive and a seminar about healthy eating. Waldron is but one of the thousands of Rush students, resident physicians and faculty who volunteer each year in the area surrounding Rush’s Near West Side campus. They are all part of the Rush Community Service Initiatives Program (RCSIP), which is celebrating its 25th anniversary this month. From its beginnings in community health clinics, RCSIP has blossomed into a full-service volunteer effort offering myriad ways to help local residents. Homeless individuals get physicals at RCSIP clinics. Children with HIV go on trips to museums. Like Waldron, many Rush students mentor high school students interested in health care careers — and these are just a few of the programs. “This really is who we are at Rush,” says Sharon Gates, MA, senior director of community engagement at Rush. “Our students want to know, ‘How can I make an impact on the community with the talent I possess?’” Rush University students are required to complete a certain number of service hours to graduate, depending on their program, with the exception of the Graduate College. But volunteer work is more than a requirement. In a survey completed in 2014, 66 percent of all Rush students surveyed said knowledge of community engagement opportunities influenced their decision to study there, with that tally rising to 82 percent for Rush Medical College students. There’s more proof in the numbers. In 2015, more than 2,300 students volunteered more than 9,000 hours supervised by Rush faculty, clinicians and staff. Waldron believes community service opportunities give Rush students a chance to not only make an impact on the lives of others, but also to sharpen their focus as they consider future careers. “I think that going into different areas and having experiences with different peoples in different settings allows students to realize what we are actually interested in,” she says. In the past few years, RCSIP improved the process for helping bring new service ideas to life. Rush is making smarter decisions to create lasting community efforts and to build on its reputation for service. For example, the Crane health ambassadors club was developed through an in-house funding grant program that helps launch student-led service projects at established RCSIP sites. The process for applying for these grants has been greatly improved, leading to more focused projects, said Gates. Proposals must now align with one or more priorities in Rush’s Community Health Needs Assessment and include at least one of these areas of focus: accessible patient care, education resources for disease management and prevention, health science career education, community-based research, and community partnership building. 
Projects must have clear goals, a faculty mentor and must account for future student participation, as well. Proposed student service projects must also cut across the colleges of the University, Gates says. At Crane, Waldron and others from the College of Nursing joined students from two programs in the College of Health Sciences to launch the health ambassadors club. While RCSIP remains the official Rush vehicle for community service, students may also work through outside channels, whether in their own community or through opportunities such as the Schweitzer Fellowship. For Waldron and students like her, community engagement is becoming one of the pillars their health care future will rest on.
https://www.rushu.rush.edu/news/%E2%80%98how-can-i-make-impact%E2%80%99
Pierneef (1886 - 1957) was born in Pretoria, South Africa, to immigrant Dutch parents. He had wanted to study architecture, and the characteristic of 'structure' is a strong theme in all his work. He treated the trees, mountains and rock formations as elements to structure the composition. Simplification of subjects emphasized their basic structures. Clouds and trees were of special interest to him and reflected the theme and the underlying symbolism of his painting. Not only did specific trees have specific symbolic meanings to Pierneef, but, as they were also characteristic of particular geographic areas, he used them to describe the character and atmosphere of that area.
https://12bythedozen.weebly.com/pierneef.html
Apply using the form below or create an account to speed up future applications. Already registered? PROGRAM MANAGER Healthcare Non-Profit Organization 310 days | Richmond, Virginia, United States | BrightSpring Health Services Start date: March 24 Apply before: April 24, 2021 Industry: Healthcare, Non-Profit Organization Type: Full-time Seniority level: 1-2 years Job Description Our Company Overview Operations Management focuses on efficiently meeting the needs of our clients across various lines of business. If your passion is managing and developing staff to ensure quality care to help our clients live their best life, we encourage you to apply today! Responsibilities - Direct, manage and administer the coordination and service delivery components of programs that service individuals/consumers in a group home setting - Supervise the delivery of service and ensure individuals and consumers are receiving the highest quality care. - Routinely observe service delivery on-site and monitor for demonstration of knowledge of individual/consumer health needs, behavior management techniques, and emergency procedures - Serve as a liaison between the community and agencies in the service delivery system, family/guardians and the agency - Ensure all homes under supervision are environmentally maintained, in strict compliance with all state/federal guidelines and licensure requirements and company policy at all times - Coordinate investigations of serious incidents and alleged abuse allegations, including appropriate reports to required agencies - Serve as on-call support for group homes - Ensure payroll and billing are completed accurately and on time - Oversee the Accounts Payable for group homes - Monitor and assure compliance with monthly, quarterly, and yearly financial goals to ensure services are provided as indicated in each person’s Individual Program Plan - Coordinate Incident/Accident Review process, monitor for patterns, and provide feedback for action necessary to prevent incidents in future - Serve as member of agency management team - Hire, train, evaluate, and monitor any other assigned personnel; complete timely Performance Reviews to ensure employees are productive, accountable, and successful in their positions - Conduct and/or monitor training for all staff in least restrictive techniques, behavior management, active treatment, client rights, prevention of abuse/neglect, documentation/data collection, emergency procedures, and other areas as needed - Other duties as assigned Qualifications - Degree in Nursing or Human Services field or related field and minimum of two years working directly with Elderly and Developmentally Disabled required, or equivalent combination of education and experience - Two years supervisory experience preferred About Our Line Of Business StepStone Family and Youth Services provides the full spectrum of support to children in need of alternative, safer and more positive living environments with residential and family services. StepStone connects children and youth who need homes with foster families, as well as foster care training, respite care and support services. For young adults transitioning from foster care to independent living, StepStone provides personalized guidance and training on basic life skills, including money management, life skills and education. For more information visit www.stepstoneyouth.com.
https://www.projectmanagerjobs.com/fr/job/offre/en-brightspring-health-services-/enprogram-manager-richmond-virginia-united-states-59528
About this Issue The United States has just concluded a presidential election. As we anticipate the transition of power, it may be worth asking whether the specific form that our democracy takes is truly representative. Might we do better? And if so, how? For years, third-party activists and many others have promoted ranked choice voting as a way to more clearly and fully express what the electorate wants. In 2016, the voters of Maine adopted ranked choice voting statewide, an event that has prompted us to take a closer look at the strengths and weaknesses of ranked choice voting as compared to our current system, which in most cases is a simple plurality vote system. Writing the lead essay this month we have the executive director of FairVote, Rob Richie, who worked actively for Maine’s initiative and on many other RCV measures. Joining him will be Professor Jason Sorens of SUNY Buffalo; libertarian political activist Thomas L. Knapp; and Professor Jason McDaniel of San Francisco State University. Lead Essay Hacking America’s Antiquated Elections American democracy today is working more poorly than it has in generations. Even as the toxic 2016 presidential campaign featured the two most unpopular major party candidates in modern history and Congress has historic lows in approval, minor party presidential challengers were marginalized, and nearly 98% of congressional incumbents won re-election. New voices are demeaned as spoilers, which suppresses debate about innovative ideas and shoehorns our diverse political views into two fiercely partisan camps. With the overwhelming majority of elections predictably going to a district or state’s partisan majority, most voters lack meaningful choice even among two candidates. In conflict with the spirit of the Constitution, our electoral rules punish representatives who seek to govern outside their party boxes, blocking sensible changes that have majority support. Absent reform, it is a near certainty that these problems will continue. No single change can unlock voters and spark a democracy where the best ideas rise to the surface and policymakers are able to implement the will of the people with respect for all. But this year we saw a true glimmer of hope for change: with 52% of the vote, Maine voters adopted ranked choice voting (RCV) for all their elections for governor, U.S. Senate, U.S. House, and state legislature in a campaign endorsed by the Libertarian Party, the Green Party, and hundreds of major party elected officials from across the spectrum. Starting in 2018, Mainers will be able to vote for the candidates they like the most without helping elect the candidates they like the least. They will earn what we all deserve: a fair vote and a truce in the battle over whether minor party and independent candidates can have an enduring seat at the electoral table. Ranked choice voting (sometimes called “instant runoff voting” and “preferential voting”) is a proven voting method designed to accommodate having more than two choices in our elections. When used to elect one candidate, RCV essentially simulates the math of traditional majority runoffs, but in one trip to the polls. Voters have the freedom to rank candidates in order of choice: first, second, third, and so on. Their vote is initially counted for their first choice. If a candidate wins more than half the votes, that candidate wins, just like in any other election. If no candidate has more than half the votes, then the candidate with the fewest votes is eliminated. 
The votes of those who selected the defeated candidate as a first choice are then added to the totals of their next choice. This process continues until the number of candidates is reduced to two or the winner earns more than half of the active votes. RCV upholds majority rule while accommodating increased voter choice. It creates incentives for winning candidates to reach out to all voters in order to get a higher ranking and allows a voter to consider more choices with a greatly reduced likelihood of “splitting” their vote in a manner that might otherwise result in an unrepresentative outcome. Based on the context of its use, RCV can mitigate partisan inflexibility, foster greater accountability for incumbents, increase civic engagement, and reduce the impact of campaign spending. When used in multi-winner elections, RCV becomes a candidate-based form of proportional representation that expands the percentage of people who elect preferred candidates, increases competition, and provides a natural means to elect more diverse legislatures that include accurate representation of the left, right, and center, as well as representatives who break free from the two-party box. Maine’s victory was grounded in grassroots energy, effective organizing, and a well-run campaign. RCV had been debated in the legislature for years and been widely hailed as a success in mayoral elections in the state’s largest city of Portland. In the midst of yet another campaign for governor where the winner received less than half the votes – as has been the case in all but two gubernatorial elections since 1974 – reformers seized a chance to launch an initiative campaign. With barely a week to organize, Election Day volunteers collected more than half the signatures required to put it on the 2016 ballot. The Committee for Ranked Choice Voting and its allies, like the League of Women Voters of Maine and FairVote Maine, launched a two-year campaign of education and advocacy that resulted in more than 300 published letters to the editor, more than 175,000 one-on-one conversations about RCV with Mainers, nearly 3,000 donations from Mainers, and community presentations across the state. A surge of funding allowed for television and digital media that helped push the measure over the top despite being a new idea to most voters. RCV also won in a local campaign in Benton County, Oregon. These wins and more than a dozen other victories for RCV in cities since 2000 demonstrate that RCV is politically viable and impactful in practice. Cities using RCV for mayor and other local offices include Minneapolis (MN), St. Paul (MN), Oakland (CA), San Francisco (CA), San Leandro (CA), Takoma Park (MD), Telluride (CO), and Portland (ME), while Cambridge (MA) has used RCV to elect its city council and school board for decades. Cities awaiting implementation after voter approval include Memphis (TN), Santa Fe (NM), and Sarasota (FL). Internationally, RCV has been used for years to elect Ireland’s president, Australia’s House of Representatives, and the mayors of London (UK) and Wellington (New Zealand). With recommendations by procedural guides like Robert’s Rules of Order, RCV is widely used in nongovernmental organization elections, ranging from major private associations like the American Chemical Society and American Psychiatric Association to nearly every major party in Australia, Canada, Scotland, and the United Kingdom, as well as Republican and Democratic parties in Iowa, Maine, Texas, Utah, and Virginia. 
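To make the counting procedure described above concrete, here is a minimal Python sketch of a single-winner ranked choice tally; the ballots are invented, and real election rules add provisions for ties, exhausted ballots, and batch eliminations that are omitted here.

```python
from collections import Counter


def instant_runoff(ballots):
    """Each ballot is a list of candidate names in order of preference."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked candidate still in the race.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        active = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        # A candidate with more than half of the active votes wins outright.
        if leader_votes * 2 > active or len(remaining) <= 2:
            return leader
        # Otherwise eliminate the last-placed candidate and recount.
        remaining.discard(min(tallies, key=tallies.get))


# Nine invented ballots: the first-choice leader does not end up winning.
ballots = (
    [["Nader", "Gore"]] * 2 +
    [["Gore", "Nader"]] * 3 +
    [["Bush"]] * 4
)
print(instant_runoff(ballots))  # Bush leads on first choices; Gore wins on transfers
```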
Young people have adopted RCV for their student elections at some 60 American colleges and universities and are the most likely to support it on the ballot. RCV’s track record in those elections is impressive. Although still a winner-take-all system that isn’t designed to elect those with minority views, RCV gives everyone a fair shot to run. Australia typically has more than six candidates per house race, and the strongest minor parties run in every district without any finger-pointing or talk of spoilers. Instead, they can make their case, see the best of their ideas adopted by the major parties, and grow their vote such that these parties are now winning fair shares of seats in senate elections held with the multi-winner proportional representation form of RCV. In city elections in the United States, there has been a string of open seat elections where the best-financed favorites run traditional campaigns focused on their base and lose to enterprising challengers who engage directly with more voters in grassroots campaigns designed to earn not only first choice support, but second and third choice support from backers of other challengers. The pattern seems to be that the best-financed candidates rely on traditional techniques of identifying their stronger supporters, getting them to vote, and going more negative on other candidates – and the best challengers can win by putting more effort into direct voter contact regardless of first choice support. Extensive data analysis from more than 125 RCV elections in the Bay Area shows that (1) every single winner has been the “Condorcet” candidate, or the one who would defeat all others in simulated head-to-head contests, even though several winners trailed in first choices and one winner initially was in third; (2) voters regularly rank more than one candidate, including close to nine in ten voters in competitive mayoral elections; (3) fewer voters now skip city elections when at the polls for president and governor; (4) voter turnout in decisive elections has on average risen sharply from prior systems with primaries and runoffs; and (5) more than 99% of voters cast valid ballots, which is often higher than their valid ballot rate in other races with large candidate fields. RCV’s promise and track record have helped earn notable support. American political leaders backing RCV include President Barack Obama (prime sponsor of RCV legislation as an Illinois state senator), Sen. John McCain (recorded a robo call in support of a ballot measure to implement RCV), former Vermont governor Howard Dean (author of several pro-RCV op-eds, including in the New York Times this fall), former Republican Congressman John Porter (author of a piece in a Brookings Institution report on policy proposals), Sen. Bernie Sanders (who testified on its behalf to the Vermont state legislature in 2007 on a bill that passed the legislature) and this year’s presidential nominees for the Libertarian Party (Gary Johnson) and Green Party (Jill Stein). Ways to Expand Use of Ranked Choice Voting Ranked choice voting is imperfect, but perfection is literally impossible – and advocates of other, untested systems should be cautious about overstating their potential absent experience. But RCV is viable, legal, and successfully tested as a flexible tool for addressing problems in our elections. Once it becomes easy for all jurisdictions to use, as is likely within the next four years, both legislators and populist reformers will find RCV to be valuable. 
With each new advance, voters’ conceptions of what it means to vote will change from marking an “X” to ranking choices. The RCV ballot has drawn support in several different contexts, including the following. Replacing plurality voting: The great majority of American elections are held with plurality voting, where candidates with the most votes win, even if they do so with less than half the votes. As Maine showed, voters are ready to support RCV when they are frustrated by elections that mean either having to vote for the lesser of two evils, or else for unrepresentative winners. Replacing runoff elections: Holding a separate runoff between the top two finishers is a means to eliminate “spoilers.” But runoffs have downsides. The strongest candidates may not reach the runoff due to split votes. Runoffs exacerbate demands for campaign contributions and often have disparate voter turnout between elections. More than 96% of the nearly 200 regularly scheduled congressional primary runoffs since 1994 experienced declines in turnout, with an average turnout decline of more than 30% – a far steeper decline than the number of voters who don’t rank finalists in RCV races. Finally, runoffs increase election costs and burdens on voters, making them an easy target for budget-cutting policymakers. These problems explain why more than a dozen cities have voted to replace runoffs with RCV. Replacing problematic means of nominating candidates: Traditionally, parties used conventions to choose nominees, which ensured nominees were accountable only to the parties’ most active members. But the main alternative, the primary system, has unrepresentative turnout, with steadily declining percentages of Americans registering with a major party. RCV can help solve problems associated with nominating candidates. RCV could be built into the major party presidential candidate nominating processes, starting with party-run caucuses, and RCV could be used more generally to ensure nominees for all offices earn greater support. More dramatically, states could stop paying for primaries entirely and use RCV to accommodate voters having more general election choices among independents and party nominees. One form of RCV is drawing particular attention: modifying the Top Two primary to advance four candidates, with RCV to be used in November. As used in California and Washington, Top Two establishes that all candidates seeking an office run in the same primary contest, and the top two finishers face off in November regardless of party. FairVote’s analysis of California’s 2012 congressional elections found that advancing four candidates to an RCV contest in November would nearly triple the number of general election races with third party or independent candidates and more than quintuple the number of general elections with more than one candidate from the same major party. Opening up legislative elections to better choice and fairer representation: The combination of winner-take-all rules and rising partisanship has led to a sharply rising percentage of districts in which only one party has any real prospect of winning, and more legislatures where one party has a lock likely to last for generations. It has entrenched incumbents, depressed participation, promoted unrepresentative homogeneity within parties, and created barriers for women, racial minorities, and minor parties to win more seats. 
Redistricting alone has limited impact on these problems, as suggested by distorted partisan outcomes in California and not a single congressional seat changing hands in 2016. Truly unlocking democracy depends on adopting RCV in multi-winner elections. The first step is to have larger districts with more voters and more seats; for example, one might combine five adjoining districts into a larger district with five representatives. These would be chosen by RCV, with the percentage of the vote necessary to win declining in relation to the number of seats in the district – about 17% of like-minded voters being able to elect a candidate in a five-winner district. Multi-winner RCV is used in at least one governmental election by every voter in Australia, Ireland, Malta, New Zealand, Northern Ireland, Scotland, Minneapolis (MN), and Cambridge (MA). FairVote’s congressional election simulations show that not a single voter in a state with more than two representatives would be represented by only one party. Congress would have a far broader mix of perspectives. New opportunities would arise for independents and third parties to hold the major parties accountable, and more cross-cutting representatives would be likely to forge compromises. Expect to see the Fair Representation Act based on this form of RCV introduced in Congress next year, and for more cities and states to consider it. Looking forward, American politics is at a tipping point. Our current system simply isn’t working, and all trends suggest it will keep getting worse. Maine shows that voters are ready for change, and reformers are planning city and state campaigns for RCV across the nation. Now is the time to think big – and rank the vote. Response Essays Approval Voting: Works Great, Less Complicated Nearly a quarter of a millennium into the American experiment in constitutional representative democracy, I am far more surprised the system has survived for so long than that it suffers from numerous problems. I have lived through less than a quarter of the history of U.S. elections, and heck, I’m falling apart already. A basket of methods intended to serve a population of not much more than 3 million now serves a population of more than 300 million. Yes, it has evolved. In 227 years we have gone from suffrage only for white male landowners to “universal” (okay, not really - ask a convicted felon) adult suffrage. We have gone from hand-written or party-printed ballots to a government-printed “Australian” ballot with candidate access controlled and limited in each state by the ruling parties. U.S. Senators, once chosen by state legislatures, are now chosen by popular vote. And so on. Replacing plurality voting with Ranked Choice Voting (RCV) - as lead essayist Rob Richie recommends - or with some other new voting system, strikes me as more in the nature of a knee replacement than a heart transplant, but hey, why not? I’m a market anarchist myself, but if we are going to take democracy seriously, I suppose there is something to be said for continuously tweaking technical voting methods to most accurately reflect real voter preferences. Along the way we may still do well to recall H. L. Mencken’s conception of democracy as “the theory that the common people know what they want, and deserve to get it good and hard.” Democracy by my lights is never going to be perfect. Rob makes two especially strong arguments for RCV: - It provides for majority rather than plurality winners. 
Even if there is nothing magical about majority versus plurality vis a vis “the popular will,” there is a longstanding and visceral appeal to majority rule. - It eliminates the need for separate “runoff” elections when a majority is required and no candidate receives one. So far, so good. But I cannot approve of the complexity of RCV versus my own preferred voting alternative, Approval Voting. Those who paid attention to the Florida presidential vote recount in 2000 probably recall quite a bit of controversy over the use of “butterfly ballots.” Many voters claimed to find those ballots confusing and difficult to understand. Some even claimed that in retrospect they had probably voted for Pat Buchanan when they meant to vote for Al Gore. No, I’m not saying that voters are stupid, although some undoubtedly are. I am saying that those of us with an ongoing, even obsessive, interest in politics tend to forget that most American voters don’t spend a lot of time thinking about the technical aspects of casting a ballot. They show up every two or four years to spend an hour standing in line and five minutes in a booth filling out an unfamiliar form. The more complicated that unfamiliar form becomes, the greater the likelihood of error. Some voters seem to have a difficult time looking at lists of candidates, picking one for each office to be elected, and accurately recording their choices. We can hardly expect those voters to do a very good job of looking at lists of candidates and ranking all those candidates on all those lists in order of preference. If they even bother to try, the ratio of noise to signal in results will presumably rise. Some voters will blithely choose one candidate, unaware that they should rank their choices. Others will get the numbering system or other measuring mechanism backward when ranking their choices and end up weighting the power of their votes in favor of the candidates they like least instead of most. Human sacrifice, dogs and cats living together… mass hysteria! Well, okay, maybe not that last part. But if the goal is to align outcomes with real voter preferences, there are real potential defects in RCV as the method of doing so. Approval Voting is simpler. Here is how it works: The voter votes for all the candidates he or she likes. And that’s it! Let’s consider an example. Donald Trump, Hillary Clinton, Gary Johnson, and Jill Stein all appear on the ballot for president. The voter is okay with Clinton or Stein, but not with Trump or Johnson, so the voter votes for Clinton and Stein. Or the voter is okay with Trump or Clinton, but not Johnson or Stein, so the voter votes for Trump and Clinton. Or the voter is only okay with Johnson, so the voter votes only for Johnson. The candidate who receives the most votes wins. Easy peasy. One variant of Approval Voting requires that the winning candidate, in addition to receiving the most votes, must receive votes from a majority of voters. Are there problems with Approval Voting? Sure. In the majority requirement variant, a runoff might be required. RCV avoids that. Approval Voting can also be gamed. If I prefer Johnson but am okay with Stein, I might vote only for Johnson because I would rather have Johnson and don’t want to put any gas in Stein’s tank. And Johnson’s campaign might encourage his “base” voters to do exactly that. But of course RCV can be gamed in the same way. A voter doesn’t have to rank all the available choices. 
In the above example she could rank Johnson first and rank no other candidates, so that only her vote for Johnson counts no matter the circumstances. Simpler is better. If the purpose of a voting method is to align outcomes with voter preferences, simplicity matters, because complexity produces noise that interferes with our ability to understand those preferences. Is simplicity the only consideration? Of course not. But it’s an important one, and in my view Ranked Choice Voting’s virtues are not sufficient to offset its complexity versus Approval Voting. A final thought: Maybe voting method isn’t really as earth-shakingly important as those of us who spend time thinking about it want it to be. In his essay, Rob points out that “[e]xtensive data analysis from more than 125 RCV elections in the Bay Area show that … every single winner has been the ‘Condorcet’ candidate who would defeat all others head-to-head…” But if something was broken in Bay Area election outcomes, RCV didn’t obviously fix it, and it may have been that nothing was broken at all. To be fair, I’m not sure Approval Voting would have fixed anything here either. Perhaps we’re looking in the wrong place for better outcomes. The False Promise of Instant Runoff Voting By ballot initiative, Maine has just adopted instant runoff voting (IRV) for state elections. IRV is also used in some local elections around the country. In principle, IRV has some desirable properties compared to the status quo electoral system in the United States (single-member-district plurality), but once we look at the actual political context in which it would be implemented in the United States, it may well make things worse for third parties, especially Libertarians. Moreover, there are alternative electoral systems that are superior both in the abstract and in the concrete to both the status quo and IRV. Advantages of IRV Rob Richie’s essay did a good job of laying out the potential benefits of IRV. Its single biggest advantage may be how it addresses the wasted-vote problem. If you like a third-party candidate best, you may be able to safely rank that candidate first on your IRV ballot, knowing that if and when that candidate is eliminated, your second-choice preference will count. For instance, if you liked Ralph Nader but preferred Al Gore to George W. Bush in 2000, under IRV you could have ranked Nader first and Gore second, and your vote would have counted for Gore once Nader was eliminated from the counting process. If every state had used IRV to allocate electoral votes in 2000, Al Gore would probably have won the presidency. By reducing the wasted-vote problem, IRV also reduces major-party gamesmanship. In last month’s U.S. Senate election in my state, New Hampshire, there was a “conservatarian” independent candidate, Aaron Day, whose mission was to defeat Republican incumbent Kelly Ayotte by drawing away conservative and libertarian votes. He spent no money on his race, but in the closing days, a shadowy, Democratic-linked group sent a glossy mailer to Republican households touting Day’s conservative record and bashing Ayotte as too liberal. Ayotte lost by about a tenth of the votes Day won, and so he quite plausibly cost her reelection. In 2010, Democrats ran “Tea Party” independents in congressional races, and Republicans have also been caught out funding Green candidates in the past (William Poundstone’s book Gaming the Vote describes many such shenanigans). 
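To make these counting rules concrete, here is a minimal, self-contained Python sketch that tallies one small, invented ballot profile under plurality, instant-runoff, and approval counting. The candidate names and ballot counts below are illustrative assumptions loosely echoing the 2000 example above, not figures drawn from these essays, and the instant-runoff routine is deliberately simplified (ties are broken arbitrarily).

```python
from collections import Counter

# Invented ballot profile (counts are illustrative, not real election data).
# Each entry: (number of voters, ranking from best to worst, candidates they "approve" of).
ballots = [
    (46, ["Bush", "Gore", "Nader"], {"Bush"}),
    (44, ["Gore", "Bush", "Nader"], {"Gore"}),
    (10, ["Nader", "Gore", "Bush"], {"Nader", "Gore"}),
]

def plurality(ballots):
    """Each ballot counts once, for its first choice only."""
    tally = Counter()
    for count, ranking, _ in ballots:
        tally[ranking[0]] += count
    return dict(tally)

def instant_runoff(ballots):
    """Repeatedly eliminate the last-place candidate, transferring each ballot
    to its next surviving choice, until someone holds a majority."""
    remaining = {name for _, ranking, _ in ballots for name in ranking}
    while True:
        tally = Counter({name: 0 for name in remaining})
        for count, ranking, _ in ballots:
            for choice in ranking:
                if choice in remaining:
                    tally[choice] += count
                    break
        leader, leader_votes = tally.most_common(1)[0]
        if leader_votes * 2 > sum(tally.values()) or len(remaining) == 1:
            return leader, dict(tally)
        remaining.discard(min(tally, key=tally.get))  # drop last place and recount

def approval(ballots):
    """Each ballot counts once for every candidate it approves of."""
    tally = Counter()
    for count, _, approved in ballots:
        for name in approved:
            tally[name] += count
    return dict(tally)

print("Plurality:", plurality(ballots))            # Bush leads on a split vote
print("Instant runoff:", instant_runoff(ballots))  # Nader is dropped; Gore wins 54-46
print("Approval:", approval(ballots))              # Gore 54, Bush 46, Nader 10
```

With these made-up numbers the sketch simply shows the mechanics: the split vote gives Bush a plurality win, while both instant runoff and approval end up counting the Nader ballots toward Gore.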
When used in computer-simulated elections with voters having randomly assigned preferences, IRV outperforms plurality voting by the metric of interpersonally comparable utility (“Bayesian regret” is the technical term). In other words, voters are in aggregate happier with IRV than plurality, at least when they vote sincerely rather than tactically. Disadvantages of IRV in the Abstract Even in the abstract, though, IRV has some disadvantages. It doesn’t actually eliminate the wasted-vote problem, and it may reduce it only a little bit. Think about a case with a strong third-party candidate, like Ross Perot in 1992. Suppose 35% of voters prefer Bill Clinton to George H.W. Bush to Perot, 31% prefer Bush to Clinton to Perot, and 34% prefer Perot to Bush to Clinton. If everyone votes sincerely under IRV, Clinton wins after Bush is eliminated in the first round – even though 65% of voters prefer Bush to Clinton. But it gets worse. If just a small number of Perot preferrers (>3%) put Bush first and Perot second, then Perot would be eliminated first, and Bush – their second choice – would win. They’ll have a strategic incentive to falsify their preferences. Now, every electoral system is subject to tactical voting like this. But IRV makes it easy and obvious how to vote tactically. In general, you “up-vote” your lesser-evil candidate and “bury” your lesser-evil candidate’s most viable opponent. This is just what voters do under plurality, voting tactically for the lesser evil instead of their preferred third-party candidate. As the Bush-Clinton-Perot example above shows, IRV can also fail to select the Condorcet winner, the candidate that a majority of voters prefer to each other candidate. In fact, this happened in Burlington’s mayoral election in 2009, causing the city to end IRV for mayoral elections when the Progressive won over the Republican in the final round of IRV counting, even though the Democrat, eliminated earlier, was actually the Condorcet winner. There are several other “in the abstract” disadvantages to IRV. IRV has a more complicated ballot, potentially confusing some voters, and a much more complicated counting process than plurality or approval voting. IRV requires counting at a centralized location rather than by precinct. IRV is also subject to a technical-sounding but important problem called “non-monotonicity,” which means that you can help your preferred candidate by ranking her lower. Finally, the Bayesian regret criterion suggests IRV is worse than other ranked-ballot alternatives like Condorcet-consistent methods and Borda count, and much worse than approval and score voting. Disadvantages of IRV in the Real World Getting away from the blackboard for a moment, we also need to think about how IRV would actually work in American elections. It’s no accident that IRV is almost universally a project of the ideological left here in the United States. Vermont started using it in local elections once the Progressive Party became a threat to the Democrats. Maine adopted it after two consecutive elections in which a Republican governor was elected because the left split its vote between a Democrat and a left-of-center independent. (Maine also has a strong Green Party.) Republicans face little third-party threat from their right flank (the Constitution Party is extremely weak), but Democrats do face such a threat in certain places. IRV helps them overcome that threat. IRV actually neuters third parties, especially those with a strong ideological orientation. 
Third parties may get higher shares of first-preference votes under IRV, but it is still almost impossible for them to win seats, and they lose all the “blackmail power” that they enjoy under plurality. Currently, strategic third parties can choose to run candidates in races where they want to punish one of the major-party candidates (as Day did to Ayotte) and refrain otherwise. This possibility gives major parties an incentive to cater a bit to ideological minorities. Is this blackmail power a good thing or a bad thing? It depends on one’s perspective, I suppose, but one way to defend it is to note that democracy’s institution of majority rule threatens to trample the rights and interests of passionate minorities. Third-party blackmail power under plurality rule gives passionate minorities some leverage; it is a way of incorporating intensity of preference into our otherwise majoritarian political system. For libertarians, the Libertarian Party’s potential blackmail power is a valuable thing. We libertarians expect Democrats at least to be decent on civil liberties and Republicans at least to be decent on economic freedom. When they stray to the authoritarian side of the spectrum, the Libertarian Party can run a candidate to punish them by campaigning on those issues, drawing away conservatives upset by a Republican’s apostasy on economic freedom, for instance. The potential for this sanction should make the major parties govern in a more libertarian fashion than they would otherwise. If this is right, the adoption of IRV would result in less freedom. Better than IRV: Approval Voting Approval voting (AV) is a simple system that lets voters select more than one candidate in an election. The candidate with the most votes wins. There is no need for new ballots or new counting equipment, it doesn’t exhibit non-monotonicity (you can’t hurt a candidate by voting for her), it scores much better than IRV and plurality on Bayesian regret in experimental simulations, and although it is gameable like every electoral system, it is actually difficult and non-obvious how to vote tactically (you need good information about other voters’ preferences and to be able to calculate expected utilities). For libertarians, AV is especially attractive. The Libertarian Party tends to be perceived as a party of the ideological center in the United States. For instance, Gary Johnson voters were about equally split between Trump and Clinton preferences. Moreover, the Libertarian candidate frequently garners double-digit percentages when there is only one major-party candidate on the ballot. Thus there are many Democrats willing to cast Libertarian votes when there is no Democrat on the ballot for a race, and many Republicans willing to do the same when there is no Republican. Under approval voting, then, these voters might well cast votes for both the Libertarian and their preferred major-party candidate in three-way races. Libertarians might actually have a chance of winning some elections. 
Consider an election like the following:

| Percentage of voters | Ranking | Approved-of |
| --- | --- | --- |
| 30% | Trump>Johnson>Clinton>Stein | Trump, Johnson |
| 5% | Trump>Johnson>Clinton>Stein | Trump |
| 7% | Johnson>Trump>Clinton>Stein | Johnson |
| 3% | Johnson>Trump>Clinton>Stein | Johnson, Trump |
| 7% | Johnson>Clinton>Stein>Trump | Johnson |
| 3% | Johnson>Clinton>Stein>Trump | Johnson, Clinton |
| 5% | Stein>Clinton>Johnson>Trump | Stein, Clinton |
| 30% | Clinton>Stein>Johnson>Trump | Clinton, Stein, Johnson |
| 10% | Clinton>Stein>Johnson>Trump | Clinton |

Under plurality with sincere voting, Clinton wins narrowly, as she actually did. Under IRV, Stein is eliminated right away, and then Johnson. Clinton wins the election handily. Under approval voting, Johnson wins in a landslide. Now, I am not actually claiming that Gary Johnson would have won the 2016 presidential election if it were held under approval voting, although it is possible given how unpopular the other candidates were, but rather I am making the point that approval voting helps candidates who can manage to be everyone's second choice. Under IRV, everyone's second choice is eliminated right away and has no chance. AV tends to pick consensus candidates, and a high-quality, centrist Libertarian could very easily manage to be just that candidate in the United States. Now, of course we have to consider how the other parties would react to approval voting. Chances are they would run to the center and try to snag approval votes from libertarians. They would try not to alienate significant minorities with inflammatory language and discriminatory policies, which would motivate "disapproval votes" (ballots marked for every other candidate). In fact, the major parties would probably keep winning most elections using this strategy – but the policy outcome doesn't sound so bad to me. Conclusion We know this for sure: had the Republican Party used approval voting in its primaries, it would never have nominated Donald Trump. Had it used IRV… who knows? In the real world United States, approval voting has massive advantages for libertarians, moderates, and the total satisfaction of voters that IRV lacks – and IRV might even be worse than the status quo because it neuters third parties, generally to the advantage of center-left Democrats. Using the term "ranked-choice voting" for IRV is not really correct, because there are many ranked-ballot systems of which IRV is just one. Ranked Choice Voting Likely Means Lower Turnout, More Errors The history of electoral reform in America is littered with people who eagerly made grand claims about how their preferred solution would cure what ailed American democracy. Usually this involves "liberating" the sanctified voters from the dastardly efforts of politicians and political parties to suppress the "true" preferences of the electorate. Too often, the fruit of these reform efforts has been the creation of electoral processes that reduce voter engagement and maintain the status quo. Rob Richie's arguments in favor of Ranked Choice Voting (RCV) echo many of the same arguments that reformers have been making for over a hundred years. Unfortunately, pro-RCV arguments are similarly based on faulty assumptions about voters, and too often they ignore or inappropriately minimize empirical research that highlights the potential for negative consequences. 
In evaluating electoral reforms, I am guided by a clear baseline principle: changes that expand access to the vote and encourage more political participation tend to be better than those that seek to restrict voting or make it more difficult. Based on the results of my research into the impacts of Ranked Choice Voting in city elections, this particular electoral reform fails on both counts. RCV makes voting more complicated, which leads to several negative consequences for the level and quality of voter participation in elections. Equality of Electoral Voice: Lower Turnout and More Ballot Errors Under RCV, overall voter turnout does not increase, and is likely to decrease significantly, especially among those segments of the electorate that are already least likely to participate. I examined voter turnout in five San Francisco elections from 1995 to 2011, the last two of which featured Ranked Choice Voting. My main topic of inquiry was whether variation in turnout across racial lines was related to the adoption of RCV. According to the results of my analysis of over 2,500 precincts across the five elections, turnout declines among African-American and white voters were significantly correlated with the adoption of RCV. Additionally, I found that the adoption of RCV exacerbated disparities in voter turnout between those who are already likely to vote and those who are not, including younger voters and those with lower levels of education. In additional research, I analyzed voter turnout in nonpartisan mayoral elections in RCV cities compared to similar elections in non-RCV cities. The results show that the impact of RCV on voter turnout depends upon whether elections occur during odd years or even years. In odd-year elections with Ranked Choice Voting, voter turnout decreases about eight percentage points, on average, compared to a non-RCV general election at the same time. In even-year elections, by contrast, RCV has little or no effect on voter turnout. Even-year elections usually coincide with congressional or presidential elections, and they generally have much higher voter turnout than those that occur in odd years. One explanation for this is that during odd-year elections it is very difficult to get low-propensity voters to the polls. By making voting more complicated, RCV exacerbates this tendency, making it less likely that new and more casual voters will enter into the process. Another major concern with respect to equality and integrity of the electoral process under Ranked Choice Voting is how it affects the tendency of voters to make errors when marking ballots. Research indicates that when voting is made more complicated, for example through ballot design or the presence of many candidate options, ballot errors increase. My colleague Francis Neely and I analyzed almost two million individual ballots in order to measure the incidence of errors that disqualify a ballot from being counted after the adoption of Ranked Choice Voting in San Francisco elections. We found that such errors were significantly more common in RCV elections than in plurality elections. The rate of errors was comparable to electoral situations that involve either very complex voting tasks or poorly designed ballots. To be clear, it is the additional complexity of voting under Ranked Choice Voting, not any other feature of RCV, that causes more ballot errors. Nonetheless, it is highly likely that implementation of RCV will result in higher rates of ballot errors that cause individual ballots to be disqualified. 
These ballot errors will be concentrated among those portions of the electorate who are already most vulnerable to being underrepresented. Polarization and Voter Confusion An underlying theme of Rob Richie’s argument is that Ranked Choice Voting will help to alleviate some of the problems related to partisan polarization. The idea is that RCV will encourage more moderate and/or third party candidates to run for elective office, and that voters will not be forced to choose the “lesser of two evils.” While this argument seems to make a certain amount of sense, it rests on the faulty assumption that voters are capable of consistently and accurately distinguishing the ideological and policy positions of candidates for whom they vote, absent clear partisan and ideological cues to guide them. Unfortunately, implementation of Ranked Choice Voting will most likely increase voter confusion. My research into racial group polarization in nonpartisan urban elections illustrates the problem. I found that there was a reduction in the polarized preference gap between different racial groups. However, this reduction in racial polarization was most likely caused by voter confusion about how candidates will represent their interests rather than any sincere expression of preferences previously suppressed by plurality systems. This result is consistent with analysis by Corey Cook showing that voters’ candidate preference rankings in RCV elections are highly inconsistent, lacking any well-ordered ideological or policy-based structure. Research into the Top Two primary system in California indicates that voters are not able to reliably identify candidates’ ideologies or policies absent strong partisan cues. Because of this, the new system has failed to achieve its supporters’ goals of increasing the chances of ideologically moderate candidates and reducing polarization. While there is some debate among political scientists about the extent and causes of partisan polarization, there is little doubt that voters are highly polarized from each other. Given the reality of polarized voters, there is little reason to believe that a change to how people vote will result in the election of politicians who are less polarized. Conclusion: First Do No Harm The results of my research provide reasons for skepticism about whether the benefits of RCV outweigh the potential costs. However, my research has been limited to nonpartisan city elections, and therefore the findings may not generalize to partisan state or federal elections in which candidates may be better known and voters may be more likely to participate. Additionally, other researchers have documented some positive aspects of RCV. First, voters who experience Ranked Choice Voting tend to express satisfaction with the process and confidence in their ability to understand it. Second, based on high-quality research by other political scientists, it is reasonable to expect that the level of negative campaigning may decrease under RCV. Despite these positive aspects, and given the unlikelihood that RCV will contribute to a reduction in partisan polarization, I believe that the potential for lower turnout and more ballot errors outweighs the potential benefits of Ranked Choice Voting. Rather than expand the electorate, reduce turnout inequalities, and ameliorate polarization, it is quite likely that the adoption of Ranked Choice Voting for state and federal elections around the country will further entrench the status quo. 
The Conversation An Incremental Win Between Rob Richie’s lead essay and the responses from Jason Sorens, Jason McDaniel and myself, the main takeaway I get from this discussion relates less to the virtues or defects of any particular voting method than to its ultimate purpose. All four of us seem to agree that the technical goal of an election is to reflect, in aggregate but as accurately as possible, the diverse preferences of voters. Single member districts with first past the post plurality winners don’t do that very well. Even the most lopsided outcome — many U.S. Representatives and U.S. Senators win election after election with 60% or even 80% of votes cast — tends to leave a significant percentage of the electorate unrepresented in its preferences. That’s not even counting those who abstained from voting for reasons other than the apathy usually ascribed to them. The glass gets filled for those who like the winner; it remains empty for everyone else. Ranked choice voting, approval voting, score voting, and the like all mitigate this problem to a degree. Even if the voter’s most preferred candidate doesn’t win, there’s a decent chance that his second or third choice might, and that his glass will come out half full at any rate. Even as an anarchist, I have to call that a win. But as an anarchist, I’m also compelled to interject a sentiment expressed by a distinctly non-anarchist historical figure, Abraham Lincoln: “No man is good enough to govern another man without the other’s consent.” The best way of giving expression to diverse preferences is to take as many matters and decisions as possible out of the hands of government altogether. Politics, including representative democracy, really amounts to some of us telling the rest of us what to do. Put that way, I assume most of us would like to see as little of politics as possible. In the absence of politically determined mandates and prohibitions, most of us should be able to have our own way most of the time. The exception that lends credibility to political methods is when having our own way constitutes aggression against others. Yes, that is a real problem, but I don’t consider it obvious that politics ever has been, is, or can ever be, an optimal solution to that problem. I’ve followed experiments in voting method closely for years, and will continue to do so with interest. I do genuinely hope that ranked choice voting makes voters in Maine happier with outcomes. I also hope it makes them more free. If not, well, at least it will add to the body of information we can draw on for purposes of improving (or, says anarchist me, justifying rejection of) American voting and election systems. Either way, as I said before, a win.
https://www.cato-unbound.org/print-issue/2164/
Understanding where people choose to live and the factors that influence housing location decisions could improve the type, scale, and timeliness of transportation improvements and policies. This project examines demographics, housing characteristics, education quality, and other factors in comparison to traffic congestion levels to gain insight into how congestion affects housing choice. The resulting assessment will enable policy makers to better understand the role of traditional capital or operational improvements and policy and planning decisions in addressing the state’s congestion problem. The assessment will provide insight to a comprehensive policy approach to address urban and suburban congestion while encouraging market-based decisions. Check back for a publication of this study to come out late Summer 2015.
https://policy.tti.tamu.edu/the-link-between-congestion-and-housing-choice/
Mary Ann combines a variety of exercises that improve function, including a new exercise that uses the fingers and breath to control abdominal muscles.
- program length: 30 minutes
- episode #1318

Wednesday, October 17 at 7:00 am on 12.1

additional airdates
- No additional airdates scheduled at this time

upcoming episodes
- Fit From Head to Toe
- Sensory Awareness and Breathing
- Fun with the Large Ball
- Footwork and Core
- Controlled Breathing
- Vestibular and Core
- Posture
- Breathing
- Functional Fitness
- Good Alignment
- New Challenges
- Coordination and Reaction Time
- Lymphatic System
- Mobility and Balance

past episodes
- Exercising With Accessories
- Somatosensory Work
- Releasing Muscle Tightness
- Myofacial Relief & Gait
- Keep The Fun In "Fun"Ctional Fitness
- Finger Dexterity
- Happy Feet, Healthy Body
- All Systems Go!

series description
This therapeutic tone-and-stretch exercise program, designed by Mary Ann Wilson, a registered nurse and professionally certified aerobics instructor, was developed for senior citizens, significantly overweight persons, and those who are wheelchair bound.
http://www.cpt12.org/schedule/program-details/?id=120181017070000
Official estimates for the number of new homes being built each year in Ireland are entirely unreliable and may be overestimating the level of supply by nearly 50 per cent, a leading housing expert has warned. Dublin architect Mel Reynolds claims the 15,000 figure for house completions last year, being used by the Department of Housing, bears no relationship to the number of new homes coming on the market. This is because official estimates are based on ESB meter connection data, which typically overcount the level of new builds, reflecting the fact that new electricity connections can be triggered by work to existing buildings or by formerly vacant units coming back on stream. “It’s a wholly unreliable way to calculate house completion levels,” Mr Reynolds said. Nama developments He also claimed the numbers were being inflated by recently completed but unoccupied National Asset Management Agency developments and by the finishing out of so-called ghost estates, which do not strictly qualify as new homes. Mr Reynolds said the actual level of house completions for 2016, based on stamp duty transactions and an estimate of self-builds, build-to-lets and local-authority construction, was probably closer to 8,000. This is almost 47 per cent lower than the department’s official estimate. The department declined to comment. An alternative way of measuring the number of new homes under construction is by examining commencement notices lodged with local authorities. But Mr Reynolds said this data was also skewed by the fact that large multi-unit schemes often lodge commencement notices for a phased development that may take several years to complete. He also highlighted that while new builds were previously registered with the Building Control Management System, a recently introduced opt-out for once-off dwellings means the database is no longer an accurate measure of construction. First-time buyers Mr Reynolds said the Government’s policy focus was almost entirely on first-time buyers and new builds, which form a comparatively small segment of the market when juxtaposed with the 198,000 homes lying vacant in the State. Dublin Institute of Technology academic Lorcan Sirr said that while the official figures suggest there were 50,951 housing completions between 2011 and 2016, the actual number of new households recorded by the 2016 census was just 18,981. The anomaly is explained by the fact that as new homes come on stream, older ones are becoming derelict or being taken over for reconstruction. Dr Sirr said the focus should be on the increase to the existing housing stock rather than the number of new builds. He also noted the causes of demand in the Irish market were more complex than previously thought, with changing family structures at the core. The latter was linked to more people staying single longer and to more people getting divorced, which necessitated a bigger housing stock, he said.
https://www.irishtimes.com/business/construction/housing-completions-may-be-nearly-half-official-figure-says-expert-1.2932132
What steps is Kuwait taking to engage in new international tax regulations and practices? After years of abundant public spending, GCC economies experienced a direct hit on their fiscal balances caused by the 2014 fall in oil prices. As such, Kuwait is following the regional trend of pursuing economic and fiscal reform programmes. Multiple government-led initiatives seek to support economic diversification, strengthen the private... In an era marked by profound technological disruption and intense global competition in new frontier industries, emerging markets are striving to improve and adapt their education systems to reconcile the demands of the modern economy and the needs of citizens. As such, innovative solutions are being developed to address barriers within... Amid concerns over the quality of education and training, and in recognition of the country's growing youth population, the Kuwaiti government has demonstrated its intention to revitalise the education sector through a series of policy reforms and investments in recent years. Some 37% of the population is under the age of 14, while 35% is... Rising health care costs, ageing populations and changing lifestyles in emerging economies are stoking demand for medical technology (medtech) solutions. These entail not just smart devices that remotely monitor and transmit biometric data, but any instance of technology that helps to deliver health services. These initiatives are happening... Significant improvements in health outcomes have been achieved in Kuwait over the past 20 years, especially in terms of the extension of life expectancy and the reduction of infant mortality rates. The reform of the Kuwaiti health care sector is central to the Kuwait National Development Plan, which is also known as New Kuwait 2035. With a...
https://oxfordbusinessgroup.com/country/kuwait?sort_by=search_api_relevance&page=2
Pituitary Gland. Also known as the master gland, the pituitary helps produce many of the hormones that regulate other functions in the body. It is a tiny structure situated at the base of the brain. The pituitary gland is called the "master gland" because its hormones regulate other important endocrine glands, including the adrenal, thyroid, and reproductive glands. The hormones made in the pituitary gland are produced in two different areas of the gland: the anterior part and the posterior part. The anterior part of the pituitary gland makes hormones such as prolactin, growth hormone, adrenocorticotropin (ACTH), thyroid stimulating hormone (TSH), luteinizing hormone (LH) and follicle stimulating hormone (FSH). The posterior part of the gland makes antidiuretic hormone (ADH) and oxytocin. In many other species, the pituitary gland is divided into three lobes: the anterior lobe, the intermediate lobe and the posterior lobe. In humans, the intermediate lobe does not exist as a distinct anatomic structure but rather remains only as cells dispersed within the anterior lobe.
https://www.physiqueglobal.com/physique-global-article/pituitary-gland-the-master-gland-that-controls-other-hormones/
Sensory Integration is a normal life function and process. Dr. A. Jean Ayres first defined sensory integration as "the neurological process that organizes sensation from one's own body and from the environment and makes it possible to use the body effectively within the environment." This is supposed to happen without any conscious effort at all on our part. When the systems do not function as they are supposed to, it is called sensory integration dysfunction (SID). This is a general term, and the way it presents and affects the individual varies from person to person. Carol Kranowitz, in the revised version of The Out of Sync Child, lists three main categories of SID. These areas include: sensory modulation disorder, sensory discrimination disorder, and sensory-based movement disorders. Each of these categories can be further described and clarified. For example, a child could be over-responsive, under-responsive, sensory seeking, or sensory avoiding. Children with sensory processing disorders may have any number of a combination of these types of descriptors. This means that some children have both motor coordination problems and sensory modulation problems, while others have one without the other. When testing for sensory integration dysfunction, we look at the end product (the motor action, emotion, or behavior) and look for patterns or clusters of symptoms. Having one or two "quirks" does not mean that there is dysfunction present. We do use standardized tests and questionnaires to help us put the pieces together. There is one test called the SIPT, which stands for Sensory Integration and Praxis Tests. It is able to detect some but not all sensory processing dysfunction. It is a test of motor planning (praxis) and can detect sensory discrimination problems. It is not a test for sensory modulation dysfunction. Treatment plans are highly individualized and are most successful when the family is able to carry over to the home environment strategies specifically selected and designed to treat the child's individual needs. In order for that to happen, it is important that you, the family, know and understand the language that we use and the reason behind the "homework" or suggested activities for home. The suggestions that we give you are intended to carry over the work that the child has done in therapy to home, and increase the rate of change and improvement. We will be defining a variety of terms and giving you a chance to experience some of what each of the sensory systems does and gain valuable insight into what you can do at home. We will start with a brief overview of the major types of sensory integration dysfunction. Then we will discuss the tactile, proprioceptive, vestibular, auditory, and visual systems in greater depth.
The Three Major Types or Classifications of SID
Modulation: is the brain's regulation of its own activity. Modulation involves matching the body's energy level and attention to the demands of the environment. It requires the brain to filter information and attend to certain information while disregarding other stimulation. When an individual over responds, under responds, or fluctuates in response to sensory input in a manner that is disproportional to that input, we say that the person has a sensory modulation disorder. The most common types of sensory modulation disorder are: tactile or sensory defensiveness, gravitational insecurity, aversive responses to movement, and poor sensory registration. 
Modulation is NOT always being "quiet and calm". For example, being quiet, calm, and almost falling asleep at a loud basketball game is not typical, just as being excessively loud and rambunctious in a classroom or library setting is not typical. A well-operating system with typically functioning modulation would be able to cheer loudly for their favorite basketball team and then immediately shift gears to be able to stand in line to order refreshments. An example of poor sensory modulation to the sense of touch would be to withdraw from and shout out as if in pain when getting foam soap on their hand.
Dyspraxia: The word praxis means Motor Planning, and dyspraxia means that there is a sensory-based problem with motor planning. Praxis is the ability of the brain to conceive, organize, and carry out a sequence of unfamiliar/novel or un-practiced actions. Motor planning is necessary to perform any coordinated movement - gross or fine motor. It requires the brain to recognize sensory information correctly, process it, and compare it to memory and to the desired outcome. It also involves planning the movement, executing it, and finally analyzing the performance in order to make adjustments to improve the next attempt. Eventually, the action becomes a habit and no longer requires so much of a conscious effort. A problem anywhere in this long sequence of events can be devastating to the final outcome and result in poorly coordinated movement, especially if the problem lies in the recognition and processing of the incoming sensory input.
Sensory Discrimination: is the ability to correctly register (or recognize) sensory input on a neurological level in order to use it functionally. When our sensory systems are functioning correctly (meaning they are registering and responding to incoming sensory input correctly) we are able to discriminate or know things about ourselves and the world around us without testing them out every time. For example, we know which way is up even if we are upside down, we can tell the difference between a penny and a quarter without looking at it, we can anticipate how much force to use when picking up an egg shell compared to a closed can of pop, and we can tell the difference between a safe touch and a threatening touch (feeling something crawling on your arm vs. a gentle pat on the arm). Sensory discrimination problems are nearly always the root cause of dyspraxia or poor motor coordination.
The Tactile System
I. Definition: The tactile system is the sensory system which layers our bodies and gives us information about physical objects surrounding us. It is the physical barrier between ourselves and the environment. The tactile receptors are found in the skin, and they serve the purposes of detecting and discriminating. This system is responsible for detecting and discriminating pressure, vibration, movement, temperature and pain. Once the tactile system takes in this information it is accountable for processing and organizing it to make a meaningful and accurate picture of the stimulus. There are two components that make up the tactile sense: 1. Protective (defensive) system: As a protective system the tactile sense is expected to detect and alert the neurological system of danger. If on high alert mode, the tactile system may perceive many things as being dangerous even if they are not necessarily a threat. This protective sense may detect hot temperature, sharp objects, insect bites, or physical harm from another. 2. 
Discriminative system: The tactile system is also responsible for discriminating between various objects. The reliability of tactile discrimination is highly dependent on adequate perceptual skills. It tells us that we are touching something and where on the body the touch occurs. Tactile discrimination also tells us whether the touch is light or deep. Lastly, the tactile discriminative system informs us of the size, shape, temperature, texture and density of the stimulus.
II. Dysfunction: Tactile system dysfunction occurs when there is inefficient processing in the central nervous system of sensations perceived through our skin. A child who is hyper-responsive (over-responsive) has difficulty with touching and being touched by objects and people, especially when unexpected. A child with tactile system dysfunction may be unable to distinguish between dangerous and safe tactile sensations, and may misinterpret a friendly touch as being threatening. Examples of tactile system dysfunction: 1. An over-responsive tactile system (tactile defensiveness) may look like the following: - Reacts negatively and emotionally to unexpected, light touch. - Fight or flight response - Avoids contact with typical age-appropriate activities, e.g. finger paint, baths, clothes, pets, and people - Is a picky eater - various textures are not tolerated orally 2. An under-responsive tactile system may look like the following: - Under-reacts to tactile experiences (decreased self-protection), e.g. injuries - Requires extra stimulation - May constantly touch objects and people - Unaware of messy face, wet clothes, etc. 3. Poor tactile discrimination may look like the following: - Not registering information about how things feel. - May repeatedly touch things to learn about their properties - Difficulty learning new skills - Difficulty manipulating things in his/her hands (crayons, scissors, and utensils)
The Proprioceptive System
I. Definition: Proprioception is the unconscious awareness of body position. It tells us about the position of our body parts in relationship to each other and the environment. It allows us to have knowledge of how much force and speed the muscle is required to generate in order to accomplish a specific movement, which results in appropriately graded muscle control. The receptors are located in the muscle belly, tendons, ligaments, joint capsules, connective tissue, and the skin. An example of an intact proprioceptive system would be the ability to pick up an egg and place it on a plate without breaking it. The child must realize their position in relationship to the egg, create a plan to grasp the egg, carry out the motor plan, and then terminate the activity.
II. Dysfunction: Dysfunction in proprioceptive processing is apparent through poor awareness of body and position. These children have difficulty grading movement and controlling motor patterns. They often press too hard or not hard enough on pencils and small objects. Sometimes movement is overshot or undershot. In the example of the egg, the child may break the egg when he picks it up, or place it too roughly on the plate causing it to break. Children with proprioceptive dysfunction also have a poor sense of postural stability through their trunk. A strong stomach and back are required to have an upright posture and provide a stable base of support for the arms to function smoothly. These are necessary components for tasks such as balance and sequential movements. 
Without appropriate awareness of body position, motor patterns are difficult to create and carry out fluently. Many of these children appear to be "klutzy", running into objects or pressing/pushing too hard. A Child with Proprioceptive Dysfunction may demonstrate some of the following characteristics or behaviors: - The child runs and crashes into objects without registering pain, to the point where you may think "that looks like it hurt" - poor body awareness - The child who has a tired hand after 5 minutes of handwriting and seems to be pressing down really hard on his pencil - poor awareness of pressure - The child who slips like butter into any supportive surface (your lap, the table or chair) – poor postural activation - The child who has difficulty carrying out fluent movement patterns that other children his age can do with ease - poor motor planning - The child who pushes other team members excessively hard, but may not intend to or realize it - poor awareness of pressure
The Vestibular System
I. Definition: The vestibular system is defined as the system that is located in the inner ear and tells us where our head/body is in relation to the earth. The primary function of this system is to register movement of the eyes, head, and neck and to respond to the pull of gravity. This system sends messages to the Central Nervous System about balance and movement, and helps us to generate muscle tone so that we can move smoothly and efficiently. A: Some examples that can better explain the vestibular system include knowing whether you are moving or standing still. Another example is knowing whether the objects in the room are motionless or moving. You are using your vestibular system to help you to distinguish these differences.
II. Dysfunction: Dysfunction in the vestibular system is the result of inefficient processing in the brain of sensations received through the inner ear. A child with vestibular dysfunction may have difficulty processing information about gravity, balance, and movement through space. Other areas affected by vestibular dysfunction can include muscle tone, bilateral coordination, praxis (motor planning), arousal state, vision, hearing, and emotional security. A: The Over Responsive or Hypersensitive Child may demonstrate characteristics such as: 1. Showing an intolerance to movement: - Dislikes playground activities such as swinging, spinning, and sliding. - Is cautious, slow moving, and prefers sedentary play. - Has difficulties taking risks or trying new things. - Is uncomfortable in the car, elevators, and escalators and may become motion sick rather easily. - Seems clingy and demands physical support from a trusted figure. 2. Demonstrating Gravitational Insecurity: - Has an outstanding fear of falling, even if no real danger exists. - Is fearful of heights or raised surfaces. - Avoids curbs, stairs, or movements when his/her feet leave the ground. - Has difficulties when head is tilted in various directions. - Can have poor visual discrimination. B: The Under Responsive or Hyposensitive Child may demonstrate characteristics such as: 1. Showing little response to movement: - Does not seem to notice when they are being moved. - Does not register movement effectively enough to decipher when they are dizzy. - May not notice when they are falling, which can result in decreased protective responses. 2. Demonstrating an increased tolerance for movement: - Needs constant movement in order to effectively function. - Has difficulties remaining seated or staying still. 
- Craves excessive movement such as bouncing on furniture, assuming upside down positions, or rocking. C. You may also see additional related characteristics in a child with Vestibular Dysfunction. They can include the following: 1. Poor postural control: - Easily loses balance. - May be a clumsy child. - May have a loose, floppy body. - Is limp, constantly leaning, slumping and/or has difficulties sitting in a chair. 2. Poor motor planning: - Has difficulties with conceptualizing, organizing, and carrying out movement. - Has a hard time generalizing previously learned knowledge. 3. Is emotionally insecure: - Gets easily frustrated and gives up quickly. - Is reluctant to try new activities. - Has a low tolerance to potentially stressful situations. - Has low self-esteem. - Is irritable in others' company and avoids/withdraws from people. - Has difficulties making friends and relating to peers.
The Auditory System
I. Definition: The auditory system is composed of the outer ear, inner ear, and the nerves and areas of the brain, and is a complex system of hearing and processing sound. This system registers and responds to intensity (volume), frequency (high/low pitch), duration (time), and localization (direction) of the sounds in our environment. The ability to hear and process sound affects the child's ability to understand their position in space, body scheme, right side from left side in space, balance, posture, arousal, muscle tone, and emotional tone. The ability to process sound and to actively listen is directly related to the vestibular system due to the anatomical structure of the inner ear. When working together, the vestibular system and the auditory system give us the best picture of where our body is in space and what it is doing, functionally giving us a foundation for planning and executing motor plans.
II. Dysfunction: Dysfunction occurs when a child becomes over-sensitive to sound and is unable to tolerate normal daily sounds. For other children, dysfunction presents as under-sensitivity and they require or seek out intense sounds that interfere with daily tasks. This may also lead to speech and language problems. The child with auditory system dysfunction may exhibit some of the following characteristics or behaviors: - They become very upset in response to loud sounds like fire drills and class bells, may cover ears or become agitated - The child is too easily distracted by other sounds inside or outside of the environment (heater, fans, light fixtures, talking) - The child can't focus on a task with background noise - The child seems oblivious to loud or sudden sounds - The child can't locate the source of sound - The child doesn't respond when their name is called - The child may talk or make sounds with mouth, hands or feet excessively - The child may not speak as clearly as children their age - The child is unable to follow verbal directions
The Visual System:
I. Definition: The visual system is our window to the world. Its primary structure is the eyeball, but it also includes six muscles around each eye, the eyelid, nerves, and the primary reception area in the brain. It enables us to have a three-dimensional picture of the world around us, identify sights, anticipate what is "coming at us", and allows us to prepare for a response. The visual system is closely linked to our vestibular system. We use both our visual system and our vestibular system to give us our ability to balance. 
The two systems work closely together to give us good visual perception (an accurate mental representation of what we are seeing) and good visual motor skills (being able to coordinate our hands and eyes for work in space and for written work).
II. Dysfunction: A. The over-sensitive/over-stimulated child may: - Have trouble making or keeping eye contact - Withdraw from bright light - Have trouble attending to details on paperwork - Have trouble finding a desired object from a cluttered or competing background. - Not notice important details about the environment around them. B. The Under-Responsive Child may: - Stare into bright light - Line up toys/objects - Be overly drawn to spinning or stimulating objects
Visual system problems can occur for many reasons and can be very complex. Therefore, it is very important to be sure that each child's vision is good. This goes far beyond screening to see if the child can see clearly (i.e. passing a typical school screening). Your therapist may suggest an eye exam by a developmental optometrist to look more specifically at all of the functions and necessary underlying skills of the visual system to rule out any non-sensory causes of visual system problems.
Resources
I. Books: - Tool Chest: For Teachers, Parents, and Students by Diana A. Henry, OTR/L (There are several Tool Chest resources including videos and workbooks) - The Out of Sync Child by Carol Kranowitz - The Out of Sync Child Video by Carol Kranowitz - The Out of Sync Child Has Fun by Carol Kranowitz - Out of the Mouths of Babes by Sheila Frick, Ron Frick, Patricia Oetter, & Eileen Richter - Sensory Integration and the Child by A. Jean Ayres - Unlocking the Mysteries of Sensory Integration by Anderson and Emmons - Understanding the Nature of Sensory Integration with Diverse Populations by S. Roley, E. Blanche, and R. Schaaf - Sensabilities by Mary Ann Colby Trott et al. - Too Loud Too Bright Too Fast Too Tight by Sharon Heller, Ph.D. - The Goodenoughs Get In Sync by Carol S. Kranowitz; illustrated by T.J. Wylie
II. Catalogs: - Southpaw Enterprises 1-800-228-1698 www.southpawenterprises.com - Abilitations 1-800-850-8602 www.abilitations.com - Integrations 1-800-622-0638 www.integrationscatalog.com - Pocket Full of Therapy 1-800-PFOT-124 www.pfot.com - Sensory Comfort 1-888-436-2622 www.sensorycomfort.com - Beyond Play 1-877-428-1244 www.beyondplay.com
III. Internet: Doing a search for Sensory Integration will provide you with a wealth of information. There are many good web sites that have specific information related to individual questions.
https://www.kidscando.org/sensory-integration
The film, The War with Grandpa (2020), is based on the young adult novel, by Robert Kimmel Smith, of the same name. Now, this is the part of the review where I write something witty about having checked the Rotten Tomatoes and Goodreads results, and realised that I should have reviewed the book instead... Except it's not, because at the time this review is being written, the novel has a respectable 3.83/5 on Goodreads, but the audience score for the film is 85% on Rotten Tomatoes. WHAT? The people think that this film deserves an A? I've officially lost all faith in humanity... The War with Grandpa follows the great battles of Peter (Oakes Fegley), a sixth grade middle school student, and Grandpa De Niro, a recently widowed and somewhat stubborn retiree. Grandpa De Niro, or GDN as he is more affectionately referred, moves in with his daughter and her family, after a faux pas at a shopping centre... It isn't a faux pas. GDN can't work out the self-serve checkout, because GDN is OLD, so he steals his groceries. Mum acknowledges that GDN is OLD, so she gives him Peter's room, and Peter is forced to move into the attic with the mice and the rats and the bats. Enter the war games. (Seriously though, could they not have removed the animals first?) At its very core, this film is 90 minutes of pranks. On a scale of bad pranks to great pranks, these certainly rank higher than the pranks of Pup Academy, but they also aren't the greatest pranks in the world. There's the good ol' snake in the bed, removing the screws from the furniture, changing up the shaving cream and more. This prank war goes for a long time and unfortunately, there weren't many pranks that took me by surprise, or made me laugh. Thankfully though, when I viewed this film, my cinema was occupied by children and teenagers whose reactions did enhance my viewing experience (a sentence that I never dreamed I'd write). In the moment when GDN removes the screws from all of Peter's furniture, and Peter is waltzing around the room being afflicted by collapsing desks, office chairs and beds, the audience was completely silent. No reaction. No laughter. No nothing. Only a few minutes later however, Peter removes his Air Jordans from his locker and they are now covered in pretty pink lovehearts and bedazzled with cute gems. Suddenly, the audience awakens. "Ohhhhhh noooooooooooooooo," they collectively cry. You got 'em, Tim Hill. You got the young people right in the feels...and I thought it was hilarious! I get it, Air Jordans, they're expensive, they're rare, they're hard to acquire in sixth grade. Still though, I didn't expect this prank, to be the prank that sends the audience into a pit of despair. That moment happened much later for me: Peter presses OLD man GDN's panic button whilst he is sleeping, and the ambulance officers who arrive begin to aggressively shake and berate GDN in an effort to treat him. It wasn't far from abuse. I was horrified. Although this film isn't hugely successful in the laughs department, I will freely admit that it is very successful in terms of advertising Lyft, the rideshare company. GDN is OLD; he doesn't understand the technologies. He still listens to records. But boy does he get Lyft. Lyft is convenient and easy! At the swipe of a button your ride has arrived! #Lyft. Your friend with a car. I digress, GDN is also impressed with Lyft, because every time he seemingly orders a ride, it's the same person who rocks up: Chuck. 
And Chuck dresses like a personal security detail, which is what every retiree on a fishing trip really needs. 5 stars. Now for my final paragraph, and the most puzzling part of this film. Peter's younger sister, Jennifer, is obsessed with Christmas. She wanders around singing Christmas carols and decorating her half of the bedroom in Christmas paraphernalia. In fact, Jennifer loves Christmas so much that she has a Christmas-themed birthday party. "Merry Birthday Jennifer!" the people exclaim. Now, I'm not sure if Jennifer is obsessed with Christmas because she's young and small children often have obsessions, or because they wanted a quick way to make this film marketable around Christmas time? It could be either, and honestly, that's not the confusing part. The confusing part is that for Jennifer's Christmas-themed birthday party, her parents go all out. There's a jumping castle, there's a banner, there's face paint, all the usual stuff. However, there is also a winter wonderland, complete with numerous Christmas trees, costumes and fake snow falling from the sky. To say it's excessive would be a gross understatement. Now, riddle me this: if you have so much disposable income that you are able to hire fake snow and multiple Christmas trees for a birthday party in the off season, why can't you move into a house that's big enough for everyone? Mum even says, "It's not about the house or the money," when pranking goes too far. These people clearly have a lot of money! What are they doing? Evidently, I am not going to recommend that you view this film. You could, however, fast-forward to the middle, to watch the one moment that I actually really enjoyed. GDN sits in his/Peter's bedroom, watching the sunset through his window. We don't see the sun, but an orange glow warms his face. Jennifer enters. She carries a tablet, and as young people do, asks GDN to play with her. He declines. She is insistent, pointing out all the great games that they could play. A white light, projected by the device, illuminates her face. The juxtaposition between the two sources of light is beautiful, and communicates a profound message about age and generations, but also about the way that we choose to spend our time. It was beautiful. And I feel like it was very purposeful... But it was still only one minute of goodness. So all in all, probably not worth it....
https://www.inconceivablereviews.com/post/the-war-with-grandpa-2020
CISA Directive 23-01 guides federal agencies to focus on asset discovery and vulnerability enumeration. Learn what security leaders can do now to prioritize visibility and vulnerability management. The Cybersecurity and Infrastructure Security Agency (CISA) continues to drive higher standards in helping Federal agencies to manage cyber risk. This follows on from last year's Binding Operational Directive (BOD) 22-01, which introduced the valuable Known Exploited Vulnerabilities catalog and required Federal civilian agencies to identify and remediate these specific vulnerabilities within a certain timeframe. This new BOD, 23-01, builds on that work and is intended to help civilian agencies to improve their operational visibility, a key building block of any successful cybersecurity program. The directive is focused on two critical initiatives:
- Asset Discovery – defined as an activity through which an organization identifies what network-addressable IP assets reside on its networks and identifies the associated IP addresses (hosts).
- Vulnerability Enumeration – the identification and reporting of suspected vulnerabilities on those assets. This is achieved by detecting host attributes, such as operating systems and applications, and matching them with data on known vulnerabilities.
CISA's required outcomes for Asset and Vulnerability Visibility
CISA is offering to support agencies on their current baseline asset management capabilities and to provide technical and program assistance, and it is not prescriptive in its technology recommendations. However, the Directive is specific about the desired outcomes that agencies will have to meet. These are:
- Maintain an up-to-date inventory of networked assets as defined in the scope of this directive;
- Identify software vulnerabilities, using privileged or client-based means where technically feasible;
- Track how often the agency enumerates its assets, what coverage of its assets it achieves, and how current its vulnerability signatures are; and
- Provide asset and vulnerability information to CISA's Continuous Diagnostics and Mitigation (CDM) Federal Dashboard.
For all impacted Federal Civilian Executive Branch (FCEB) agencies, this means that by April 3, 2023, they will need to:
- Perform automated asset discovery at least every 7 days, including, at a minimum, the organization's entire IPv4 space;
- Be able to drive 'on-demand' asset discovery and vulnerability enumeration within 72 hours of a CISA request, and present the findings back to CISA within 7 days; and
- Run an organization-wide vulnerability enumeration process every 14 days, covering endpoints, network devices and roaming devices, such as laptops. Vulnerability detection signatures should be no more than 24 hours old.
Additional guidance on the specifics of this program is available in CISA's implementation guidance, and there is the understanding that for some agencies, this process will be more complex. A 2021 US Senate report on the state of cybersecurity in Federal agencies highlighted common asset and vulnerability management challenges and, while only Federal agencies are required to comply with this Directive, it provides a roadmap for other organizations on the potential direction of regulatory compliance, as noted by Erik Nost and Jess Burn at Forrester in their blog on CISA's latest move. How can Noetic help organizations with Asset Discovery and Vulnerability Enumeration? 
Noetic's approach to cyber asset attack surface management (CAASM) aligns very well with CISA's new Directive. Our ability to continuously identify assets and map their cyber relationships in our graph database means that we understand how assets, vulnerabilities, networks and datasets are connected and can present this information to security teams in a way that is clearly understandable and actionable. Our perspective is that cyber asset management and vulnerability prioritization should be tightly integrated processes, as you need to understand both asset criticality and likelihood of exposure, as well as vulnerability severity and exploitability, in order to run a successful cybersecurity program.
Noetic's approach to Asset Discovery
The Noetic platform leverages existing security, IT management and DevOps tools through agentless API connectors to build a map of all assets across the organization – cloud and on-premises. The asset data is continuously updated, aggregated and correlated into our graph database to provide security teams with a 360-degree view of all assets, the cyber relationships between them and their current security posture. This ensures that security teams always have the latest understanding of their cyber asset landscape, attack surface and potential security gaps that need to be addressed. Automated action is also a key part of the Noetic solution, allowing teams to quickly trigger automated processes across their security infrastructure to address control drift.
Noetic's approach to Vulnerability Enumeration
In the Noetic platform, vulnerabilities are mapped into the graph along with machines, networks, users and other relevant security assets and data. We ingest vulnerability data from a range of different tools, including EDR, network discovery and vulnerability assessment platforms. This correlated data is then enriched from external intelligence sources, such as NIST NVD, CIRCL CVE, FIRST EPSS, the CISA Known Exploited Vulnerabilities catalog and more. What this gives us is a unique ability to put vulnerabilities in their necessary context. Security teams using Noetic can quickly identify and prioritize outstanding vulnerabilities based on a combination of criteria, so they can focus remediation where it makes the most sense. Security teams across Federal agencies and the commercial sector need to prioritize their workload, and by focusing first on exploitable vulnerabilities that impact business-critical applications or systems, they can reduce their potential exposure and cyber risk. To learn more about how Noetic can help meet CISA's new requirements in BOD 23-01, you can register for our upcoming live demo, where we will cover asset visibility and vulnerability prioritization use cases. Alternatively, you can request a personalized demo of the Noetic platform here.
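As a rough illustration of the kind of multi-factor prioritization described above, here is a minimal sketch that blends severity, exploit likelihood, KEV membership and asset criticality into a single score. The field names, weights and CVE identifiers are hypothetical examples; this is not Noetic's actual scoring model or API.

```python
# Illustrative only: a minimal multi-factor vulnerability prioritization sketch.
# Field names, weights and CVE IDs are hypothetical, not a vendor's actual model.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # severity, 0-10
    epss: float              # exploit probability, 0-1 (e.g., from FIRST EPSS)
    in_kev: bool             # listed in CISA's Known Exploited Vulnerabilities catalog
    asset_criticality: int   # 1 (low) .. 5 (business-critical), assigned by the org

def priority_score(f: Finding) -> float:
    """Blend severity, exploitability and asset importance into one score."""
    score = (f.cvss / 10.0) * 0.4 + f.epss * 0.3 + (f.asset_criticality / 5.0) * 0.3
    if f.in_kev:              # known-exploited issues jump the queue
        score += 0.5
    return round(score, 3)

findings = [
    Finding("CVE-2023-0001", cvss=9.8, epss=0.92, in_kev=True,  asset_criticality=5),
    Finding("CVE-2023-0002", cvss=7.5, epss=0.04, in_kev=False, asset_criticality=2),
]
for f in sorted(findings, key=priority_score, reverse=True):
    print(f.cve_id, priority_score(f))
```

In practice the weights would be tuned to the organization's risk appetite, and KEV-listed issues are usually pushed to the top of the queue regardless of the blended score.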
https://noeticcyber.com/new-cisa-directive-23-01/
It is the main opera of Shaanxi Province. It is said that the opera was called the Qin emperor's opera during the Tang period and was later renamed Qinqiang opera. It has a unique style, a rich list of plays, and a deep influence, and it holds an important position in the annals of Chinese operas. The Qinqiang opera, Beijing (Peking) opera, and Yuju (Henan) opera are the three major operas in northern China.
People at the Shaanxi Provincial Song and Dance Ensemble composed the dance by sorting out historical data and then put it on at the Tang Music Palace. The musicians present the music with Tang Dynasty-style musical instruments, and the dancers wear Tang Dynasty-style robes with long white silk sleeves. The most famous item is the Rainbow Skirt and Feathered Coat Dance.
The Yangko dance falls into two categories: the Grand Yangko and Tichangzi. The former is a collective song and dance performance given in a square. The performance is often accompanied by the lion, dragon, and boat dances. Tichangzi dancers must be even in number, with the male dancers holding colored fans and the female dancers waving colored silk ribbons.
The dance finds its origin in Ansai County and is known for its great momentum and sonorous rhythm. The dancers, with a white towel tied around their heads and in laced suits, look brilliant and impressive. Each has a drum tied to his or her waist and dances with joy amidst the beating of drums and gongs.
http://chinaplanner.com/shaanxi/sxi_recr.htm
Proceedings started at 10.30am in the graveyard of St Mary’s, Adderbury where the side honoured the contribution of Charlie Coleman, a survivor of the pre-World War One side, who was able to pass on memories of the Adderbury dances to Bryan Sheppard and Jim Plester founders of the revival side in the 1970s. AVMM honours the contribution of Charlie Coleman, a survivor of the pre-WW1 side. Dancing commenced in Church Lane and continued throughout the day at a number of pubs and landmarks around the village, including Lake House where the carers and residents enjoyed a performance of five of our dances. Although the weather wasn’t entirely hospitable, the day was a very happy and memorable one with an excellent turn out of dancers and a large crowd of friends, families and supporters, some of whom had travelled great distances to join us. Many favourable comments were heard about the high standard of dancing. Tribute must be made to all those members who worked so hard during pre-season practices in The Tithe Barn and particularly to Dave Reed, who patiently passed on constructive tips on the dances to new recruits and more experienced dancers, all of whom are keen to improve the standard of our performances. We were delighted to welcome our youngest new member, Xavier Peissel, just two weeks old, looking resplendent in new green baldricks (picture above in the front row). He later accompanied proud father Damien in our Drinking Jig outside The Bell. We welcomed some Morris dancers who had travelled down from Yorkshire and are fans of the Adderbury Village dances. With other invited guests, they made up a set of no less than 24 dancers for Brighton Camp. A fine lunch was provided by Tony at The Coach and Horses, where we danced and entertained the locals with the singing of Come Landlord Fill The Flowing Bowl. A total of 52 dances were performed during the day covering our entire repertoire with two sets for The Happy Man and Postman’s Knock (twice) and three sets for a number of other dances. It was good to see Bill Plester come out of retirement to dance with son Tim at Le Hall Place, where we were provided with an excellent tea with a fine selection of cakes by our friends from Sharp and Blunt. The day ended with all three local sides taking turns to dance outside The Bell and a final performance of Brighton Camp led by AVMM with guest dancers and musicians from the other sides joining our regular musicians Donald and Malcolm. Special mention must be given to Troy and Ryan who danced superbly, performing the very demanding jig Jockey To The Fair at each of the three local pubs, also to our youngest dancers, Alfie, Theo and Dylan who showed enormous enthusiasm and energy and were keen to tackle all of our dances and are surely the future of the side for many years to come. Thanks to all who made this such an enjoyable Day of Dance. It was certainly a day to remember.
http://adderburyvillagemorrismen.co.uk/2018/04/30/adderbury-day-of-dance-saturday-28-april-18/
Today, at the Neutrino2018 conference in Heidelberg, the NOvA collaboration reported the first results from the antineutrino experiments, which indicate that muon antineutrinos oscillate into electron antineutrinos. This phenomenon has been observed for the first time. The NOvA neutrino experiment, with a record distance between the source and the detector, is based at the Fermi National Accelerator Laboratory (Fermilab). The goal is to study neutrinos, the particles capable of passing through matter without any interaction with it. The long-term goal of the experiment is to find similarities and differences in how neutrinos and antineutrinos change from one type, muon neutrino in this case, to the two other types, electron and tau neutrinos. The evidence for this transition of neutrinos and antineutrinos, and their comparison, will allow scientists to better understand how the Universe is constructed. The NOvA experiment uses two detectors: a smaller detector at Fermilab, Illinois, and a larger detector in Minnesota, 810 km away from the smaller one. Neutrino and antineutrino beams are produced at Fermilab and sent to Minnesota straight through the Earth, without any special tunnel. The new result was obtained in the first antineutrino data-taking run at the NOvA accelerator complex. Antineutrino studies began in the NOvA experiment in February 2017. Fermilab accelerators generate a beam of muon neutrinos (or muon antineutrinos), and the distantly located detector, specially designed to observe changes in particles, detects oscillations of the generated muon neutrinos (antineutrinos) into electron neutrinos (antineutrinos). If antineutrinos did not change from the muon to the electron type, scientists would have observed only the roughly five electron antineutrino candidates expected from background in the far detector during the first run. However, after analyzing the recorded data, scientists found 18 antineutrinos of this type, which is evidence for the existence of antineutrino oscillations. Photo from the UCL NOvA site.
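For context, the flavor-change probability the collaboration is measuring is usually written, in the simplified two-flavor vacuum approximation (a standard textbook formula, not one quoted from this announcement), as

```latex
P(\bar{\nu}_\mu \to \bar{\nu}_e) \;\simeq\; \sin^2(2\theta)\,
  \sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
```

where θ is an effective mixing angle, Δm² the relevant mass-squared splitting, L the 810 km baseline and E the antineutrino energy; the full NOvA analysis uses three flavors and includes matter effects.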
https://dlnp.jinr.ru/61-projects-news/nova-news/339-evidence-of-antineutrino-oscillations-found-in-the-nova-experiment?lang=en-GB
Washington, D.C. — The Inter-American Development Bank Group, together with MIT Solve, launches an Open Innovation Challenge that seeks to find the most innovative solutions that substantially reduce or eliminate single-use plastic and plastic waste in Latin America and the Caribbean. The Rethink Plastics Challenge offers the selected solutions prizes totaling 60,000 USD.
IDB announces winner of the 2019 Superheroes of Development Award
September 16, 2019
The winning IDB-financed project developed a new solution to monitor public infrastructure works and to empower citizens to evaluate the use of public resources in Honduras. WASHINGTON D.C., September 12, 2019. The Inter-American Development Bank (IDB) announced the winner of the Superheroes of Development contest, an award given to executing agencies that have successfully implemented innovative solutions in projects financed by the IDB.
IDB launches Superheroes of Development contest to recognize executing agencies
March 04, 2019
In its second edition, the call for proposals will be open from March 4th to May 17th, 2019. The Superheroes of Development Award will recognize executing agencies of IDB-financed projects that have successfully addressed challenges during their implementation. The contest will identify eight finalists from across Latin America and the Caribbean who will travel to IDB's Headquarters in Washington, D.C. to present their solutions.
IDB launches Blue Tech Challenge with up to US$2M in funding for Blue Economy proposals
September 24, 2018
The IDB, through its Multilateral Investment Fund (MIF), Natural Capital Lab and Sustainable Islands Platform, and in alliance with the Compete Caribbean Partnership Facility, which is also supported by DFID, CDB and the Government of Canada, will identify firms and organizations looking to pilot and scale up business models that use cutting-edge technologies to contribute to the sustainable management of oceans, marine ecosystems and coastal resources.
Government spending waste costs Latin America and Caribbean 4.4% of GDP: IDB study
September 24, 2018
Inefficiencies and fraud in procurement, civil service and targeted transfers could be as large as $220 billion a year. The report includes policy recommendations to improve spending in healthcare, education, infrastructure and public safety.
IDB launches report detailing steps to help avoid social conflicts
September 10, 2018
A new report by the Inter-American Development Bank (IDB) identifies ten key elements that can help projects improve their sustainability and reduce potential social conflicts, from proper community engagement and legal foundations to adequate monitoring of results.
IDB reaffirms commitment to urban transportation in Quito with new loan
April 27, 2018
The Inter-American Development Bank (IDB) has approved a $250 million loan to help build a metro line in the capital city of Quito, Ecuador. This operation, which adds to a $200 million loan approved in December 2012, is testimony to the commitment of the IDB to support sustainable urban mobility projects in areas under pressure to meet growing demands for an efficient public transportation system. 
Caribbean leaders launch plan to make region a "climate-smart zone," with IDB support
December 11, 2017
Paris — The Inter-American Development Bank Group (IDB Group) announced its support for the newly formed Caribbean Climate-Smart Coalition, a public-private initiative aimed at funding an $8 billion investment plan to transform the region into the world's first "climate-smart" zone and benefit an estimated 3.2 million households in the region.
Belize to reduce climate vulnerability with IDB assistance
December 06, 2017
US$10 million loan to benefit 103,503 people in Belize City and Caye Caulker. Belize will seek to reduce its vulnerability to climate change and risk with the implementation of climate resilience measures in the tourism sector and the improvement of disaster risk management governance, through a US$10 million loan from the Inter-American Development Bank (IDB). The project is expected to directly or indirectly benefit 103,503 people living in Belize City and Caye Caulker.
The IDB and Miami Dade College select eighteen startups from Latin America, the Caribbean and South Florida to participate in Demand Solutions Miami 2017
August 21, 2017
Miami Dade College (MDC) and the Inter-American Development Bank (IDB) will host Demand Solutions Miami on Thursday, Oct. 19, at the Wolfson Campus in Downtown Miami. Eighteen startups in design, fashion, gastronomy, music and multimedia have been selected to participate in a one-of-a-kind conference that will highlight innovation and entrepreneurship that improves lives.
https://www.iadb.org/en/news?f%5B0%5D=filter_news_by_topic%3A1100&f%5B1%5D=filter_news_by_country%3A1315&f%5B2%5D=filter_news_by_country%3A1006&f%5B3%5D=filter_news_by_country%3A1047&f%5B4%5D=filter_news_by_topic%3A1129&f%5B5%5D=filter_news_by_country%3A1019&amp%3Bf%5B1%5D=filter_news_by_country%3A1041&amp%3Bamp%3Bf%5B1%5D=filter_news_by_country%3A1043&amp%3Bamp%3Bf%5B2%5D=filter_news_by_topic%3A1124
Many low- and middle-income countries remain challenged by a financial infrastructure gap, evidenced by very low numbers of bank branches and automated teller machines (ATMs) (e.g., 2.9 branches per 100,000 people in Ethiopia versus 13.5 in India and 32.9 in the United States (U.S.) and 0.5 ATMs per 100,000 people in Ethiopia versus 19.7 in India and 173 in the U.S.) (The World Bank 2015a; 2015b). Furthermore, only an estimated 62 percent of adults globally have a banking account through a formal financial institution, leaving over 2 billion adults unbanked (Demirgüç–Kunt et al., 2015). While conventional banks have struggled to extend their networks into low-income and rural communities, digital financial services (DFS) have the potential to extend financial opportunities to these groups (Radcliffe & Voorhies, 2012). In order to utilize DFS however, users must convert physical cash to electronic money which requires access to cash-in, cash-out (CICO) networks—physical access points including bank branches but also including “branchless banking" access points such as ATMs, point-of-sale (POS) terminals, agents, and cash merchants. As mobile money and branchless banking expand, countries are developing new regulations to govern their operations (Lyman, Ivatury, & Staschen, 2006; Lyman, Pickens, & Porteous, 2008; Ivatury & Mas, 2008), including regulations targeting aspects of the different CICO interfaces. EPAR's work on CICO networks consists of five components. First, we summarize types of recent mobile money and branchless banking regulations related to CICO networks and review available evidence on the impacts these regulations may have on markets and consumers. In addition to this technical report we developed a short addendum (EPAR 355a) which includes a description of findings on patterns around CICO regulations over time. Another addendum (EPAR 355b) summarizes trends in exclusivity regulations including overall trends, country-specific approaches to exclusivity, and a table showing how available data on DFS adoption from FII and GSMA might relate to changes in exclusivity policies over time. A third addendum (EPAR 355c) explores trends in CICO network expansion with a focus on policies seeking to improve access among more remote or under-served populations. Lastly, we developed a database of CICO regulations, including a regulatory decision options table which outlines the key decisions that countries can make to regulate CICOs and a timeline of when specific regulations related to CICOs were introduced in eight focus countries, Bangladesh, India, Indonesia, Kenya, Nigeria, Pakistan, Tanzania, and Uganda. In this brief, we report on measures of economic growth, poverty and agricultural activity in Ethiopia. For each category of measure, we first describe different measurement approaches and present available time series data on selected indicators. We then use data from the sources listed below to discuss associations within and between these categories between 1994 and 2017. Donor countries and multilateral organizations may pursue multiple goals with foreign aid, including supporting low-income country development for strategic/security purposes (national security, regional political stability) and for short-and long-term economic interests (market development and access, local and regional market stability). 
While the literature on the effectiveness of aid in supporting progress on different indicators of country development is inconclusive, donors are interested in evidence that aid funding is not permanent but rather contributes to a process by which recipient countries develop to a point that they are economically self-sufficient. In this report, we review the literature on measures of country self-sufficiency and descriptive evidence from illustrative case studies to explore conditions associated with transitions toward self-sufficiency in certain contexts. In Mozambique, the legacies of colonial rule, socialism and civil war continue to constrain economic growth and agricultural production. Eighty percent of Mozambique’s labor force derives its livelihood from agriculture, but the nation remains a net food importer. The majority of all farmland is cultivated by smallholders whose fertilizer usage and crop yields are among the lowest in Africa. While Mozambique has experienced reasonable economic growth since the end of its civil war in 1992, it remains poor by almost any measure. In this literature review, we examine the state of agriculture in Mozambique, the country’s political history and post-war recovery, and the current fertilizer market. We find evidence that smallholder access to fertilizer in Mozambique is limited by lack of information, affordability, access to credit, a poor business environment, and limited infrastructure. The data demonstrate that increased investment in infrastructure is an important step to improve input and output market access for smallholders. The main government intervention currently impacting smallholder fertilizer use is the Agricultural Sector Public Expenditure Program (PROAGRI) initiative, however, more data is necessary to assess the impact of its policies and programs. In Tanzania, agriculture represents approximately 50 percent of GDP, 80 percent of rural employment, and over 50 percent of the foreign exchange earnings. Yet poor soil fertility and resulting low productivity contribute to low economic growth and widespread poverty. Chemical fertilizer has the potential to contribute to crop yield increases. Yet high prices and weaknesses in the fertilizer market keep fertilizer use low. This literature review examines the history of government interventions that have intended to increase access to fertilizers, and reviews current policies, market structure, and challenges that contribute to the present conditions. We find that despite numerous strategies over the last fifty years, from heavy government involvement to liberalization, major weaknesses in Tanzania’s fertilizer market prevent efficient use of fertilizer. High transportation costs, low knowledge level of farmers and agrodealers, unavailability of improved seed, and limited access to credit all contribute to the market’s problems. The government’s current framework, the Tanzania Agriculture Input Partnership (TAIP), acknowledges this interconnectedness by targeting multiple components of the market. This model could help Tanzania tailor solutions relevant to specific road, soil, and market conditions of different areas of the country, contributing to enhanced food security and economic growth. The Government of Kenya (GoK) has historically encouraged its farmers to use fertilizer by financing infrastructure and supporting fertilizer markets. From 1974 to 1984, the GoK provided a fertilizer importation monopoly to one firm, the Kenya Farmers Association. 
However, the GoK saw that this monopoly impeded fertilizer market development by prohibiting competing firms from entering the market and, in the latter half of the 1980s, encouraged other firms to enter the highly regulated fertilizer market. This report examines the state of fertilizer use in Kenya by reviewing and summarizing literature on recent fertilizer price increases, Kenya’s fertilizer usage trends and approaches, market forces, and the impact of government and non-government programs. We find that most studies of Kenya’s fertilizer market find it to be well functioning and generally competitive, and conclude that market reform has stimulated fertilizer use mainly by improving farmers’ access to the input through the expansion of private retail networks. Overall fertilizer consumption in Kenya has increased steadily since 1980, and fertilizer use among smallholders is among the highest in Sub-Saharan Africa. Yet fertilizer consumption is still limited, especially on cereal crops, and in areas where agroecological conditions create greater risks and lower returns to fertilizer use. Farmers in Sub-Saharan Africa (SSA) use less fertilizer than farmers in any other region in the world. Low fertilizer use is one factor explaining the lag in agricultural productivity growth in Africa. A variety of market interventions to increase fertilizer use have been attempted over the years, with limited success. In the past several decades, Malawi has tried to alter that trend through a variety of innovative programs aimed at achieving national food security through targeted input subsidy programs. The best known of these programs is Malawi’s Starter Pack Programme. The Starter Pack Programme was amended twice into the Targeted Inputs Programme (TIP) and Expanded Targeted Inputs Programme (ETIP), and eventually replaced with the Agricultural Input Subsidy Programme (AISP). The efficiency and equity of the Starter Pack Programme and its successors have been the subject of debate. This report reviews the history, implementation, and perceived effectiveness of the various input subsidy schemes in the context of Malawi’s political economy. We find that AISP is credited with significantly increasing maize yields in Malawi. However, we also find that there are serious challenges facing the most recent input subsidy program, ranging from the rising cost of the subsidy to ongoing implementation struggles related to increased bureaucracy and corruption.
https://epar.evans.uw.edu/research?f%5B0%5D=field_epar_geographic_focus%3A285&f%5B1%5D=field_epar_population%3A287&f%5B2%5D=field_epar_geographic_focus%3A283&amp%3Bf%5B1%5D=field_epar_research_topic%3A294&amp%3Bamp%3Bf%5B1%5D=field_epar_dataset%3A278
When my four sisters and I were growing up, my mother dressed us alike on Christmas, above, and other special occasions. She made our dresses on a Singer sewing machine that was set up in our playroom. I have no idea how she could concentrate while the five of us swirled around her, but I am sure it was a way for her to be there….and not be there, if you know what I mean. Also shown in the background is a handmade "supermarket" playhouse made by my father, who was a talented woodworker. As we got older my mother taught those of us who were interested how to sew and to knit, another of her many talents. By the way, we were not always a family of girls. My wonderful brother finally made his debut, and by the time my parents were thirty they had six children…not unusual for those post-World War II days. Now that my mother is 88, her notion of a dull day is a) when the sun isn't shining (most of this winter), and b) when she doesn't have a lunch date. My siblings and I count ourselves lucky that our mother can live in her own house and still drive, but she is always looking for something more to do. She had lost interest in sewing until we both saw NBC's Brian Williams air one of his "Making a Difference" segments–this one on The Little Dresses for Africa earlier this year. I called her right after the broadcast to ask if she had seen it. Indeed she had, and the more we talked about making one of the dresses ourselves the more enthused she became. Soon she retrieved her old Singer from my sister Jody, and was whipping up the little sundress shown below, right. My mother dolled up her dress with a cute little bow on the pocket. Mine is on the left and is made from a sarong that I cut apart. I photographed the dresses in our laundry room under a tin Haitian sculpture that makes doing the laundry a little cheerier. A couple of weeks ago, I mailed the dresses off to the Little Dresses charity, which is based in Michigan. My mother and I hope that before long two little girls in Africa will feel just as pretty as they are when they wear our creations. If you are so inclined, the instructions for the dresses can be found on this website. One version uses a pillowcase and promises minimal sewing. So, here is where this post is going: If I hadn't had fabric and the tools needed to complete the project in my stash of supplies, I would have found myself trekking to the county to get them. So on my wish list for the neighborhood is a home-goods store…a Ben Franklin, perhaps, where we could find fabric, sewing needles, buttons, and more.
https://www.nickiscentralwestendguide.com/page/532/
Crucial information for a possible shift toward producing biofuels from biomass was discovered by sequencing the DNA of the plant species Selaginella moellendorffii. "When we burn coal, we burn the ancestors of Selaginella," said Jody Banks, the Purdue University botanist who first proposed sequencing the plant to the Joint Genome Institute (JGI) of the United States Department of Energy (DOE) as part of its 2005 scientific program. As explained by Igor Grigoriev, senior author of the study, published online in Science Express, the genome of Selaginella fills a huge gap in the evolution of plants between the green alga Chlamydomonas, sequenced by JGI in 2007, and the angiosperms with their vascular system.
From the stomach of cattle, novel enzymes for biofuels
The analysis made it possible to understand the role of some 30,000 genes that encode enzymes that break down biomass into simple sugars, the first crucial step for biofuel production. That cattle feed on grass has been observed for millennia. From this diet - allowed only to those animals capable of degrading cellulose and hemicellulose, substances without any nutritional value for most organisms - ruminants are able to extract all that is needed to feed themselves, their offspring and the humans who raise them. Now the ruminants themselves, or rather the set of organisms that dwell in the rumen, are providing researchers with crucial information that will accelerate the future large-scale development of biofuels. In a recent study, whose report is published in Science, the techniques of large-scale DNA sequencing have enabled researchers at the Joint Genome Institute (JGI) of the United States Department of Energy (DOE) and the Energy Biosciences Institute (EBI) to characterize the genes of microbes isolated from the rumen of cattle.
The use of micro-algae for biofuel production in Germany
In an energy context marked both by the need to develop more environmentally friendly renewable energy and by the medium-term exhaustion of oil resources, micro-algae are a key prospect for future biomass and biofuel production. Aware of the benefits of this technology, France and Germany are intensifying their R&D efforts in the field of algae-based fuels (algocarburants). A mission of French experts to Germany, which held talks with the Scientific Service of the Embassy of France in Germany, was the opportunity to give an overview of the main German initiatives and to lay the groundwork for potential Franco-German collaborations in technological innovation.
Biofuels from algae: the catalytic effect of sodium bicarbonate
Biofuels from algae appear to be a promising alternative to fossil fuels. However, some problems remain to be overcome: the feasibility of productivity, downstream processes (harvesting, extraction of components), and more. The main obstacle to their development, as we mentioned in a previous report, is the economic viability of the process for obtaining them. In late June 2010, the Obama administration announced funding of $24 million for research on the subject. 
This money has benefited three research groups working on the production of biofuels from algae: the Sustainable Algal Biofuels Consortium in Arizona, the Consortium for Algal Biofuels Commercialization in California, and the Cellana LLC Consortium in Hawaii. In this context, researchers at Montana State University (MSU) have identified a compound with the ability to considerably increase the production of algal oil, a precursor in the synthesis of biodiesel: sodium bicarbonate.
Genetic engineering of plants to increase the capacity of soils to fix atmospheric CO2
Miscanthus, a kind of grass useful as a raw material for the production of biofuels, could play a dual role in combating climate change, thanks to its strong ability to ensure that carbon is retained in soils for thousands of years. Should scientists use genetically modified crops to produce biofuels, in order to enhance their ability to capture CO2 from the atmosphere? Could this strategy be applied at the scale necessary to mitigate climate change? These questions and possible answers are part of a new analysis by Christer Jansson, a scientist at Lawrence Berkeley National Laboratory, and researchers at Oak Ridge National Laboratory, both centers in the United States. The analysis explores ways in which such crops could help combat the alarming buildup of carbon dioxide in the atmosphere. At the heart of the analysis is the idea that crops for biofuels can combat climate change in two ways.
Biofuels in the U.S.: the expected changes in regulation
The issue of biofuels, and of incentives to develop this sector in the United States, is again in the spotlight, and at two levels. Scientists from Oregon State University have recently published an article challenging the regulations on the use of GMOs in perennial crops for biofuel production. In addition, the Secretary of the USDA (U.S. Department of Agriculture) today announced a series of measures to promote the production of biofuels, and the EPA (Environmental Protection Agency), meanwhile, decided to authorize the use of fuel blends containing up to 15% ethanol (E15). On the other hand, the results of the midterm elections of November 2, 2010, announcing a Republican tidal wave, prove moderately positive for the biofuels industry. While biofuels, and in particular those of the second and third generations, enjoy bipartisan support, the conservative Democrats (Blue Dog Coalition) who are strong advocates of bioenergy have lost many seats compared to the liberal Democrats.
USDA: new measures to support domestic biofuels
In the framework of the objectives set by the RFS2 (Renewable Fuel Standard), namely the production, by 2022, of 36 billion gallons of biofuels, including 21 billion from new-generation fuels, the following measures were announced by Tom Vilsack, Secretary of the USDA, on 21 October:
1. The final regulations for implementing the BCAP (Biomass Crop Assistance Program) would allow producers to develop new (ligno-cellulosic) crops. The financial support will cover up to 75% of the costs of crop establishment, plus an annual payment to help cover transition costs for existing crops. 
Analyzing the impact of biofuel production on food prices
A group of Spanish researchers at the Centre for Research in Economics and Agribusiness Development (CREDA) is using econometric models to quantify the relationship between the prices of oil, biofuels and food in the USA, Brazil (the two countries are the biggest biofuel producers in the world) and Spain. The project will also consider social preferences for biofuels and determine their socio-economic implications. The "green" character of biofuels has driven exponential growth in world output, which has doubled in recent years. This increase is mainly due to crude oil prices hitting record highs, the prohibition of the use of methyl tert-butyl ether (MTBE, for its acronym in English) as a gasoline additive in the U.S., and the higher profile of policies related to energy security and climate change. Although biofuels are produced from various materials of organic origin, at present their main output is made from food agricultural commodities like corn, soybeans and sugarcane, which has increased the demand for these raw materials for energy production and created direct competition with agricultural production for food and feed. The team of researchers from the Center for Research in Economics and Agribusiness Development (CREDA) is conducting the project "Price transmission between energy and food markets: the impact of biofuels" to measure the relationship between the prices of oil, biofuels and food.
Converting glycerin from the biodiesel process into precursors for the production of bioplastics
Researchers from the Chemical Process Technology and Biochemistry group at the University of Valladolid and the laboratory of the ACOR biodiesel plant in Olmedo are working together on the recovery of one of the products obtained from the biodiesel process: glycerin, an alcohol which is the basis of many pharmaceuticals. As Maria Teresa Garcia Cubero, one of the researchers of the group, explained to DiCYT, for every hundred pounds of biodiesel produced, ten of glycerin are obtained, a significant amount that the pharmaceutical industry cannot absorb. Therefore, they are trying to find new outlets for this product. "There are many treatment options and we opted for one that involves obtaining high value-added products, such as the production of precursors used in the production of biodegradable plastics," she continues. Through a project of the Ministry of Science and Innovation (MICINN), they are studying the transformation of glycerine in two alternative ways. One is a chemical transformation using shape-selective catalysts, from which propanediol, the precursor they are working towards, can be derived; the other is a biochemical transformation with microorganisms able to assimilate glycerol. While the first method gives 1,3-propanediol, the biochemical transformation provides 1,2-propanediol as a product. "These are similar isomers, although with slightly different properties," explains the expert. The research group is working on this project with different microorganisms, aerobic and anaerobic, but not "strict" ones. "We need anaerobiosis conditions (no oxygen) for the organism to function well," said Maria Teresa Garcia Cubero. 
Sino-US research partnership on biofuel use by airlines
On May 28, China and the United States launched a research project on the development of biofuels destined for Chinese airlines. The study will focus on biofuels made from walnut oil or algae. A first flight could take place this year. This announcement follows the promises of research partnerships made at a high-level meeting at which the two governments agreed to cooperate closely in the field of renewable energy. For both countries this is a key issue in the fight against climate change, and it could spur new industries. At a conference on renewable fuels, David Sandalow, U.S. Assistant Secretary of Energy, said the "development of renewable energy is at the heart of our cooperation with China." The two countries signed a series of research partnerships between Boeing Co., the U.S. government agency, Chinese research institutions and some state companies, including Air China Ltd. and PetroChina Ltd.
Cellulosic ethanol: new options and new barriers identified
The transfer of ethanol production from various sources of cellulosic biomass to an industrial scale rests on three main elements: identifying and obtaining plant raw materials with the highest concentration of carbohydrates; establishing grinding and saccharification processes that are economically viable for releasing the simple sugars contained in these carbohydrates; and identifying microorganisms with the genes to ferment all of these sugars. Results from recent studies conducted by researchers at the ARS/USDA have been the subject of articles in scientific journals such as Biotechnology and Bioengineering, the Journal of Biobased Materials and Bioenergy and the Journal of Industrial Microbiology and Biotechnology. This research focuses on two types of projects: first, the development of a protocol for the transformation of wheat straw into ethanol and, second, the characterization of bacteria that infect ethanol-producing plants and interfere with fuel production.
https://www.scienceknowledge.org/tag/biofuels/
They would then be more likely to do a thorough job. EEG signal stability, as quantified by frequency variance, was found to increase with age in their sample of preschool age subjects. I believe there are two components to the answer, one involving education, but the other, more worryingly, relating to certain cultural features of the cognitive neuroscience community. Such labelled compounds are known as radiotracers. You might find that someone with a good idea to share reads your preregistration and helps you out. They take the approach of a bottom-up revision of fMRI methodology based on acquisition of multi-echo fMRI and comprehensive utilization of the information in the TE-domain to enhance several aspects of fMRI analysis in the context of a developmental study. These also tend to be consistent within studies from the same lab. History: The concept of emission and transmission tomography was introduced by David E. Kuhl. Disadvantages are that shot noise in the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across the image. The fact that the review process is ostensibly anonymous is meant to address this issue, but it can be easily bypassed. A related trick is to send your manuscript to a journal where your friend and colleague is the main editor, and who will accept your manuscript almost regardless of what the reviewers say. I think the phenomena of small effect size 0. The electrical diagram of the EEG provides split-second timing while the MRI provides high levels of spatial accuracy. Real heads are non-spherical and have largely anisotropic conductivities, particularly in the white matter and skull. Besides fMRI, another example of technology allowing relatively older brain imaging techniques to be even more helpful is the ability to combine different techniques to get one brain map. In practice, the LOR has a non-zero width as the emitted photons are not exactly 180 degrees apart. Localization of the positron annihilation event: The most significant fraction of electron–positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other; hence, it is possible to localize their source along a straight line of coincidence, also called the line of response, or LOR. We need more of this. Should a reviewer or editor insist on independent replication of an entire study for it to be accepted? Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular cancer therapy, where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation. Then there is the issue of nepotism in the review process. What we need is more direct replication — and I think this can be part of on-going research and not simply a goal in itself, although that is of course also welcome. Future studies planned from this team are expected to relate this novel EEG biomarker with the development of executive function and cognitive flexibility in children, with the overarching goal of understanding electrical activity metastability in atypically developing children. But on the other hand, if the study is horribly flawed, the methods are invalid and so on, publishing the paper will merely drag the field down, and make it more likely that future researchers make the same mistake. For others who view scientists less suspiciously, the situation must be worse. 
To attempt to reproduce these 17 findings, Boekel et al. MRI scanners have become significant sources of revenue for healthcare providers in the US. For example, a dual-headed camera can be used with its two heads spaced a fixed angle apart, allowing two projections to be acquired simultaneously, with each head requiring a correspondingly smaller arc of rotation. Deciding on the exact analysis would be even better. Contemporary scanners can estimate attenuation using integrated x-ray CT equipment, in place of earlier equipment that offered a crude form of CT using a gamma ray (positron-emitting) source and the PET detectors. His family moved to New York, where they lived in poverty on the Lower East Side before moving to Brooklyn. I for one would be very hesitant to return to the field of brain imaging in the absence of preregistration (not perhaps a great loss to the field). It is true that often grant proposals are a long way from what is actually done. Academic publishing is currently undergoing a revolution, amidst the call for open access. Mild cognitive impairment is a syndrome defined as cognitive decline greater than expected for an individual's age and education level but that does not interfere notably with activities of daily life. Positron-emission tomography (PET) is a nuclear medicine functional imaging technique that is used to observe metabolic processes in the body as an aid to the diagnosis of disease. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide, most commonly fluorine-18, which is introduced into the body on a biologically active molecule called a radioactive tracer. A. Current Request. CMS opened this national coverage analysis (NCA) to reconsider coverage indications for MRI. We note that CMS' intent regarding this MRI reconsideration was to reconsider only section (C)(1) rather than the NCD Manual provision in its entirety.
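The coincidence-detection idea mentioned above (two back-to-back 511 keV photons defining a line of response) can be made concrete with a small sketch. This is purely illustrative geometry, not any scanner's reconstruction software; the detector coordinates and the use of a time-of-flight difference are assumptions for the example.

```python
# Illustrative sketch of PET line-of-response (LOR) localization, not scanner software.
# Two coincident 511 keV photons hit detectors at points a and b; the annihilation
# lies on the segment a-b. A time-of-flight difference dt = t_a - t_b (if measured)
# shifts the estimate away from the midpoint by c*dt/2 along the LOR toward b.
import numpy as np

C_MM_PER_PS = 0.2998  # speed of light in millimetres per picosecond

def annihilation_estimate(a, b, dt_ps=0.0):
    a, b = np.asarray(a, float), np.asarray(b, float)
    direction = (b - a) / np.linalg.norm(b - a)   # unit vector along the LOR
    midpoint = (a + b) / 2.0
    # photon reaching 'a' later (dt > 0) means the event happened closer to 'b'
    offset = 0.5 * C_MM_PER_PS * dt_ps
    return midpoint + offset * direction

# Detectors 800 mm apart on the x-axis; photon arrives 200 ps later at detector a
print(annihilation_estimate((-400.0, 0.0), (400.0, 0.0), dt_ps=200.0))
```

Without timing information, the best single-point guess is simply the midpoint of the LOR; real reconstruction instead combines many LORs statistically.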
https://fejecahexoceqym.agronumericus.com/the-history-power-and-application-of-mri-in-neuroimaging-1480lir.html
Abstract: 3D multi-object tracking (MOT) is crucial to autonomous systems. Recent work uses a standard tracking-by-detection pipeline, where feature extraction is first performed independently for each object in order to compute an affinity matrix. Then the affinity matrix is passed to the Hungarian algorithm for data association. A key process of this standard pipeline is to learn discriminative features for different objects in order to reduce confusion during data association. In this work, we propose two techniques to improve the discriminative feature learning for MOT: (1) instead of obtaining features for each object independently, we propose a novel feature interaction mechanism by introducing the Graph Neural Network. As a result, the feature of one object is informed of the features of other objects so that the object feature can lean towards the object with a similar feature (i.e., an object probably with the same ID) and deviate from objects with dissimilar features (i.e., objects probably with different IDs), leading to a more discriminative feature for each object; (2) instead of obtaining the feature from either 2D or 3D space as in prior work, we propose a novel joint feature extractor to learn appearance and motion features from 2D and 3D space simultaneously. As features from different modalities often have complementary information, the joint feature can be more discriminative than the feature from each individual modality. To ensure that the joint feature extractor does not heavily rely on one modality, we also propose an ensemble training paradigm. Through extensive evaluation, our proposed method achieves state-of-the-art performance on the KITTI and nuScenes 3D MOT benchmarks. Our code will be made available at this https URL.
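The affinity-matrix-plus-Hungarian step that the abstract describes is easy to sketch in isolation. The snippet below uses stand-in cosine-similarity features purely for illustration; it is not the paper's GNN feature interaction or joint 2D/3D extractor, and the 0.5 affinity threshold is an arbitrary assumption.

```python
# Minimal sketch of the affinity-matrix + Hungarian data-association step described
# in the abstract. Features are random stand-ins; this is not the paper's GNN model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine_affinity(track_feats, det_feats):
    """Rows correspond to existing tracks, columns to new detections."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return t @ d.T                              # higher value = more similar

def associate(track_feats, det_feats, min_affinity=0.5):
    aff = cosine_affinity(track_feats, det_feats)
    rows, cols = linear_sum_assignment(-aff)    # negate to maximize total affinity
    return [(r, c) for r, c in zip(rows, cols) if aff[r, c] >= min_affinity]

tracks = np.random.rand(3, 128)   # 3 tracked objects, 128-d features
dets = np.random.rand(4, 128)     # 4 new detections
print(associate(tracks, dets))    # list of (track_index, detection_index) matches
```

SciPy's linear_sum_assignment solves the same assignment problem as the Hungarian algorithm; negating the affinity matrix turns the maximization into the minimization the solver expects.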
https://www.eduzhai.net/47198-gnn3dmot-graph-neural-network-for-3d-multi-object-fileshare
CROSS-REFERENCE TO PRIOR APPLICATION This application relates to and claims the benefit of priority from Japanese Patent Application number 2020-48918, filed on Mar. 19, 2020 the entire disclosure of which is incorporated herein by reference. BACKGROUND The present invention relates generally to a computer technique for supporting production management. As a technique for supporting production management, the visualization technique disclosed in Document 1 is known for example. In the technique disclosed in Document 1, for each product, start time points of a plurality of steps are coupled with lines and end time points of the plurality of steps are coupled with lines. Document 1: Japanese Patent No. 6287018 SUMMARY Holistically viewing a production situation makes it possible to make an estimate of abnormal locations. Accordingly, holistically viewing the production situation contributes to support for production management. A production system is known in which a plurality of different product types of products are loaded and the sequential order of two or more of a plurality of steps is different depending on the product type, for example, a job shop production system or a cell production system. In such a production system, the same facility is used for two or more types of products in at least one step. In production control, it is desirable to maintain, for each facility, a constant operational availability of the facility. In production, “stocks in work queue (retention stocks)” occurs in each step. A step for handling a lot of stocks in work queue potentially causes unnecessary and wasted stocks. In such a case, for example, the administrator can consider adopting a method of increasing the number of facilities for the corresponding step in order to reduce the stocks in work queue. On the other hand, when there is no stock in work queue, it may result in wasted time for facilities because work cannot be started. In such a case, for example, the administrator can consider adopting a method of increasing the facilities in a step immediately before the corresponding step in order to increase the number of products entering the corresponding step. Accordingly, it is conceivable that a support for management of the appropriateness of the facility operational availability is to see the appropriateness of a retention number (the number of stocks in work queue) from a higher perspective, for example, to visualize the appropriateness of the retention number at each time point in each step. Some stocks in work queue may be prepared intentionally. However, when an intended retention number exceeds a reference value (threshold), the intended retention number will be detected as abnormality. When there is no fluctuation in the retention number after the retention number exceeds the reference value, it seems as if the abnormality continues. The abnormality can be detected when the retention number exceeds the reference value, while the cause of the abnormality may be present before the retention number reaches the reference value. However, no abnormality is detected even when the cause of the retention number reaching the reference value is present early. The appropriate value of the retention number is not necessarily the same in all steps. Thus, it is difficult to set a suitable reference value for the retention number. However, the visualization of the appropriateness of the retention number is not always appropriate. An example of the reasons is at least one of the following. 
Production past record information that is information as past records showing the execution time point of each step is accumulated for each product loaded into a production system. A support system calculates, for each step, a retention increase rate at each time point on the basis of the production past record information. For each step, the “retention increase rate” is an amount of increase in retention number per unit time. The support system displays a holistic chart for a production situation. The holistic chart is a chart with a time axis (axis corresponding to time) and a step axis (axis perpendicular to the time axis and corresponding to steps). A display mode of each position in the holistic chart depends on whether a retention increase rate for the time point and step corresponding to the position is a negative value, zero, or a positive value, and a difference between the retention increase rate and a rate reference value (a reference value for the retention increase rate). According to the present invention, the “retention increase rate” at each time point in each step, that is, a ratio between IN (input amount of products between time points) and OUT (output amount of products between time points) at each time point for each step is calculated. When the ratio between IN and OUT for each facility is maintained constant, it can be estimated that the facility operational availability is maintained high. Therefore, making it possible to holistically view the appropriateness of the retention increase rate supports the management of the appropriateness of the facility operational availability. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows the outline of an embodiment; FIG. 2 shows a configuration of a production management supporting system according to the embodiment; FIG. 3 shows an example of functions implemented in a management server; FIG. 4 shows an example of a production past record table; FIG. 5 shows an example of an order-of-step table; FIG. 6 shows an example of a retention past record table; FIG. 7 shows a holistic chart according to a comparative example; FIG. 8 shows a first example of a holistic chart according to an embodiment; FIG. 9 shows a second example of the holistic chart according to the embodiment; FIG. 10 shows examples of a relationship among a rate abnormality degree, an acceleration rate abnormality degree, and an influence degree; FIG. 11 shows examples of a relationship between a combination of a retention number, a rate abnormality degree, and ab acceleration rate abnormality degree, and a display mode; FIG. 12 shows an example of specification of any cell in the holistic chart; FIG. 13 shows a first example of a display of a relationship between a step to which the specified cell belongs and steps previous and subsequent to the step; FIG. 14 shows a second example of a display of a relationship between a step to which the specified cell belongs and steps previous and subsequent to the step; FIG. 15 shows a flow of a drawing process of a holistic chart for retention increase rate; FIG. 16 shows a flow of a drawing process of a holistic chart for retention increase rate and retention increase acceleration rate; FIG. 17 shows a flow of a drawing process of a previous relation object and a subsequent relation object; and FIG. 18 shows a flow of a drawing process including control of display modes of the previous relation object and the subsequent relation object. 
DESCRIPTION OF EMBODIMENTS In the following description, an “interface portion” includes one or more interfaces. The one or more interfaces may include at least one of a user interface portion and a communication interface portion. The user interface portion may include at least one I/O device among one or more I/O devices (e.g., input devices (e.g., a keyboard and a pointing device) and an output device (e.g., a display device)) and a display computer or may include an interface device for the at least one I/O device. The communication interface portion may include one or more communication interface devices. The one or more communication interface devices may be one or more communication interface devices of the same type (i.e., one or more network interface cards (NICs)) or two or more communication interface devices of different types (e.g., a NIC and a host bus adapter (HBA)). In the following description, a “storing portion” includes one or more memories. At least one of the memories associated with the storing portion may appropriately be a volatile memory. The storing portion is used mainly during a process performed by the processor portion. The storing portion may also include, in addition to the memories, one or more nonvolatile storing devices (e.g., hard disk drives (HDDs) or solid state drives (SSDs)). In the following description, the “processor portion” includes one or more processors. At least one of the processors is typically a microprocessor such as a central processing unit (CPU), but the processors may also include a processor of another type such as graphics processing unit (GPU). Each of the one or more processors may be a single-core processor or a multi-core processor. The processors may also include a hardware circuit which performs a part or the whole of a process. In the following description, a process may be described using a “program” as a subject. Since a program performs a determined process by being executed by the processor portion, while appropriately using the storing portion (e.g., memory), the interface portion (e.g., communication port), and/or the like, the subject of the processor may also be the processor. The process described using the program as the subject may also be a process performed by the processor portion or an apparatus having the processor portion. The processor portion may also include a hardware circuit (e.g., field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC)) which performs a part or the whole of the process. The program may also be installed from a program source to an apparatus such as a computer. The program source may be, e.g., a program distribution server or a recording medium (e.g., non-transitory recording medium) which is readable by the computer. In the following description, two or more programs may be implemented as one program or one program may be implemented as two or more programs. In the following description, information may be described using such an expression as “a xxx table”, but the information may be expressed using any data structure. Specifically, to show that the information does not depend on any data structure, “a xxx table” can be referred to also as “xxx information”. Also, in the following description, a configuration of each table is exemplary. One table may be divided into two or more tables or all or any of two or more tables may be one table. 
In the following description, a “production management supporting system” may be configured to include one or more computers. Specifically, when, e.g., a computer has a display device and displays information on the display device thereof, the computer may appropriately be the production management supporting system. Alternatively, when, e.g., a first computer (e.g., a management server) transmits information to be displayed to a remote second computer (a display computer, e.g., a management client) and the display computer displays the information (when the first computer displays the information on the second computer), at least the first computer of the first and second computers may appropriately be the production management supporting system. The production management supporting system may also have an interface portion, a storing portion, and a processor portion coupled to the interface portion and the storing portion. The “display of information to be displayed” by the computer in the production management supporting system may be the display of the information on the display device of the computer, or may be the transmission of the information from the computer to the display computer (in the latter case, the display computer displays the information to be displayed). The function of at least one of the management server in the production management supporting system and the production management supporting system itself may also be implemented by a virtual computer (e.g., a virtual machine (VM)) implemented by at least one physical computer (e.g., a physical calculation resource on a cloud basis). At least a portion of the production management supporting system may be software-defined.

A “product” generally means a produced item, i.e., a finished product. In the following description, however, the “product” means each of the items loaded in a production system. Accordingly, in the following description, the “product” may be any of an item before being loaded into the production system, an item currently in the production system (i.e., a “semi-finished product”), and a finished product in a shippable state having passed through all the corresponding steps in the production system.

In the following description, the step immediately before a step can be referred to as the “previous step”, and the step immediately after a step can be referred to as the “subsequent step”. Similarly, the time point immediately before a time point can be referred to as the “previous time point”, and the time point immediately after a time point can be referred to as the “subsequent time point”. Also, in the following description, the step of a step ID x (x is a natural number) may be referred to as a “step x”, while an inter-step range between the step x and a step y may be referred to as an “inter-step range x-y”. The step y is typically the subsequent step of the step x. In addition, in the following description, an interval between a time point j and a time point k may sometimes be referred to as a “time point interval j-k”.

In some cases in the following description, when components of the same type are described without being distinguished from each other, a common part of their reference symbols is used, whereas when components of the same type are distinguished from each other, their full reference symbols are used. For example, when previous relation objects 13 are not distinguished from each other, each of the previous relation objects may be referred to as a “previous relation object 13”.
When the previous relation objects 13 are distinguished from each other, each of them may be referred to as a “previous relation object 13A” or a “previous relation object 13B”.

FIG. 1 shows the outline of an embodiment. Note that, in the following description, “UI”, which stands for user interface, typically refers to a graphical user interface (GUI).

A production management supporting system 100 has an I/F (interface) portion 110, a storing portion 120, and a processor portion 130 coupled to the I/F portion 110 and the storing portion 120. The storing portion 120 stores management information 272 and a support program 150. The processor portion 130 executes the support program 150.

The management information 272 includes a production past record table 351 and a retention past record table 352. The production past record table 351 is a table of past records showing the execution time point, in each step, of each product loaded in the production system. The retention past record table 352 is a table indicating retention past records calculated on the basis of the production past record table 351. Some of the management information 272 may be information collected from facilities in the production system or information input by a user (e.g., a worker or an administrator).

The support program 150 calculates, for each step, a retention increase rate at each time point on the basis of the production past record table 351. For each step, the “retention increase rate” is an example of at least a part of the retention past records, and is an amount of increase in retention number (the number of stocks in work queue) per unit time. For each of steps 1 and 2 of a plurality of steps, the retention increase rate for each time point (in other words, the retention increase rate in time series) is as shown in the graph of FIG. 1.

The support program 150 displays a holistic chart 50 for a production situation. The holistic chart 50 is a chart with a time axis 51 (axis corresponding to time) and a step axis 52 (axis perpendicular to the time axis and corresponding to steps). A display mode of each position in the holistic chart 50 depends on whether a retention increase rate for the time point and step corresponding to the position is a negative value, zero, or a positive value, and a difference between the retention increase rate and a rate reference value (a reference value for the retention increase rate).

The “retention increase rate” at each time point in each step, that is, a ratio between IN (input amount of products between time points) and OUT (output amount of products between time points) at each time point for each step, is used as an evaluation index for facility operational availability. When the ratio between IN and OUT for each facility is maintained constant, it can be estimated that the facility operational availability is maintained high. Visualizing the appropriateness of the retention increase rate for each time point in each step supports the management of the appropriateness of the facility operational availability.

The production past record table 351 is also a table in which information collected from facilities in the production system is recorded. The information recorded in the production past record table 351 includes all information necessary for calculating a retention number for each time point in each step. In the present embodiment, the retention increase rate for each time point in each step is calculated.
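By way of illustration only, the derivation of the retention number and the retention increase rate from such past records could look like the following sketch, assuming records of the form (product ID, time point, step ID, status) as in FIG. 4 and a product-type-to-step-sequence mapping as in FIG. 5; the data values, identifiers, and the event-based counting are hypothetical and do not reproduce the embodiment's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical stand-ins for the production past record table (FIG. 4) and the
# order-of-step table (FIG. 5); all identifiers and values are illustrative only.
past_records = [
    # (product_id, time_point, step_id, status)
    ("P-001", datetime(2020, 3, 19, 9, 0), 3, "end"),     # P-001 leaves step 3 ...
    ("P-001", datetime(2020, 3, 19, 11, 0), 1, "start"),  # ... and starts step 1 later
    ("P-002", datetime(2020, 3, 19, 9, 30), 4, "end"),    # P-002 leaves step 4, not yet in step 1
]
order_of_steps = {"T-01": [3, 1, 2], "T-02": [4, 1, 2]}   # product type -> step sequence
product_type = {"P-001": "T-01", "P-002": "T-02"}

def retention_numbers(step_id, time_points):
    """Retention number for one step at each time point: products that have ended
    the previous step but have not yet started this step (stocks in work queue)."""
    result = []
    for t in time_points:
        count = 0
        for pid, ptype in product_type.items():
            seq = order_of_steps[ptype]
            if step_id not in seq or seq.index(step_id) == 0:
                continue  # the step is not used for this product type, or has no previous step
            prev_step = seq[seq.index(step_id) - 1]
            ended_prev = any(r[0] == pid and r[2] == prev_step and r[3] == "end" and r[1] <= t
                             for r in past_records)
            started_this = any(r[0] == pid and r[2] == step_id and r[3] == "start" and r[1] <= t
                               for r in past_records)
            count += int(ended_prev and not started_this)
        result.append(count)
    return result

def retention_increase_rate(numbers):
    """Increase rate = retention number at a time point minus that at the previous time point."""
    return [later - earlier for earlier, later in zip(numbers, numbers[1:])]

hours = [datetime(2020, 3, 19, 9, 0) + timedelta(hours=i) for i in range(4)]
numbers = retention_numbers(1, hours)
print(numbers, retention_increase_rate(numbers))   # [1, 2, 1, 1] [1, -1, 0]
```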
However, the calculation of the retention increase rate does not require preparation of new information other than the information necessary for calculating the retention number. This is because, for each time point in each step, a difference between the retention number at that time point and the retention number at an adjacent time point (typically, the time point immediately before) can be calculated as the retention increase rate. In this way, according to the present embodiment, it is possible to support the management of the appropriateness of the facility operational availability without new additional information other than the information necessary for calculating the retention number.

Whereas the reference value for the retention number is not necessarily the same in all steps, the rate reference value may be common to a plurality of steps even though the maximum value and the minimum value of the retention increase rate may differ depending on the step. This is because the best ratio between IN and OUT does not depend on the step. Specifically, for example, the rate reference value may be zero. In other words, the rate reference value may be a value that complies with the simple reference that the number of products going out equals the number of products coming in. In this way, setting the rate reference value is easy.

As an example of the holistic chart 50, a heat map illustrated in FIG. 1 can be adopted. Note that differences in the display mode of cells in the heat map may typically be differences in color, but in the drawings, differences in pattern and density are adopted for easy understanding.

The holistic chart 50 has a plurality of cells as an example of a plurality of positions. The holistic chart 50 corresponds to a cell matrix. Each of the plurality of cells is an example of a display object. A cell row corresponds to a step, and a cell column corresponds to a time. The display mode of each cell depends on the retention increase rate for the time point and step corresponding to the cell. The difference in display mode may be expressed in any method such as a difference in color, a difference in density, a difference in pattern, the presence or absence of blinking, or a combination of two or more of these elements. A threshold X may be set, for example, in the management server program 271, as a threshold for the difference (hereinafter, the rate abnormality degree) between the retention increase rate and the rate reference value. The threshold X may also be common to a plurality of steps. The threshold X may be any value. The display mode may be determined by the support program 150 according to the magnitude of the difference between the rate abnormality degree and the threshold X. Note that thresholds X may be prepared separately, such as a threshold for negative retention increase rates and a threshold for positive retention increase rates, but in the present embodiment, the threshold X is one common value. The threshold X may be a threshold for the absolute value of the rate abnormality degree. As a result, it is possible to detect whether the retention increase rate is too high or too low.

In the holistic chart 50 illustrated in FIG. 1, the horizontal axis is the time axis and the vertical axis is the step axis. For each cell, no pattern means that the retention increase rate is zero (in the present embodiment, retention increase rate=rate reference value=0).
For example, a horizontal striped pattern means that the retention increase rate is a negative value. The density of the horizontal striped pattern means the magnitude of the rate abnormality degree between the negative retention increase rate and the rate reference value, in other words, the relative relationship between the rate abnormality degree and the threshold X. The horizontal striped pattern is, for example, darker as the absolute value of the rate abnormality degree is higher (darkest for the absolute value of the rate abnormality degree exceeding the threshold X). Also, for example, a checkered pattern means that the retention increase rate is a positive value. The density of the checkered pattern means the magnitude of the rate abnormality degree between the positive retention increase rate and the rate reference value, in other words, the relative relationship between the absolute value of the rate abnormality degree and the threshold X. The checkered pattern is, for example, darker as the absolute value of the rate abnormality degree is higher (darkest for the absolute value of the rate abnormality degree exceeding the threshold X). 50 The following will describe the present embodiment in detail. Note that in the present embodiment, as an element that is adopted as an evaluation index for production control and affects the display of the holistic chart , there is given a “retention increase acceleration rate” in addition to the “retention increase rate”. The “retention increase acceleration rate” is an amount of increase in the retention increase rate per unit time. Each of the “retention increase rate” and the “retention increase acceleration rate” will be described in detail in the description of the present embodiment. FIG. 2 100 shows a configuration of the production management supporting system . 100 250 210 250 250 210 200 290 The production management supporting system includes a management server and one or more management clients coupled to the management server . To the management server , each of the management clients and a production system is coupled via a communication network (e.g., a local area network (LAN), a wide area network (WAN), or the Internet) . 200 200 250 200 250 250 200 The production system is a production system (e.g., a factory) in which a plurality of different product types of products are loaded and the sequential order of two or more of a plurality of steps is different depending on the product type, and is, for example, a job shop production system or a cell production system. The production system includes a plurality of facilities (apparatuses) for a plurality of steps, a plurality of sensors which regularly perform measurement for a plurality of measurement items with regard to the plurality of steps, and a server which stores a plurality of measurement values regularly obtained using the plurality of sensors and transmits the plurality of measurement values to the management server . From the production system , information (raw data such as, e.g., production dynamic state data, facility data, and quality measurement data) is regularly or irregularly transmitted to the management server and stored in the management server . For example, the information includes, for each product, a product ID, and the start time point and the end time point of each step. 
Note that the production system may be a production system other than the production systems described above (e.g., a job shop production system or a cell production system), for example, a line production system. 210 211 212 213 211 212 The management client has an I/F portion , a storing portion , and a processor portion coupled to the I/F portion and the storing portion . 211 290 221 222 223 222 223 The I/F portion includes an I/F (communication interface device coupled to the communication network ) , an input device (e.g., pointing device or keyboard) , and a display device (device having a physical screen which displays information) . A touch screen integrally including the input device and the display device may also be adopted. 212 213 213 212 231 232 231 250 11 232 The storing portion stores a computer program executed by the processor portion and information used by the processor portion . Specifically, for example, the storing portion stores a management client program and a Web browser . The management client program communicates with the management server and displays a UI such as the cost heat map described above via the Web browser . 250 251 252 253 251 252 The management server has an I/F portion , a storing portion , and a processor portion coupled to the I/F portion and the storing portion . 251 290 261 The I/F portion includes an I/F (communication interface device coupled to the communication network ) . 252 253 253 252 271 272 271 231 272 200 272 272 The storing portion stores a computer program executed by the processor portion and information used by the processor portion . Specifically, for example, the storing portion stores a management server program and management information . The management server program communicates with the management client program . The management information may include information related to a past record such as the start time point and the end time point of each of the steps for each product loaded in the production system . The management information may include the raw data mentioned above. Also, the management information may include, for example, information generated on the basis of a result of analyzing information related to a past record, and various thresholds. 271 231 232 50 Through a cooperative process performed by the management server program , the management client program , and the Web browser , the display of the UI such as the holistic chart is implemented. FIG. 2 FIG. 1 211 251 251 110 212 252 252 120 213 253 253 130 271 231 232 271 150 The relationships between the components shown in and the components shown in are, e.g., as follows. Specifically, of the I/F portions and , at least the I/F portion corresponds to the I/F portion . Of the storing portions and , at least the storing portion corresponds to the storing portion . Of the processor portions and , at least the processor portion corresponds to the processor portion . Of the management server program , the management client program , and the Web browser , at least the management server program corresponds to the support program . FIG. 3 250 shows an example of the functions implemented in the management server . 271 253 301 302 303 271 301 302 303 The management server program is executed by the processor portion to allow the illustrated functions, i.e., an input portion , a display portion , and a control portion to be implemented. In other words, the management server program has the input portion , the display portion , and the control portion . 
301 301 311 311 311 321 322 321 322 The input portion is the function for receiving information. The input portion includes a user operation receiving portion . The user operation receiving portion is a function for receiving a user operation (operation performed on the UI by the user using the input device). The user operation receiving portion includes a period receiving portion and a selection receiving portion . The period receiving portion is a function for receiving a specification of a display target period in a holistic chart described later. The selection receiving portion is a function for receiving a selection operation (e.g., a user operation for selecting an element desired by the user). 302 302 331 333 331 50 333 The display portion is a function for displaying information. The display portion includes a display generating portion and a display performing portion . The display generating portion is a function for generating the display of a UI such as the holistic chart (e.g., drawing it on a memory portion). The display performing portion is a function for performing the display of the generated UI. 303 303 341 342 The control portion is a function for control. The control portion includes a data managing portion and an analysis portion . 341 272 351 352 353 341 272 351 341 200 272 The data management portion manages information included in the management information , for example, a production past record table , a retention past record table , and an order-of-step table . For example, the data management portion acquires a past record data of a product, a worker, and a facility and updates at least a part of the management information (e.g., the production past record table ) on the basis of the past record data. Note that the “past record data” is data showing the past record of production and including, e.g., a product ID (e.g., product number), a step ID (e.g., step number), a time point (e.g., the collection time point of the data or the start time point and the end time point of the step), and a status (showing that, e.g., a process is currently performed in the step or the step was ended). For example, the data managing portion regularly or irregularly collects the past record data from the production system and updates at least a portion of the management information . 342 272 The analysis portion is a function for analyzing the management information . 272 The following will describe an example of a table included in the management information . FIG. 4 351 shows an example of the production past record table . 351 401 402 403 404 401 402 403 404 In the production past record table , each record stores information such as a product ID , a time point , a step ID , and a status . The product ID indicates the ID of a product. The time point indicates the execution time point (start time point or end time point of execution) of a step for the product. Also, the unit of time point is represented in a year/month/day/hour/minute/second unit, but the unit of a time point may be rougher or finer than the unit used in the present embodiment or may also be represented in a different unit. The step ID indicates the ID of a step performed on the product. The status indicates the status of the step performed on the product (e.g., “start” of the step or “end” of the step). FIG. 5 353 shows an example of the order-of-step table . 353 353 501 502 503 FIG. 5 The order-of-step table shows the relationship between a product type and a sequential order of steps. 
Specifically, the order-of-step table 353 has records for individual product types on a one-to-one basis. Each record stores information such as a product type ID 501, a product ID 502, and an order of steps 503. One product type is taken as an example (which is the “product type of interest” in the description of FIG. 5). The product type ID 501 indicates the ID of the product type of interest. The product ID 502 indicates the product ID of each product belonging to the product type of interest. The order of steps 503 indicates step IDs arranged in the sequential order of the steps for the product type of interest.

FIG. 6 shows an example of the retention past record table 352.

The retention past record table 352 has records for individual time points on a one-to-one basis. In the present embodiment, the retention number, the increase rate, and the increase acceleration rate for each time point in each step are calculated by the management server program 271 on the basis of the production past record table 351 and the order-of-step table 353. Specifically, the record for each time point includes information such as a time point 601 as well as a step ID 602, a previous step ID 603, a sub retention number 604, a sub retention increase rate 605, a sub retention increase acceleration rate 606, a retention number 607, a retention increase rate 608, and a retention increase acceleration rate 609. Such pieces of information 601 to 609 are stored by the management server program 271. Hereinafter, one time point and one step will be taken as an example (the “time point of interest” and the “step of interest” in the description of FIG. 6).

The time point 601 indicates the time point of interest. The step ID 602 indicates the ID of the step of interest. The previous step ID 603 indicates the ID of a previous step (the step immediately before) of the step of interest. The ID of the previous step can be specified from the order-of-step table 353 by using the ID of the step of interest as a key. In the example shown in FIG. 6, the previous steps of step 1 are step 3 and step 4, and the subsequent step of step 1 is step 2. In other words, products of different product types enter step 1 from step 3 and step 4, and those products exit from step 1 to step 2.

The sub retention number 604, the sub retention increase rate 605, and the sub retention increase acceleration rate 606 are given for each set of the step ID 602 and the previous step ID 603. The sub retention number 604 indicates the retention number of products that enter the step of interest from the previous step of the step of interest. The sub retention increase rate 605 is an amount of increase, per unit time (time point interval), in the retention number (sub retention number 604) of the products entering the step of interest from the previous step of the step of interest, in other words, a value obtained by subtracting the sub retention number 604 at the previous time point of the time point of interest from the sub retention number 604 at the time point of interest. The sub retention increase acceleration rate 606 is an amount of increase, per unit time, in the retention increase rate (sub retention increase rate 605) of the products entering the step of interest from the previous step of the step of interest, in other words, a value obtained by subtracting the sub retention increase rate 605 at the previous time point of the time point of interest from the sub retention increase rate 605 at the time point of interest.
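Stated as a minimal sketch, the sub retention increase rate and the sub retention increase acceleration rate are simply first and second differences of the sub retention number series for one (step ID, previous step ID) pair; the sample values and the function name below are assumptions, not data from the embodiment.

```python
def first_difference(series):
    """Value at each time point minus the value at the previous time point."""
    return [curr - prev for prev, curr in zip(series, series[1:])]

# Hypothetical sub retention numbers (column 604) for one (step ID, previous step ID)
# pair, sampled at a constant time point interval (e.g. one hour).
sub_retention_number = [3, 5, 8, 8, 6]

# Sub retention increase rate (column 605): first difference of the sub retention number.
sub_increase_rate = first_difference(sub_retention_number)          # [2, 3, 0, -2]

# Sub retention increase acceleration rate (column 606): first difference of the
# sub retention increase rate, i.e. second difference of the sub retention number.
sub_increase_acceleration = first_difference(sub_increase_rate)     # [1, -3, -2]

print(sub_increase_rate, sub_increase_acceleration)
```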
607 608 609 607 604 608 605 609 606 The retention number , the retention increase rate , and the retention increase acceleration rate are given for each step of interest regardless of the number of previous steps given for the step of interest. The retention number indicates the retention number of products that enter the step of interest from all the previous steps of the step of interest, in other words, a sum of all the sub retention numbers corresponding to the time point of interest and the step of interest. The retention increase rate indicates an amount of increase in the retention number of products that enter the step of interest from all the previous steps of the step of interest per unit time, in other words, a sum of all sub retention increase rates corresponding to the time point of interest and the step of interest. The retention increase acceleration rate indicates an amount of increase in the retention increase rate of products that enter the step of interest from all the previous steps of the step of interest per unit time, in other words, a sum of all sub retention increase acceleration rate corresponding to the time point of interest and the step of interest. Note that, in the present embodiment, the time point interval is constant (e.g., one hour), but the time point interval may not necessarily be constant (e.g., the time point interval may be different depending on the time zone). The “time point interval” is an interval between the time point and the latest time point (e.g., immediately before or immediately after that time point). Hereinafter, a holistic chart according to a comparative example will be described, and then a holistic chart according to the present embodiment will be described in detail. FIG. 7 shows the holistic chart according to the comparative example. 750 750 A holistic chart according to the comparative example is a heat map showing the appropriateness of the retention number for each time point in each step. Accordingly, the display mode of each cell in the holistic chart depends on the retention number. Each cell is darker as the retention number is larger, and is lighter as the retention number is smaller. However, holistically viewing the appropriateness of the retention number has the following problems, for example. Problem A: Some stocks in work queue may be prepared intentionally. However, when an intended retention number exceeds a reference value (threshold), the intended retention number will be detected as abnormality (e.g., it is turned into the darkest display). 1 Problem B: When there is no fluctuation in the retention number after the retention number exceeds the reference value, it seems as if the abnormality continues. For example, for step , all the cells are turned into the darkest display, and it looks as if the abnormality continues. 2 5 Problem C: The abnormality can be detected when the retention number exceeds the reference value, while the cause of the abnormality may be present before the retention number reaches the reference value. However, no abnormality is detected even when the cause of the retention number reaching the reference value is present early. For example, for step , the retention number monotonically increased and then exceeded the reference value at time point , but it is not known when the change causing the retention number to exceed the reference value occurred. Problem D: The appropriate value of the retention number is not necessarily the same in all steps. 
Thus, it is difficult to set a suitable reference value for the retention number. FIG. 8 According to the present embodiment, it is possible to solve problems related to a holistic view of the appropriateness of the retention number, for example, any of the above-mentioned Problem A to Problem D. Hereinafter, the details of the holistic chart according to the present embodiment will be described with reference to and subsequent figures. FIG. 8 shows a first example of the holistic chart according to the present embodiment. 50 A holistic chart is a heat map for the retention increase rate. The display mode of each cell in the holistic chart depends on the retention increase rate. For example, the pattern of each cell depends on whether the retention increase rate is a negative value, zero, or a positive value, and the density of each cell depends on the rate abnormality degree. As described above, the “rate abnormality degree” is a difference between the retention increase rate (the absolute value in the present embodiment) and the rate reference value. Specifically, for example, the pattern of cells having a negative retention increase rate is a horizontal striped pattern, and the density of the horizontal striped pattern depends on the rate abnormality degree (the higher the rate abnormality degree, the darker). The pattern of cells having a retention increase rate of zero is no pattern. The pattern of cells having a positive retention increase rate is a checkered pattern, and the density of the checkered striped pattern depends on the rate abnormality degree (the higher the rate abnormality degree, the darker). As described above, in the present embodiment, the “retention increase rate” at each time point in each step, that is, a balance between IN (input amount of products between time points) and OUT (output amount of products between time points) at each time point for each step is used as an evaluation index for production control. When the ratio between IN and OUT for each facility is maintained constant, it can be estimated that the facility operational availability is maintained high. Visualizing the appropriateness of the retention increase rate for each time point for each step supports the management of the appropriateness of the facility operational availability. 1 50 1 1 50 FIG. 8 FIG. 7 For example, the cell row for step in the holistic chart illustrated in shows that gradual decrease and increase in the retention increase rate are repeated, that is, the retention number is almost constant in the display target period. Therefore, even if the time series of the retention number in step is as illustrated in , it is found that there is no problem in the facility operational availability in step in the display target period shown in the holistic chart . Thus, the above-mentioned Problem A and Problem B are solved. 2 50 4 4 5 4 5 FIG. 8 FIG. 7 FIG. 8 Also, for example, the cell row for step in the holistic chart illustrated in shows that the positive retention increase rate tends to increase, and the rate abnormality degree exceeds the threshold X at time point . In the present embodiment, as will be described later, since the rate reference value is zero, the rate abnormality degree can be regarded as the retention increase rate itself. After time point , although the retention increase rate decreases, the retention increase rate is a positive value, and therefore the retention number itself increases. 
As a result, the retention number exceeds the reference value at time point (see ). However, in the example shown in , an abnormality (that the rate abnormality degree exceeds the threshold X) is detected at time point before time point . In this way, it is possible to early detect a sign of the retention number exceeding. As a result, the above-mentioned Problem C is solved. The “rate reference value” is a reference value for the retention increase rate, that is, a reference for a ratio between IN and OUT for products. The ratio is preferably balanced in order not to increase the retention number, and such an idea about the ratio does not depend on the steps. Therefore, the rate reference value can be made common to a plurality of steps (typically, all steps), and in the simplest case, the rate reference value can be set to zero. For this reason, the threshold X can be made common to a plurality of steps (typically, all steps). Therefore, the above-mentioned Problem D is also solved. As described above, the calculation of the “retention increase rate” can be performed using the same information as the existing information necessary for calculating the retention number for each time point in each step. In other words, it is not necessary to prepare new additional information that is not necessary for calculating the retention number for each time point in each step. For example, in order to solve the above-mentioned Problem A to Problem D, a method of repeating trial and error by increasing the information items to be used or increasing the information items to be newly collected in the so-called big data is considered. However, such a method is complicated and burdensome. According to the present embodiment, no new information is required, which contributes to saving the storage resources of the computer. As described above, the new evaluation index of “retention increase rate” contributes to the support for production control. Extensive research conducted on the holistic chart serving as a heat map for the retention increase rate can provide the following further knowledge. It is expected that a sign that the retention number will be in an inappropriate state can be provided to the user earlier by changing the display mode even for a relatively high rate abnormality degree depending on whether the retention increase rate is increasing in the direction of deterioration or increasing in the direction of mitigation, or changing the display mode even for a relatively low rate abnormality degree depending on whether the retention increase rate is decreasing in the direction of deterioration or decreasing in the direction of mitigation. To this end, a new evaluation index other than the “retention increase rate” can be introduced. In addition, it is desirable to calculate such a new evaluation index without requiring new information, as with the “retention increase rate”. 271 351 353 FIG. 6 Therefore, in the present embodiment, the above-mentioned “retention increase acceleration rate” is also adopted as an evaluation index indicating a tendency of increase or decrease in the retention increase rate. The “retention increase acceleration rate” is an amount of increase in the retention increase rate per unit time. The management server program calculates a retention increase acceleration rate for each time point in each step on the basis of the production past record table and the order-of-step table . An example of a specific calculation method is as described with reference to . FIG. 
FIG. 9 shows a second example of the holistic chart according to the present embodiment.

A holistic chart 50 is a heat map for the retention increase rate and the retention increase acceleration rate. The display mode of each cell in the holistic chart 50 depends on the retention increase rate and the retention increase acceleration rate. For example, the pattern of each cell depends on whether the retention increase rate is a negative value, zero, or a positive value, and the density of each cell depends on at least the acceleration rate abnormality degree (of the rate abnormality degree and the acceleration rate abnormality degree). The “acceleration rate abnormality degree” is a difference between the retention increase acceleration rate and an acceleration rate reference value. The “acceleration rate reference value” is a reference value (threshold) for the retention increase acceleration rate, that is, a reference for a ratio between an amount of increase in IN (input amount of products between time points) and an amount of increase in OUT (output amount of products between time points). The ratio is preferably balanced, and such an idea about the ratio does not depend on the steps. Therefore, the acceleration rate reference value can be made common to a plurality of steps (typically, all steps), and in the simplest case, the acceleration rate reference value can be set to zero.

For ease of explanation, time point 5 and step 2 are focused on, as indicated by reference numeral 900 in FIG. 9. According to the time series of the retention increase acceleration rate in step 2, the retention increase acceleration rate gradually increases from time point 1 to time point 4, but rapidly decreases after time point 4. This means that the retention number is increasing as the retention increase rate increases, but that the increase will soon stop. Accordingly, as can be seen from the comparison between FIGS. 8 and 9, the management server program 271 sets the pattern of the cell corresponding to time point 5 and step 2 in the holistic chart 50 to a light pattern (reduces the density).

As described above, in addition to the elements described with reference to FIG. 8, the display mode of each cell in the holistic chart 50 depends on whether the retention increase acceleration rate corresponding to the cell is a negative value, zero, or a positive value, and a difference between the retention increase acceleration rate and the acceleration rate reference value.

More specifically, the display mode of each cell in the holistic chart 50 mainly depends on the retention increase rate and supplementally depends on the retention increase acceleration rate. In other words, the display mode of each cell in the holistic chart 50 depends on an influence degree defined by the following (x) to (z).

(x) Which of a negative value, zero, and a positive value is the retention increase rate?
(y) The rate abnormality degree.
(z) Which of increase (deterioration), no change, and decrease (mitigation) is the tendency of increase or decrease in the acceleration rate abnormality degree?

FIG. 10 shows examples of a relationship among a rate abnormality degree, an acceleration rate abnormality degree, and an influence degree. In FIG. 10, the influence degree has four levels: high, medium, low, and none. For each cell in the holistic chart 50, the pattern depends on whether the retention increase rate is a negative value, zero, or a positive value, and the density depends on the influence degree. The higher the influence degree, the higher the density.
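A small decision function can illustrate how (x) to (z) could be combined into an influence degree. Because the concrete combinations of FIG. 10 are not reproduced here, the mapping, thresholds, and names below are assumed for illustration only.

```python
RATE_REFERENCE = 0.0   # rate reference value (zero in the simplest case)
THRESHOLD_X = 3.0      # threshold for the rate abnormality degree (illustrative)
THRESHOLD_Y = 1.0      # threshold for the acceleration rate abnormality degree (illustrative)

def tendency(acceleration_abnormality):
    """(z): increase (deterioration), no change, or decrease (mitigation)."""
    if acceleration_abnormality > THRESHOLD_Y:
        return "increase"
    if acceleration_abnormality < -THRESHOLD_Y:
        return "decrease"
    return "no change"

def influence_degree(increase_rate, acceleration_rate):
    """Map (x), (y), (z) to one of four levels: high, medium, low, none.
    The concrete combinations are an assumed example, not the table of FIG. 10."""
    if increase_rate == RATE_REFERENCE:
        return "none"
    rate_abnormality = increase_rate - RATE_REFERENCE
    exceeded = abs(rate_abnormality) > THRESHOLD_X
    trend = tendency(acceleration_rate - 0.0)   # acceleration rate reference value = 0
    if exceeded and trend == "increase":
        return "high"
    if exceeded:
        return "medium"
    return "medium" if trend == "increase" else "low"

print(influence_degree(4.0, 2.0))   # high: large positive rate, still deteriorating
print(influence_degree(4.0, -2.0))  # medium: large positive rate, but the increase is easing
print(influence_degree(1.0, 0.0))   # low: small deviation, no clear tendency
```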
Note that a threshold Y for the acceleration rate abnormality degree may be set in the management server program 271, for example. Thresholds Y may be prepared separately, such as a threshold for negative retention increase acceleration rates and a threshold for positive retention increase acceleration rates, but in the present embodiment, the threshold Y is one common value. The threshold Y may be a threshold for the absolute value of the acceleration rate abnormality degree. This makes it possible to detect both deterioration and mitigation of the retention increase acceleration rate. For example, when the acceleration rate abnormality degree is a positive value and its absolute value exceeds the threshold Y, the tendency may be determined to be increase (deterioration). When the acceleration rate abnormality degree is a negative value and its absolute value exceeds the threshold Y, the tendency may be determined to be decrease (mitigation). When the acceleration rate abnormality degree indicates neither deterioration nor mitigation, the tendency may be determined to be no change. The threshold Y can be common to a plurality of steps (typically, all steps).

The display mode of each cell in the holistic chart 50 could instead mainly depend on the retention increase acceleration rate and supplementally depend on the retention increase rate. For example, for each cell, the pattern could depend on whether the retention increase acceleration rate is a negative value, zero, or a positive value. However, as in the present embodiment, it is desirable that the retention increase rate be dominant rather than the retention increase acceleration rate, because this allows the user to recognize whether or not the retention number is increasing.

Also, the display mode of each cell in the holistic chart 50 may further depend on the retention number.

FIG. 11 shows examples of a relationship between a combination of a retention number, a rate abnormality degree, and an acceleration rate abnormality degree, and a display mode.

In the example shown in FIG. 11, the pattern of each cell depends on whether the retention increase rate is a negative value, zero, or a positive value. The density of the cell depends on at least the retention number (of the rate abnormality degree and the retention number). For example, when the retention number exceeds a first threshold, the density of the pattern is high. When the retention number is equal to or smaller than the first threshold and equal to or larger than a second threshold, the density of the pattern is medium. When the retention number is smaller than the second threshold, the density of the pattern is low. Also, whether or not the pattern blinks depends on whether or not the acceleration rate abnormality degree indicates increase (deterioration). Note that the thresholds for the retention number may differ depending on the step.

Instead of the example shown in FIG. 11, the density of the cell may depend on at least the acceleration rate abnormality degree (of the rate abnormality degree and the acceleration rate abnormality degree), and whether or not the pattern blinks may depend on the magnitude of the retention number.

Further extensive research conducted on the holistic chart serving as a heat map for the retention increase rate can provide the following further knowledge.
As with a job shop production system, in a case where the production system is a production system in which a plurality of different product types of products are loaded and the sequential order of two or more of a plurality of steps is different depending on the product type, the holistic chart shows no information on relations among a step of interest and steps before and after the step of interest (sequential relation). 271 50 Thus, in the present embodiment, the management server program displays a display object showing a sequential relation of steps on the holistic chart . FIG. 12 4 4 50 For example, as illustrated in , it is assumed that the user specifies a cell corresponding to time point and step in the holistic chart . 271 4 353 4 3 5 4 6 4 4 3 4 5 271 In this case, the management server program specifies the previous step and subsequent step of step by referring to the order-of-step table using step as a key. Here, it is assumed that step and step are specified as the previous steps of step , and step is specified as the subsequent step of step . The previous time point of time point is time point and the subsequent time point of time point is time point . Here, the management server program performs at least one of the following (a) and (b). Note that when there is no previous step or no subsequent step for the specified step for any product type, then (a) or (b) may not be performed. 271 13 3 3 3 5 13 13 271 14 5 6 14 FIG. 13 FIG. 13 (a) The management server program displays, for each of one or more previous cells of the specified cell, a previous relation object that is a display object showing the association between the previous cell and the specified cell. The “previous cell” is a cell corresponding to the previous time point and the specified previous step. In this example, the previous cells are a cell corresponding to time point and step , and a cell corresponding to time point and step . In the example shown in , each of previous relation objects A and B is a line having no directivity, but instead of such a line, a line having directivity (e.g., an arrow showing the sequential relation of the steps) or other display objects may be used. (b) The management server program displays, for each of one or more subsequent cells of the specified cell, a subsequent relation object that is a display object showing the association between the subsequent cell and the specified cell. The “subsequent cell” is a cell corresponding to the subsequent time point and the specified subsequent step. In this example, the subsequent cell is a cell corresponding to time point and step . In the example shown in , a subsequent relation object is a line having no directivity, but instead of such a line, a line having directivity (e.g., an arrow showing the sequential relation of the steps) or other display objects may be used. FIG. 13 4 4 In the example shown in , the user can recognize the relation among the specified cell (time point and step ) and the steps before and after the specified cell. 3 3 3 4 4 In addition, for example, comparing the display mode of the specified cell with the display mode of the previous cell allows the user to estimate which of the previous cells corresponds to the step whose product output affects the rate abnormality degree (retention increase rate) at the time point and step which correspond to the specified cell. 
For example, since the pattern of the previous cell corresponding to step is a horizontal striped pattern, the user can estimate that “at time point and step , OUT is larger than IN, which is the cause of the retention increase rate at time point and step being a positive value.” 4 4 5 6 In addition, for example, comparing the display mode of the specified cell with the display mode of the subsequent cell allows the user to estimate which of the subsequent cells corresponds to the time point and step whose rate abnormality degree (retention increase rate) is affected by the product output of the step corresponding to the specified cell. For example, since the pattern of the specified cell and the pattern of the subsequent cell are both horizontal striped patterns but the subsequent cell has a lighter pattern than the specified cell, the user can estimate that “at time point and step , OUT is smaller than IN, which is the cause of the retention increase rate at time point and step being a positive value.” 271 13 14 271 50 13 14 352 FIG. 14 The management server program may turn at least one of the previous relation objects and the subsequent relation object into a display mode depending on at least the retention increase rate of the retention increase rate and the retention increase acceleration rate in the time point interval and the inter-step range corresponding to the object as illustrated in , in response to a predetermined user operation (or automatically without the user operation). Note that the management server program may turn the display mode of each cell in the holistic chart into a uniform display mode (e.g., a display mode in no pattern and white) in order to improve the visibility of the display mode of the object. For each of the previous relation objects and the subsequent relation object , the retention increase rate and the retention increase acceleration rate in the time point interval and the inter-step range corresponding to the object are specified from the retention past record table using the time point interval and the inter-step range as keys. 13 13 13 271 For example, the display mode of at least one previous relation object may depend on the sub retention increase rate for the inter-step range and the time point interval shown by the previous relation object . Specifically, for example, the display mode may depend on whether the sub retention increase rate is a negative value, zero, or a positive value, and a difference between the sub retention increase rate and its reference value. The display mode of the previous relation object makes it easy for the user to estimate how much the product output of the previous time point and the previous step affects the rate abnormality degree at the specified time point and step. Note that a reference value serving as a threshold of the sub retention increase rate may be set in the management server program for each inter-step range, for example. 13 13 271 In addition, for example, the display mode of at least one previous relation object may further depend on the sub retention increase acceleration rate for the inter-step range and the time point interval shown by the previous relation object . Specifically, for example, the display mode may depend on whether the sub retention increase acceleration rate is a negative value, zero, or a positive value, and a difference between the sub retention increase acceleration rate and its reference value. 
The display mode of the previous relation object makes it easier for the user to estimate how much the product output of the previous time point and the previous step affects the rate abnormality degree at the specified time point and step. Note that a reference value serving as a threshold of the sub retention increase acceleration rate may be set in the management server program for each inter-step range, for example. 14 14 14 In addition, for example, the display mode of at least one subsequent relation object may depend on the sub retention increase rate for the inter-step range and the time point interval shown by the subsequent relation object . Specifically, for example, the display mode may depend on whether the sub retention increase rate is a negative value, zero, or a positive value, and a difference between the sub retention increase rate and its reference value. The display mode of the subsequent relation object makes it easy for the user to estimate how much the product output of the specified time point and step affects the rate abnormality degree at the subsequent time point and the subsequent step. 14 14 14 Furthermore, for example, the display mode of at least one subsequent relation object may further depend on the sub retention increase acceleration rate for the inter-step range and the time point interval shown by the subsequent relation object . Specifically, for example, the display mode may depend on whether the sub retention increase acceleration rate is a negative value, zero, or a positive value, and a difference between the sub retention increase acceleration rate and its reference value. The display mode of the subsequent relation object makes it easier for the user to estimate how much the product output of the specified time point and step affects the rate abnormality degree at the subsequent time point and the subsequent step. FIG. 14 13 14 In the example shown in , with respect to the display mode of each of the previous relation objects and the subsequent relation object , the line type depends on whether or not the sub retention increase rate is a positive value, and the line thickness depends on the absolute value of the sub retention rate. 13 3 4 3 4 3 3 4 4 Accordingly, for example, the previous relation object A with a dark solid line shows that the sub retention increase rate corresponding to the time point interval - and the inter-step range - is a positive value, and its absolute value is relatively large (e.g., larger than the reference value), that is, the sub retention number tends to increase. As a result, the user can estimate that the product output of time point and step has a great influence on the retention number at the specified time point and step . 13 3 4 5 4 3 5 4 4 For example, the previous relation object B with a light broken line shows that the sub retention increase rate corresponding to the time point interval - and the inter-step range - is a negative value, and its absolute value is relatively small, that is, the sub retention number tends to decrease. As a result, the user can estimate that the product output of time point and step has a small influence on the retention number at the specified time point and step . 
Similarly, for example, the subsequent relation object 14 with a light broken line shows that the sub retention increase rate corresponding to the time point interval 4-5 and the inter-step range 4-6 is a negative value, and its absolute value is relatively small, that is, the sub retention number tends to decrease. As a result, the user can estimate that the product output of the specified time point 4 and step 4 has a small influence on the retention number at the subsequent time point 5 and the subsequent step 6.
Note that the display mode of each of the previous relation objects 13 and the subsequent relation object 14, for example, the thickness of the line, may depend on the sub retention increase acceleration rate corresponding to the object.
The following will describe some examples of processing performed in the present embodiment. In the present embodiment, the management server program 271 receives, for example, through the UI, a specification of a holistic chart to be displayed for (A) retention increase rate, (B) retention increase rate and retention increase acceleration rate, or (C) retention increase rate, retention increase acceleration rate, and retention number. When (A) is specified, the processing flow shown in FIG. 15 is performed. When (B) is specified, the processing flow shown in FIG. 16 is performed. Note that the processing flow when (C) is specified is not illustrated, but those skilled in the art will appreciate that flow by referring to the description herein.
FIG. 15 shows a flow of a drawing process of a holistic chart for retention increase rate.
The management server program 271 calculates a retention increase rate for each time point in each step on the basis of the production past record table 351 and the order-of-step table 353 (S1501). The management server program 271 performs S1502 to S1507 for each set of time point and step. One time point and one step will be taken as an example (the time point of interest and the step of interest in the description of FIG. 15).
The management server program 271 determines whether the retention increase rate corresponding to the time point of interest and the step of interest is a negative value, zero, or a positive value (S1502).
When the retention increase rate is a positive value, the management server program 271 determines the pattern of the cell corresponding to the time point of interest and the step of interest to be a checkered pattern (S1503). Further, the management server program 271 determines the density of the pattern on the basis of the relationship between the threshold X and the rate abnormality degree (a difference between the retention increase rate and the rate reference value (=0)) (S1504). Here, the larger the absolute value of the rate abnormality degree (i.e., the absolute value of the retention increase rate), the higher the density.
When the retention increase rate is a negative value, the management server program 271 determines the pattern of the cell corresponding to the time point of interest and the step of interest to be a horizontal striped pattern (S1505). Further, the management server program 271 determines the density of the pattern on the basis of the relationship between the threshold X and the rate abnormality degree (S1506). Here, the larger the absolute value of the rate abnormality degree (i.e., the absolute value of the retention increase rate), the lower the density.
When the retention increase rate is zero, the management server program 271 determines the pattern of the cell corresponding to the time point of interest and the step of interest to be no pattern (S1507).
The management server program 271 draws a cell of each set of time point and step on the basis of the results of S1503 to S1507 (S1508). As a result, the holistic chart 50 illustrated in FIG. 8 is drawn.
FIG. 16 shows a flow of a drawing process of a holistic chart for retention increase rate and retention increase acceleration rate.
The management server program 271 calculates a retention increase rate and a retention increase acceleration rate for each time point in each step on the basis of the production past record table 351 and the order-of-step table 353 (S1601). The management server program 271 performs S1602 to S1611 for each set of time point and step. One time point and one step will be taken as an example (the time point of interest and the step of interest in the description of FIG. 16). In the description of S1602 to S1611, FIG. 10 will be referred to as appropriate.
The management server program 271 determines the influence degree according to the retention increase rate and the retention increase acceleration rate corresponding to the time point of interest and the step of interest (S1602).
When the influence degree determined in S1602 is the first influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a first display mode (S1603). According to the first display mode, the pattern is a checkered pattern, and the density of the pattern is high (e.g., the larger the absolute value of the retention increase acceleration rate, the higher the density).
When the influence degree determined in S1602 is the second influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a second display mode (S1604). According to the second display mode, the pattern is a checkered pattern, and the density of the pattern is low (e.g., the smaller the absolute value of the retention increase acceleration rate, the lower the density).
When the influence degree determined in S1602 is the third influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a third display mode (S1605). According to the third display mode, the pattern is a checkered pattern, and the density of the pattern is medium.
When the influence degree determined in S1602 is the fourth influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a fourth display mode (S1606). According to the fourth display mode, the pattern is a horizontal striped pattern, and the density of the pattern is low.
When the influence degree determined in S1602 is the fifth influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a fifth display mode (S1607).
According to the fifth display mode, the pattern is a horizontal striped pattern, and the density of the pattern is high (e.g., the larger the absolute value of the retention increase acceleration rate, the higher the density).
When the influence degree determined in S1602 is the sixth influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a sixth display mode (S1608). According to the sixth display mode, the pattern is a horizontal striped pattern, and the density of the pattern is medium.
When the influence degree determined in S1602 is the seventh influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a seventh display mode (S1609). According to the seventh display mode, the pattern has no pattern, and the density of the cell is low.
When the influence degree determined in S1602 is the eighth influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be an eighth display mode (S1610). According to the eighth display mode, the pattern has no pattern, and the density of the pattern is low.
When the influence degree determined in S1602 is the ninth influence degree described in FIG. 10, the management server program 271 determines the display mode of the cell corresponding to the time point of interest and the step of interest to be a ninth display mode (S1611). According to the ninth display mode, the pattern has no pattern, and the density of the pattern is zero (e.g., the cell is white).
The management server program 271 draws a cell of each set of time point and step on the basis of the results of S1602 to S1611 (S1612). As a result, the holistic chart 50 illustrated in FIG. 9 is drawn.
FIG. 17 shows a flow of a drawing process of a previous relation object and a subsequent relation object.
The management server program 271 receives a specification of a cell in the holistic chart 50 (S1701).
The management server program 271 specifies the previous step and subsequent step of the step to which the specified cell belongs from the order-of-step table 353 (S1702).
The management server program 271 draws, for each of all the previous cells of the specified cell, the previous relation object for connecting the previous cell and the specified cell on the holistic chart 50 (S1703). As described above, each previous cell is a cell belonging to the previous time point and previous step of the time point and step to which the specified cell belongs.
The management server program 271 draws, for each of all the subsequent cells of the specified cell, the subsequent relation object for connecting the subsequent cell and the specified cell on the holistic chart 50 (S1704). As described above, each subsequent cell is a cell belonging to the subsequent time point and subsequent step of the time point and step to which the specified cell belongs.
FIG. 18 shows a flow of a drawing process including control of display modes of the previous relation object and the subsequent relation object.
Note that, in the example shown in FIG. 18, both the display modes of the previous relation object and the subsequent relation object depend on the sub retention increase rate but do not depend on the sub retention increase acceleration rate. However, both the display modes of the previous relation object and the subsequent relation object may depend on the sub retention increase acceleration rate in addition to the sub retention increase rate.
The management server program 271 receives a specification of a cell in the holistic chart 50 (S1801).
The management server program 271 changes the color of the display mode of all cells in the holistic chart 50 to white (an example of the default display mode) (S1802).
The management server program 271 specifies the previous step and subsequent step of the step to which the specified cell belongs from the order-of-step table 353, also refers to the retention past record table 352, and determines, for each set of the target time point interval and the target inter-step range, the sub retention increase rate (S1803). The management server program 271 performs S1804 to S1808 for each set of the target time point interval and the target inter-step range. One target time point interval and one target inter-step range will be taken as an example (in the description of FIG. 18, the target time point interval of interest and the target inter-step range of interest). Note that the “target time point interval” is between the time point to which the specified cell belongs and the previous time point or the subsequent time point. The “target inter-step range” is between the step to which the specified cell belongs and the previous step or the subsequent step.
The management server program 271 determines whether or not the sub retention increase rate corresponding to the target time point interval of interest and the target inter-step range of interest is higher than zero (S1804).
When the determination result of S1804 is true (S1804: TRUE), the management server program 271 determines the line type of the object corresponding to the target time point interval of interest and the target inter-step range of interest to be a solid line (S1805). Further, the management server program 271 determines the density of the object to be a density according to the absolute value of the sub retention increase rate (e.g., the larger the absolute value is, the darker the object is) (S1806).
On the other hand, when the determination result of S1804 is false (S1804: FALSE), the management server program 271 determines the line type of the object corresponding to the target time point interval of interest and the target inter-step range of interest to be a broken line (S1807). Further, the management server program 271 determines the density of the object to be a density according to the absolute value of the sub retention increase rate (e.g., the smaller the absolute value is, the lighter the object is) (S1808).
The management server program 271 draws an object for each set of the target time point interval and the target inter-step range on the basis of the results of S1804 to S1808 (S1809). As a result, the previous relation objects 13 and the subsequent relation object 14 illustrated in FIG. 14 are drawn.
While the embodiment of the present invention has been described heretofore, the embodiment is an example for describing the present invention and is not intended to limit the scope of the present invention to the embodiment.
The present invention can be implemented even in various other forms.
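To make the cell-drawing decision described in the FIG. 15 flow above more concrete, the following is a minimal Python sketch, not the patented implementation: the function and variable names, the clamping of the density to [0, 1], and the way the threshold X is used for normalization are all illustrative assumptions.

```python
# Illustrative sketch (not the patented implementation) of the FIG. 15-style
# cell decision: the pattern encodes the sign of the retention increase rate,
# and the density encodes the magnitude of the rate abnormality degree.

def cell_display_mode(retention_increase_rate, threshold_x, rate_reference=0.0):
    """Return (pattern, density) for one cell of the holistic chart."""
    rate_abnormality = retention_increase_rate - rate_reference      # input to S1502
    magnitude = min(abs(rate_abnormality) / threshold_x, 1.0)        # assumed normalization
    if rate_abnormality > 0:
        # S1503/S1504: checkered; larger |abnormality degree| -> denser pattern
        return "checkered", magnitude
    if rate_abnormality < 0:
        # S1505/S1506: horizontal stripes; larger |abnormality degree| -> lighter pattern
        return "horizontal_stripes", 1.0 - magnitude
    # S1507: zero rate -> no pattern
    return "none", 0.0


def draw_holistic_chart(retention_rates, threshold_x):
    """retention_rates: {(time_point, step): retention_increase_rate} -> display modes (S1508)."""
    return {cell: cell_display_mode(rate, threshold_x)
            for cell, rate in retention_rates.items()}


if __name__ == "__main__":
    rates = {(3, 3): 0.4, (3, 5): -0.1, (4, 4): 0.2, (5, 6): 0.0}
    print(draw_holistic_chart(rates, threshold_x=0.5))
```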
Handwriting Analysis, or Graphology, is the study and analysis of writing to reveal your true personality. Your handwriting, often called "brain writing" by psychologists, is the product of your mind, body and emotional experiences, and is as unique as your fingerprints and facial features. For the graphologist, it's not what you write that's important, but how you write, thus revealing more than 100 character traits hidden within your subconscious mind. Today Graphology is used worldwide in business, education, personal counselling and compatibility studies.
http://yourwritingspeaks4u.us/
At 99designs we love great design, and a big part of good design is the use of color. We were interested to see how designers make use of color in their designs, so we built an automatic color extractor to enable us to analyze color usage at a massive scale. Think of a design you love. Part of the story it tells is in the colors it uses, the contrast of light and shade, and the subtle emotions those colors convey. Images are made up of pixels, and if you just count the colors of every pixel, you don't get anything like the list of colors above. This post is about our journey toward automatically working out an image's color palette in a way that is close to what a real person would pick. Quick quiz: What's the primary color in this image? The problem here is that when you simply count the number of pixels with a given color, the background color nearly always dominates. We need to work out the background color so we can exclude it. We found a simple approach that works well in most cases: if the pixels in the corners of the image are all the same color, that color is the background. Let's look again. In this case, the corner pixels are: #ffffff, #ffffff, #ffffff, #ffffff — all white. We can thus safely exclude white as the background color, leaving red as the most frequent color. Nice! Quiz time again: how many colors are in this image? Aww, not that great. It has two very similar blues, and misses the yellow entirely. The problem is that colors that are not exactly the same are treated as completely different by a computer. All the different shades and variants of colors mess up the counts. Humans easily group sets of colors together though, and ideally our program would do the same. So how can we judge if two colors look the same to the human eye or not? Fortunately, a bit of color theory comes in handy here. On a computer, colors are usually represented in the RGB color space. This means that a color is made up of three components: Red, Green and Blue. To work out the distance between two colors you can use the Euclidean distance of the components. Comparing colors in the RGB color space works ok, but it's not perfect. Differences in RGB don't accurately match how the human eye perceives color. For example, yellow often appears brighter to humans than a blue of the same brightness. Also, humans can perceive smaller differences in green hues than in pink. A better way to compare colors is to convert to the Lab color space. Lab is designed to allow comparison in a way that matches how the human eye perceives color. A color in the Lab color space has three components: "L" represents lightness, the "a" component ranges from green to magenta, the "b" component ranges from blue to yellow. The distance between colors in the Lab color space is often called the delta-E. A delta-E of less than 1.0 means that the human eye cannot tell the difference between two colors. We can take advantage of this to group similar colors together in a way that matches how the eye would do it naturally. As it turns out, there's a whole lot of situations in which extra colors get added: antialiasing on edges of shapes, image compression artifacts, textures and gradients all add to the number of different colors that occur, even if they don't change the overall palette. This technique helps to deal with these issues. It can leave behind very low counts of some noisy colors — an additional threshold filter helps to clean up colors that don't occur very often.
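Putting those pieces together, here is a minimal, self-contained sketch of the approach described above: detect the background from the corner pixels, then group perceptually similar colors by their delta-E distance in Lab space. This is not 99designs' production code; it assumes Pillow is installed, uses the standard sRGB-to-Lab (D65) conversion, and the greedy grouping with a delta-E threshold of 10.0 is an illustrative choice.

```python
# Sketch only: corner-based background detection + Lab delta-E grouping.
from collections import Counter
from math import sqrt
from PIL import Image  # assumes Pillow is installed

def srgb_to_lab(rgb):
    """Convert an (R, G, B) tuple in 0-255 sRGB to CIE Lab (D65 white point)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047   # sRGB -> XYZ, D65 normalized
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
    f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(lab1, lab2):
    """CIE76 delta-E: plain Euclidean distance in Lab space."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def palette(path, group_threshold=10.0, top=5):
    im = Image.open(path).convert("RGB")
    w, h = im.size
    corners = [im.getpixel(p) for p in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]]
    background = corners[0] if len(set(corners)) == 1 else None
    counts = Counter(im.getdata())
    counts.pop(background, None)  # exclude the background color, if one was found
    # Greedy grouping: fold each color into the first existing group within the threshold.
    # (Fine for a sketch; slow for images with very many distinct colors.)
    groups = []  # each entry: [representative_rgb, representative_lab, total_count]
    for rgb, n in counts.most_common():
        lab = srgb_to_lab(rgb)
        for g in groups:
            if delta_e(lab, g[1]) < group_threshold:
                g[2] += n
                break
        else:
            groups.append([rgb, lab, n])
    return [(g[0], g[2]) for g in sorted(groups, key=lambda g: -g[2])[:top]]
```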
Now we can work out what colors are used in a design, but it turns out that a lot of the colors we find aren't that visually interesting. Which colors are the most interesting here? In fact, it seems like grays and subdued shades are often used as fillers to give highlights more impact. How could we isolate these distinctive colors? Excellent question! Lab coats on! After flicking through designs until our retinas got tired, we turned to color theory again. It turns out color theory has a name for the concept of color interestingness: "saturation". Cutting out colors with a low saturation gives you much more interesting results. We can now automatically work out the palette for an image that closely matches what a human would pick. At 99designs we love open source — so we're releasing Colorific, our automatic color palette detector. Check it out at GitHub. Colorific has been tested on Python 2.7, but if you have any problems please submit an issue on GitHub. Usage: Colorific is designed to run in a streaming manner. You feed in image filenames as input and colorific spits out the filename and the color palette as output. Tune in next time and we'll tell you all about how we apply Colorific at a massive scale to analyse the huge number of incoming designs at 99designs.
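As a small illustration of the saturation filtering mentioned above (not Colorific's actual code), the sketch below drops near-grey colors before building a palette. It uses HLS saturation from the standard library as a stand-in for "color interestingness"; the 0.15 cut-off is an assumed value.

```python
# Illustrative saturation filter: keep only colors with enough saturation.
import colorsys

def is_interesting(rgb, min_saturation=0.15):
    r, g, b = (c / 255.0 for c in rgb)
    _, _, s = colorsys.rgb_to_hls(r, g, b)  # HLS: hue, lightness, saturation
    return s >= min_saturation

def filter_palette(color_counts, min_saturation=0.15):
    """color_counts: iterable of ((r, g, b), count) pairs."""
    return [(rgb, n) for rgb, n in color_counts if is_interesting(rgb, min_saturation)]

# Keeps the red, drops the grey:
print(filter_palette([((200, 30, 40), 120), ((128, 128, 128), 400)]))
```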
https://en.99designs.fr/blog/engineering/color-analysis/
Creative Subconscious came out of research into the connections between creativity and human potential, through examining the creative process as a gateway to naturally induced non-ordinary states. At Creative Subconscious we believe in the healing power of creativity and that everyone has an inner artist, who, once awakened, brings out an authentic connection to self and the outer world. The inner artist taps into the subconscious mind and provides the ability to see the bigger picture and to make creative decisions from an expanded open-minded state. Our founder Marina Kurikhina offers private consultations, workshops, and retreats. She also has a community of creative practitioners, whom she collaborates with or supports by sharing their practices through the Creative Subconscious platform. Each one of those practitioners has helped to form and strengthen the Creative Subconscious vision.
https://www.creativesubconscious.org/about-me
When you hear the words “self love,” do you think of terms like vanity, narcissism, and arrogance? Or do you think of concepts like fulfillment, empathy, and compassion? Self love, according to Psychology Today, is “a state of appreciation for oneself that grows from actions that support our physical, psychological, and spiritual growth.” Self love is so much more than the physical state of “looking good” or the mental state of “feeling good.” Rather, it’s an iterative process that encompasses physical, emotional, and mental well-being on an intrapersonal and interpersonal level. Psychology Today continues, saying: “Self love is dynamic; it grows by actions that mature us. When we act in ways that expand self love in us, we begin to accept much better our weaknesses as well as our strengths, have less need to explain away our short-comings, have compassion for ourselves as human beings struggling to find personal meaning, are more centered in our life purpose and values, and expect living fulfillment through our own efforts.” Essentially, loving ourselves, accepting ourselves, and showing compassion for ourselves helps us live a more purposeful, value-driven life. And when we live our life with resolve and empathy, we become better partners, coworkers, parents, friends, and citizens. So how do you go about cultivating self love? Psychology Today outlines several strategies: - Become mindful. Reflect on your personal values and act on them. Guide your decisions and actions based on your principles rather than the expectations of others. - Act on what you need rather than what you want. Stay focused on your future needs rather than fleeting pressures from others. Move forward in a strong and centered way, and cultivate a life based on healthy, positive goals. - Practice self care. Take care of yourself and your needs, like sleep, nutrition, exercise, relationships, and hygiene, so you can take better care of others. - Set boundaries. Don’t stretch yourself too thin and don’t be afraid to say no. Devote more time to the people and activities that matter most in your life, including yourself. - Forgive yourself. Let go of past mistakes. Avoid fixating on that which you cannot change and instead allow yourself to learn and grow. Here at Om Med Spa, we believe profoundly in the concept of self love as a vehicle for empowerment and purpose. We are here to help you cultivate self love on a multitude of levels, whether that means providing a serene hour of reprieve, helping you achieve your self-care goals, or simply listening and learning about your needs. There is no “one-size-fits-all” solution for achieving a state of self love. Rather, it’s an evolving, complex process, and one that we would be honored to be a part of.
https://ommedspa.com/hello-world/
Monday, 18 May 2015 Nuts and Bolts of Travel in New South Wales (NSW) Our visit to Australia was restricted to Sydney and the Greater Blue Mountains Region because much of our time was well spent on catching up with the girls. We used both the family car, Land Cruiser, as well as, the public transport for our travels. Public transport though expensive is very comfortable and convenient. Roads – Most of the highways are broad and well-marked. The State roads could be narrower and two-laned, while the roads in the city and the residential areas could range from broad to narrow, most with very wide footpaths. Since Australia is a huge country with a low density of population, the roadways are not the best but most certainly much better than that of India. Road works were a common sight, having said that, what stood out were the safety measures that were in place during the process. Traffic is well regulated, with occasional miscreants breaking the rules. Honking is almost Nil. Peak hours could witness heavy traffic in the city, with moderate traffic in the towns and villages. A couple of times, we spotted huge trucks tailgating cars; Don informed us that this was against the rules. We also witnessed a huge truck ahead of us travelling above its 20km/hr speed limit on a curvaceous downslope, which at a point veered much out its lane and missed crashing into an oncoming vehicle! Since much of the traffic is organized and under surveillance, a couple of these incidents really do stand out. Taking the family car to the city was an expensive affair; the exorbitant parking charges ranging from $2 - $4 per hour, not to forget the limited parking spaces posing an even bigger problem! Also, when travelling in a private vehicle, arrangements need to be made for an e-TAG or a Visitor’s e-PASS as there are no toll booths like the ones we see in India. Ours was taken care of by the family. Railways – Travelling in the trains of NSW has been a sheer joy. With elevators at every station that we came across, as well as, facilities for ramps such that the train is wheelchair accessible made us realise how friendly this nation is towards differently-abled people. Infact, we noticed a lovely lady in an electric wheel chair, all by herself, waiting for a downtown train to arrive. She seemed to have lost both her legs knees down, and yet, managed herself beautifully. At stations where the platforms were curved, the gap between the train and the platform was not only bothersome but also dangerous. At a station called Katoomba, many of those in their golden years were pretty nervous to disembark the train because the gap between the train and the platform indeed seemed like a gaping chasm! Having said that, the stations are kept very clean with good seating arrangements. Announcements about the trains arriving, departing and the stations at which they would halt at kept playing. The trains too are very clean, most of them being air-conditioned, however, we did travel on an older train which did not have an air-conditioner. We also travelled in a train which had reversible seats. Announcements about the upcoming stations keep running in the train too. Ferries – It was a dizzy kind of feeling walking over their very well maintained wharfs. The ferries are wonderfully clean. Within the Ferry, there are comfortable seats on two decks, most of which are sheltered while the others are in the open. 
It was indeed a thrill sitting on the upper deck with the wind blowing hard and the sun beating down, both in completion as to who could strike harder! The sun won and we had to go inside! Travel can be expensive, however, the “Opal Card” does help on saving some precious bucks. At the time of our travel, a maximum of $15 were deducted from the Opal card, per day per head, even if the cost of our journey totaled to about $45 per head. We were not charged more than $60 per week. This includes travel by public transport, that is, by local trains, light rail, public buses and Government operated ferry’s in NSW. On a Sunday, any or all of these services together are capped at $2.50 per head for the whole day. Enthusiastic and warm staff made our journeys even more sprightly. On a couple of occasions, we were greeted with a “Namaste” and a broad smile. At the end of it all, we had got accustomed to the regular greeting, “Hey, how are you doing?” and “Have a good day mate.” When it comes to driving within the National Parks, it is better to avail of a National Park Pass if one decides to travel to seven or more National parks or the vehicle entry fees would simply add up. Leaving aside the travel part, it was only when we reached Australia that we realized that the sockets there use flat pins in a V shape unlike the round pins used in India. Thankfully, Alvi and Don had spare plug converters! For our travels, we took the Lebara mobile facility which had free calling to India and within Australia, which also doubled up as a data card. At the time of our travel, as a trial phase, Telstra was offering free Wi-Fi at certain hotspots, a 30 minute session. When it came to accommodation, we were very fortunate to be living with Alvi and Don, else, accommodation could be very pricey, especially so, during the peak season which includes the School Holidays. Dining in Restaurants too is an expensive affair and hence, our eat outs were much restricted to take-away foods and cafes, except for, on two occasions, one, which was a treat from Don. Thank you Don! Since Gluten Free foods were easily available, I managed to gobble up on loads of bread, pizzas and biscuits! To sum up our core travel experience in a nutshell - it was great seeing such a mix of people of various nationalities living together in a given area. It really did widen our own concept of the meaning of brotherhood. At one point, geographical boundaries didn’t seem to hold any meaning. I not only began questioning myself about so called nationalism and its meaning in today’s context, but it also made me ponder on the truth behind the words, “Unity in Diversity”! Travel does opens up one’s mind and makes one question one’s own beliefs, it certainly has questioned many of mine…. -Aarina P.S. : In NSW driving is permitted on Indian driving licence for three months, as long as the licence is in English. Hence I only had to brush up on the local rules, some test drives with Don and we were all set to go. -Delson
Equipment and deployment strategies for remote passive acoustic sensing of marine environments must balance memory capacity, power requirements, sampling rate, duty-cycle, deployment duration, instrument size, and environmental concerns. The impact of different parameters on the data and applicability of the data to the specific questions being asked should be considered before deployment. Here we explore the effect of recording and detection parameters on marine mammal acoustic data across two platforms. Daily classifications of marine mammal vocalizations from two passive acoustic monitors with different subsampling parameters, an AURAL and a Passive Aquatic Listener (PAL), collocated in the Bering Sea were compared. The AURAL subsampled on a pre-set schedule, whereas the PAL sampled via an adaptive protocol. Detected signals of interest were manually classified in each dataset independently. The daily classification rates of vocalizations were similar. Detections from the higher duty-cycle but lower sample rate AURAL were limited to species and vocalizations with energy below 4 kHz precluding detection of echolocation signals. Temporal coverage from the PAL audio files was limited by the adaptive sub-sampling protocol. A method for classifying ribbon (Histriophoca fasciata) and bearded seal (Erignathus barbatus) vocalizations from the sparse spectral time histories of the PAL was developed. Although application of the acoustic entropy as a rapid assessment of biodiversity was not reflective of the number of species detected, acoustic entropy was robust to changes in sample rate and window length.
http://ccom.unh.edu/publications/assessing-cross-platform-performance-marine-mammal-indicators-between-two-collocated
On 29 May, we will celebrate the 70th anniversary of UN peacekeeping! The United Nations Peacekeeping began in 1948. Its first mission was in the Middle East to observe and maintain the ceasefire during the 1948 Arab–Israeli War. Since then, United Nations peacekeepers have taken part in a total of 63 missions around the globe, 17 of which continue today. The peacekeeping force as a whole received the Nobel Peace Prize in 1988. Though the term “peacekeeping” is not found in the United Nations Charter, the authorization is generally considered to lie in (or between) Chapter 6 and Chapter 7. Chapter 6 describes the Security Council’s power to investigate and mediate disputes, while Chapter 7 discusses the power to authorize economic, diplomatic, and military sanctions, as well as the use of military force, to resolve disputes. The founders of the UN envisioned that the organization would act to prevent conflicts between nations and make future wars impossible; however, the outbreak of the Cold War made peacekeeping agreements extremely difficult due to the division of the world into hostile camps. Following the end of the Cold War, there were renewed calls for the UN to become the agency for achieving world peace, and the agency’s peacekeeping dramatically increased, authorizing more missions between 1991 and 1994 than in the previous 45 years combined. The League of Nations-controlled International Force in the Saar (1934–35) may be “the first true example of an international peace observation force”. Before any official peacekeeping mission, the UN played an important role in the conflict concerning Trieste after World War II. From 1947 to 1954, Trieste was declared an independent city state under the protection of the United Nations as the Free Territory of Trieste. The territory was divided into two zones, which later formed the basis for the division of the territory between Italy and Yugoslavia. The UN also authorized two nations to station troops in the Free Territory, the US (Trieste United States Troops) and the UK (British Element Trieste Force) in the northern zone and Yugoslavia in the southern zone. The first UN peacekeeping mission was a team of observers deployed to the Middle East in 1948, during the 1948 Arab–Israeli War. The mission was officially authorized on May 29, 1948. This date is used as a memorial day to all the UN peacekeepers who have lost their lives known as the International Day of United Nations Peacekeepers. The group, the UN Truce Supervision Organization (UNTSO), as it was named, continues to monitor the situation and has provided observers for a number of conflicts in the region since then. In 1949, observers were deployed to the border of India and Pakistan in a similar mission after the Indo-Pakistani War of 1947 (UNMOGIP). They also continue to monitor the border. In 1950, the UN faced one of its greatest early challenges when North Korea invaded South Korea, starting the Korean War. The Soviet Union was, at the time, boycotting the UN in protest over the Chinese seat being occupied by the Republic of China rather than the People’s Republic of China. It was therefore unable to veto the authorization of member states to assist in the defense of South Korea. The United Nations forces pushed the North Koreans out of the South and made it to the Chinese border before the Chinese People’s Volunteer Army intervened and pushed the UN back to the 38th parallel. 
Although a cease-fire was declared in 1953, UN forces remained along the demilitarized zone until 1967, when American and South Korean forces took over. In 1956, the UN responded to the Suez Crisis with the United Nations Emergency Force to supervise the withdrawal of invading forces. United Nations Emergency Force as a peacekeeping force was initially suggested as a concept by Canadian diplomat and future Canadian Prime Minister Lester Pearson as a means of resolving conflicts between states. He suggested deploying unarmed or lightly armed military personnel from a number of countries, under UN command, to areas where warring parties were in need of a neutral party to observe the peace process. Pearson was awarded the Nobel Peace Prize in 1957 for his work in establishing UN peacekeeping operations. UNEF was the first official armed peacekeeping operation modeled on Pearson’s ideas. Since 1956, most UN peacekeeping forces, including those called “observer” missions, have been armed.
https://militaryleak.com/2018/05/27/the-70th-anniversary-of-un-peacekeeping/
CJST is an open-access journal publishing full-length research papers and review articles covering subjects that fall under the wide spectrum areas of Science and Technology like Chemistry, Biology, Physics, Medical studies, Environmental Sciences, Mathematics, statistics, Geology, All Engineering and Computer science, Natural sciences, Technological Sciences, Medicine, Pharmacy, Industrial, and all other applied & theoretical sciences. The journal is dedicated towards dissemination of knowledge related to the advancement in scientific research. The prestigious interdisciplinary editorial board reflects the diversity of subjects covered in this journal. Audience: The journal is addressed to both practicing professionals and researchers in the field of science and technology, professionals in academia, former researchers, students and other specialists interested in the results of scientific research and related subjects covered by the journal. Peer Review Policy: Manuscripts submitted to the CJST are approved by the Editor-in-chief followed by formal peer review process conducted in collaboration with editorial board members and independent referees. The publisher encourages the authors and reviewers to use the electronic submission and peer-review system. Peer Review Process: Submissions to the CJST passes through a double-blind peer-review process. The criteria for publication in CJST are as follows: - The study presents the results of primary scholarly research. - Results reported have not been published elsewhere. - The research meets all applicable standards of ethics and research integrity. - Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail. - Conclusions are presented in an appropriate fashion and are supported by the data. - The article is presented in an intelligible fashion and is written in standard English. Once the manuscripts have passed quality control, they are assigned to a member of the Editorial Board (or an expert who is not a member of the Board) for conducting the peer-review process and for making a decision to accept, invite revision of, or reject the article. Information for reviewers can be accessed here. Editorial Publishing Policies: The following editorial and publishing policies apply to content the Caribbean Journal of Science and Technology. - Open Access Agreement - Submission of related manuscripts - Confidentiality - Publication Ethics 1. Open access agreement: Upon submission of an article, the authors are asked to indicate their agreement to abide by an open access Creative Commons license (CC-BY). Under the terms of this license, authors retain ownership of the copyright of their articles. The license permits any user to download, print out, extract, reuse, archive, and distribute the article, so long as appropriate credit is given to the authors and source of the work. The license ensures that the authors’ article will be available as widely as possible and that the article can be included in any scholarly archive. 2. Submission of Related Manuscripts: When submitting an article, all authors are asked to indicate that they do not have a related or duplicate manuscript under consideration (or accepted) for publication elsewhere. Reviewers will be asked to comment on the overlap between related submissions. 3. Confidentiality & Privacy Statement: Editors and reviewers are required to treat all submitted manuscripts in strict confidence. 
The names and email addresses entered in this journal site will be used exclusively for the stated purposes of this journal and will not be made available for any other purpose or to any other party. 4. Publication Ethics: Authors are expected to be aware of, and comply with, best practice in publication ethics specifically with regard to authorship (for example avoidance of ghost or guest authorship),dual submission, plagiarism, manipulation of figures, competing interests and compliance with policies on research ethics.
http://caribjscitech.com/about-us-0
--- abstract: 'Four-dimensional(4D) spacetime structures are investigated using the concept of the geodesic distance in the simplicial quantum gravity. On the analogy of the loop length distribution in 2D case, the scaling relations of the boundary volume distribution in 4D are discussed in various coupling regions $i.e.$ strong-coupling phase, critical point and weak-coupling phase. In each phase the different scaling relations are found.' address: - ' Department of Physics, Tokai University Hiratsuka, Kanagawa 259-12, Japan' - ' Department of Physics, University of Tokyo Bunkyo-ku, Tokyo 113, Japan' - ' National Laboratory for High Energy Physics (KEK), Tsukuba 305, Japan' - ' Coordination Center for Research and Education, The Graduate University for Advanced Studies, Hayama-cho, Miura-gun, Kanagawa 240-01, Japan' author: - 'H.S.Egawa , T.Hotta $^{\,\, {\rm b}}$, T.Izubuchi N.Tsuda and T.Yukawa $^{\,\, {\rm c,}}$' title: 'Scaling Structures in Four-dimensional Simplicial Gravity [^1]' --- Introduction ============ Simplicial gravity has witnessed a remarkable development toward quantizing the Einstein gravity. This development started with 2D simplicial gravity and has now reached the point of subjecting to simulate 4D case[@Agis_Migd; @Scal_4DQG; @Phase_4D; @4DDT] about the analysis of fractal dimensions, minbu, scaling relations for the loop length distribution and the curvature distribution. The aim of this paper is to investigate 4D Euclidean spacetime structures using the concept of the geodesic distance. It is very important that the scaling relations have been obtained in 2D case. Therefore, on the analogy of the loop length distribution(LLD) in 2D[@KKMW], the scaling relations in 4D are discussed. Actually we measured the boundary volume distribution(BVD) for various geodesic distances in 4D dynamically triangulated(DT) manifold, in analogy to LLD. In order to discuss the scaling relations, we assume that the scaling variable $x$ has a form $x=V/D^{\alpha}$, where $V$, $D$ and $\alpha$ denote the each boundary(cross section) volume, the geodesic distance and scaling parameter, respectively. Hagura ${\it et \; al.}$ argue the scaling properties of the surface area distributions in 3D case by the same analysis as we employ in 4D case(in these proceedings). The model ========= We use the lattice action of 4D model with the $S^{4}$ topology corresponding to the action as $S = - \kappa_2 N_2 + \kappa_4 N_4$, where $N_i$ denotes the total number of $i$-simplexes. The coupling $\kappa_2$ is proportional to the inverse bare Newton constant and the coupling $\kappa_4$ corresponds to a lattice cosmological constant. For the dynamical triangulation model of 4D quantum gravity, we consider a partition function of the form, $Z(\kappa_2, \kappa_4) = \sum_{T(S^4)} e^{-S(\kappa_2, \kappa_4, T)}$. We sum up over all simplicial triangulations $T(S^4)$. In practice, we have to add a small correction term, $\Delta S = \delta \kappa_4 (N_4 - N_4 ^{(target)})^2$, to the action in order to suppress the volume fluctuations from the target value of $4$-simplexes $N_4 ^{(target)}$ and we have used $\delta = 0.0005$. Numerical Simulations and Results ================================= We define $N_{b}(D)$ as the number of boundaries at the geodesic distance $D$ from a reference $4$-simplex in the $4$D DT manifold averaged over all $4$-simplexes. Fig.\[fig:Number\_Boundary\_16K\] shows the distributions of $N_{b}(D)$ for the typical three coupling strength with $N_{4} = 16K$. 
In the strong coupling limit($\kappa_{2}=0$), the only one boundary that is identified as the mother universe exists almost all the distances(see Fig.1), which means that the mother universe is a dominant structure. The branching structures are highly suppressed, which shows characteristic properties of the “crumpled manifold”, which is similar to the case observed in $2D$ manifold. On the other hand, in the weak coupling phase(for example, we chose $\kappa_{2}=2.0$), we observe the growth of the branches until $D \sim 60$ and can reasonably extract the relation $N_{b}(D) \propto D$ in the region $3 \leq D \leq 30$. Then we call this manifold as the “elongated manifold”. The strong coupling phase ------------------------- Fig.\[fig:BVD\_Strong\] shows BVD, $\rho(x)$, with $x = V/D^{4.5}$ as a scaling variable in the strong coupling region, $\kappa_{2} = 0$, with $N_{4}=32K$ while the fractal dimension($d_{f}$) reaches about $5.5$ which yet increases with the volume in our simulation size. In terms of this variable the mother universe shows scaling relation and distributes like a Gaussian distribution. We can be fairly certain that in the strong coupling phase the scaling parameter $\alpha$ of mother universe satisfies the relation $d_{f} = \alpha + 1$, and the manifold resembles a $d_{f}$-sphere ($S^{d_{f}}$). There seems to be a scaling property with respect to BVD with $x = V/D^{d_{f}-1}$ as a scaling variable. The critical point ------------------ Next, the data near the critical point are shown in Fig.\[fig:BVD\_Critical\] for various geodesic distances with $N_{4} = 32K$. At this point the fractal dimension reaches about $3.5$ and yet increasing for larger values of $N_{4}$. We must draw attention to the double peak structure on the critical point [@BBKP_4DQG]. Therefore, we measure the boundary volume distribution on both peaks, and obtain the more clear signal of the distribution of the mother universe on the peak which is close to the strong coupling phase. We observe the scaling properties for the mother universe $\rho(x)$ with $x=V/D^{2.3}$ in Fig.\[fig:BVD\_Critical\]. In order to discuss the universality of the scaling relations, we assume the distribution function in terms of a scaling parameter $x = V/D^{2.3}$ as $\rho(x) = a_{0} \frac{1}{D^{2.3}} x^{a_{1}}e^{-a_{2}x}$, where $a_{0}$, $a_{1}$ and $a_{2}$ are some constants. Then we can calculate the fractal dimension from $\rho(x,D)$, $ \lim_{x \to \infty} \int_{v_{0}}^{\infty} dV \; V \; \rho(x,D) = V^{(4)}(D) \sim D^{d_{f}}, $ where $v_{0}$ denotes the cut-off volume and $V^{(4)}$ denotes the total volume of the 4D manifold. If $a_{1} > -2$ this integration is convergent and gives a finite fractal dimension. We can extract the function of $\rho(x,D)$ from Fig.\[fig:BVD\_Critical\], and find $a_{1} \simeq 0.5$ for the distribution of mother volume and $a_{2} \simeq 3.0$. Furthermore we investigate the $N_{4}=64K$ case and obtain the same scaling behavior as that of $N_{4}=32K$ case except $d_{f} \simeq 4.0$ and scaling parameter $\alpha \simeq 3.0$. These results lead to the conclusion that the distributions of the baby universes show no scaling behavior. On the other hand, the distribution of the mother universe shows the scaling relation and is universal($i.e.$ it does not depend on the lattice cut-off($v_{0}$)). The weak coupling phase ----------------------- Finally, we show the data in the weak coupling phase($\kappa_{2}=2.0$) within which the fractal dimension reaches about $2.0$. 
Fig.\[fig:BVD\_Weak\] shows BVD with $x = V/D^{2}$ as a scaling variable. We can safely state $\rho (x) \times D^{2} \propto x^{-2.0} e^{-x}$. In this phase, the dynamically triangulated manifold consists of widely expanding like branched polymers and we cannot observe the mother universe at all. Summary and discussions ======================= On the analogy of LLD in 2D case, the scaling relations in 4D are discussed for the three phases. BVD, $\rho(x,D)$, at geodesic distance $D$ gives us some basic scaling relations on the ensemble of Euclidean space-times described by the partition function $Z(\kappa_2, \kappa_4)$. In the strong coupling limit $\kappa_{2}=0$ we find that the mother part of BVD, $\rho(x,D)$, scales trivially with $x=V/D^{d_{f}-1}$ as a scaling variable. There is fairly general agreement that the 4D DT manifold seems to be a $d_{f}$-sphere($S^{d_{f}}$). What is important is that this scaling property for the mother universe changes gradually into the scaling relation of that of the critical point. The fluctuations of the spacetime growth with $\kappa_{2} \to \kappa_{2}^{c}$. In 2D, the baby loop and the mother loop show scalings with the same parameter($x=L/D^{2}$). However, LLD of the baby loops is depend on the lattice cut-off and we think that it is not universal. At the critical point in 4D case we have obtained the similar BVD. However, we have a different scaling parameters ($x=V/D^{2.3}$ with $N_{4}=32K$ and $x=V/D^{3.0}$ with $N_{4}=64K$) from 2D case for the mother universe. Furthermore, BVD of the baby universes seems to be non-universal. In the weak coupling phase(see Fig.\[fig:BVD\_Weak\]) we have obtained the elongated manifolds, in other words, branched polymers. In this phase no mother universe exists and BVD of the baby universes shows that the scaling relation is not universal. The results of this paper is the first step to research the universal scaling relations in 2, 3 and 4D on simplicial quantum gravity. The results of simulations can be regarded as the possibility that we may have 4D quantum gravity as the generalized DDK model. -Acknowledgment- We would like to thank H.Kawai, N.Ishibashi, S.R.Das, J.Nishimura and H.Hagura for fruitful discussions. Some of the authors (T.H., T.I. and N.T.) were supported by a Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists. [99]{} M.E.Agishtein and A.A.Migdal, Nucl.Phys.B 385 (1992) 395.f J.Ambjørn and J.Jurkiewicz, Nucl.Phys.B451 (1995) 643. S.Catterall, J.Kogut and R.Renken, Phys.Lett.B 328 (1994) 277. B.V.de Bakker and J.Smit, Nucl.Phys.B 439 (1995) 239. H.Kawai, N.Kawamoto, T.Mogami and Y.Watabiki, Phys.Lett. B306 (1993) 19; N.Tsuda and T.Yukawa Phys.Lett. B305 (1993) 223. P.Bialas, Z.Burda, A.Krzywicki and B.Petersson, hep-lat/9601024; B.V.de Bakker, hep-lat/9603024. [^1]: presented by H.S.Egawa
TECHNICAL FIELD This application claims priority to and the benefit of Japanese Patent Application No. 2019-066003 filed on Mar. 29, 2019 and U.S. patent application Ser. No. 16/396,506 filed on Apr. 26, 2019, the entire contents of which are incorporated herein by reference. The present invention relates to a method of protecting field corn from damage by a plant pathogen. BACKGROUND ART Hitherto, a method of applying mefentrifluconazole has been known, as a method for controlling a plant pathogen on corn (see Patent Literature 1). Also, various types of corn such as field corn, sweet corn, popcorn, and waxy corn are known (see Non-Patent Literatures 1, 2). However, it is not known that field corn, especially, can be safely protected from damage by a plant pathogen by applying mefentrifluconazole at a certain application rate. CITATION LIST Patent Literature PTL 1: WO2013/007767 Non Patent Literature NPL 1: https://www.bestfoodfacts.org/corn/ NPL 2: Genetics 99, 275-284. SUMMARY OF INVENTION Technical Problem An object of the present invention is to provide a method having superior safety for protecting field corn from damage by a plant pathogen. Solution to Problem Advantageous Effects of Invention The present inventor has found out that field corn can be safely protected from damage by a plant pathogen by applying mefentrifluconazole at certain application rates to foliage of field corn, seeds of field corn, or a soil of cultivation area of field corn. The present invention includes the following aspects [1] and [2]. [1] A method of protecting field corn from damage by a plant pathogen in a cultivation area of field corn, the method including a step of applying mefentrifluconazole to foliage of field corn, seeds of field corn or a soil of the cultivation area of field corn, wherein the application rate of mefentrifluconazole is 20 to 500 g per hectare of the cultivation area. [2] The method according to [1], wherein mefentrifluconazole is applied to foliage of field corn. Field corn can be safely protected from damage by a plant pathogen according to the present invention. DESCRIPTION OF EMBODIMENTS EXAMPLES Example 1 Comparative Example 1 Example 2 Comparative Example 2 Example 3 Comparative Example 3 Example 4 Comparative Example 4 Example 5 Comparative Example 5 The method of protecting field corn from damage by a plant pathogen of the present invention (hereinafter, sometimes referred to as "present method") includes a step of applying mefentrifluconazole to foliage of field corn, seeds of field corn or a soil of cultivation area of field corn. Mefentrifluconazole is a triazole-type sterol biosynthesis inhibitor, and can be manufactured by a known method. Non-Patent Literature 3: Index of plant diseases in the United States. Part III. Gramineae. USDA (1953) Non-Patent Literature 4: Field Crops. Fergus and Hammonds (1958) Field corn in the present method is also known as dent corn in general (see Non-Patent Literature 1, and Non-Patent Literature 4), and is a variety group established from Zea mays var. indentata and/or Zea mays var. indurata as major ancestors (see Non-Patent Literature 3). Examples of corn which does not belong to field corn include popcorn (Zea mays var. everta), sweet corn (Zea mays var. saccharata), waxy corn (Zea mays var. ceratina), pod corn (Zea mays var. tunica), and the like. In the present method, variations within field corn are not particularly limited as long as the field corn is a variety which is usually cultivated.
For example, field corn belonging to diverse maturity groups from early-maturing to late-maturing can be used. Also, the varieties are not limited by diverse intended usages of the harvest of field corn. For example, field corn for any of the intended usages such as seed production, ornamentals, green manures, silage, grains, and the like can be used. For grains, field corn for any of the intended usages such as starch, ethanol, oil extraction, feed, sugar production, and the like can be used. Examples of field corn varieties include Pioneer Dent Series (for example, P2088), Dekalb Corn Series (for example, DKC5632), MAS40F, Koshu, and the like. Although the weight of seeds of field corn which can be used in the present method is not particularly limited, a seed weight of field corn is usually within a range of 100 to 400 mg/seed, more preferably 200 to 300 mg/seed. The field corn may be the one producible by natural crossing, plants producible by a mutation, F1 hybrid plants, or transgenic plants (also called genetically modified plants). These plants generally have characteristics such as tolerance to herbicides, accumulation of substances harmful to insect pests, reduction in sensitivity to diseases, increase in yield potential, improvement in resistance to biotic or abiotic stress factors, accumulation of substances, and improvement in preservability and processability. The F1 hybrid plants are those which are each a first filial hybrid obtained by crossing two different varieties with each other and usually have characteristics of heterosis, which is a nature of having more excellent trait than both of the parents. The transgenic plants are those which are obtained by introducing an exogeneous gene from other organisms such as microorganisms and have characteristics like those that cannot be easily obtained by crossbreeding, mutation induction, or natural recombination in natural environments. Examples of the technologies used to create the above plants include conventional type variety improvement technologies; genetic recombination technologies; genome breeding technologies; new breeding technologies; and genome editing technologies. The conventional type variety improvement technologies are specifically technologies for obtaining plants having desired properties by a mutation and crossing. The genetic recombination technologies are technologies in which a target gene (DNA) is extracted from a certain organism (for example, microorganism) to introduce it into a genome of a different target organism, thereby imparting new properties to the organism, and antisense technologies or RNA interference technologies for imparting new or improved characteristics by silencing a certain genes existing in plants. The genome breeding technologies are those improving breeding efficiency by using genome information and include DNA marker (also called genome markers or genetical markers) breeding technologies and genomic selection. For example, the DNA marker breeding is a method in which a progeny having a target gene with a useful trait is selected from a lot of cross progenies by using a DNA marker which is a DNA sequence and is a marker of the presence position of a gene with a specific useful trait on a genome. This method has the characteristics that the time required for breeding can be efficiently reduced by analyzing the cross progeny using a DNA marker when the progeny is a juvenile plant. 
Also, the genomic selection is a technique in which a prediction formula is created from a phenotype obtained in advance and genome information to predict the characteristics from the prediction formula and the genome information without any evaluation of the phenotype and is technologies contributing to improvement in efficient breeding. The new breeding techniques are a generic term of variety-improvement (=breeding) techniques that are combinations of molecular biological techniques. Examples of the new breeding techniques include cisgenesis/intragenesis, introduction of an oligonucleotide-directed mutation, RNA-dependent DNA methylation, grafting onto a GM rootstock or scion, reverse breeding, agroinfiltration, and seed production technology (SPT). The genome editing technologies are those in which genetic information is transformed in a sequence-specific manner which enables, for example, deletion of a base sequence, substitution of an amino acid sequence, and introduction of an exogenous gene. Examples of tools for these techniques include sequence-specific genome modification techniques such as zinc-finger nuclease (ZFN), TALEN, CRISPR/Cas9, CRISPER/Cpf1, and Meganuclease which each enable sequence-specific DNA scission and CAS9 Nickase and Target-AID which are each created by modifying the aforementioned tools. Examples of the plants mentioned above include plants listed in GM APPROVAL DATABASE of genetically modified crops in the electronic information site (http://www.isaaa.org/) of INTERNATIONAL SERVICE for the ACQUISITION of AGRI-BIOTECH APPLICATIONS (ISAAA). More specifically, these examples include herbicide tolerant plants, insect pest resistant plants, disease resistant plants, and quality modified (for example, increase or decrease in content of a certain component or change in composition) plants of products (for example, starch, amino acid, and fatty acid), fertile trait modified plants, abiotic stress tolerant plants, or plants modified in traits relating to growth and yield. Examples of plants to which tolerance to herbicides is imparted are given as follows. The tolerance to herbicides is obtained, for example, by reducing the compatibility of a chemical with its target, by rapid metabolism (for example, breakdown or modification) resulting from the expression of a chemical deactivation enzyme, or by inhibiting the incorporation of a chemical into a plant body or the transfer of the chemical in the plant body. The plants to which herbicide tolerance is imparted by genetic recombination technologies include plants to which tolerances to the following inhibitors are imparted by genetic recombination technologies: 4-hydroxyphenyl pyruvate dioxygenase (hereinafter abbreviated as HPPD) inhibitors such as isoxaflutole and mesotrione, acetolactate synthetase (hereinafter abbreviated as ALS) inhibitors such as imidazolinone type herbicides including imazethapyr and sulfonylurea type herbicides including thifensulfuron-methyl, 5-enolpyruvylshikimate-3-phosphate synthase (hereinafter abbreviated as EPSP) inhibitors such as glyphosate, glutamine synthetase inhibitors such as glufosinate, auxin type herbicides such as 2,4-D and dicamba, oxynil type herbicides including bromoxynil, and protoporphyrinogen oxidase (herein after abbreviated as PPO) such as flumioxazin. 
In the present method, mefentrifluconazole is usually used after being formulated by mixing it with a carrier such as a solid or liquid carrier and, as necessary, adding formulation auxiliaries such as a surfactant. When formulated, preferable formulation types are a soluble liquid, a soluble granule, an aqueous suspension concentrate, an oil-based liquid suspension, a wettable powder, a water dispersible granule, a granule, an aqueous emulsion, an oil-based emulsion, and an emulsifiable concentrate. A more preferable formulation type is an aqueous suspension concentrate. Moreover, a formulation containing mefentrifluconazole as the sole active ingredient may be used on its own or may be tank-mixed with a formulation containing other fungicides as active ingredients. A formulation containing mefentrifluconazole and another fungicide may also be used. Further, a formulation containing mefentrifluconazole and another fungicide as active ingredients may be tank-mixed with a formulation containing, as active ingredients, fungicides different from the above fungicides. The content of the active ingredients (mefentrifluconazole, or the total of mefentrifluconazole and other fungicides) in the formulation is usually within a range of 0.01 to 90% by weight, preferably 1 to 80% by weight. In the present method, "applying mefentrifluconazole to foliage of field corn" means applying mefentrifluconazole to the foliage of field corn planted in the cultivation area. In the present method, when applying mefentrifluconazole to the foliage of field corn or to the soil of the cultivation area of field corn, the application is usually conducted using a spray dilution prepared by mixing a formulation containing mefentrifluconazole with water. These applications may be conducted uniformly over the cultivation area, or may be conducted locally as a spot treatment onto the foliage of field corn or the soil around the field corn. The amount of the dilution to be sprayed is usually 10 to 1000 L, preferably 100 to 500 L, and more preferably 140 to 300 L per hectare of cultivation area of field corn, though no particular limitation is imposed on it. In the present method, when applying mefentrifluconazole to seeds of field corn, the treatment is usually conducted by coating or spraying the seeds with a dilution prepared by mixing a formulation containing mefentrifluconazole with water. In the present method, the application rate of mefentrifluconazole is usually within a range of 20 to 500 g, preferably 40 to 200 g, more preferably 60 to 150 g per hectare of cultivation area of field corn. Examples of specific application rates of mefentrifluconazole include 30 g, 50 g, 70 g, 80 g, 100 g, 120 g, 250 g, 300 g, and 400 g per hectare of cultivation area of field corn. These application rates can be described with "approximately." "Approximately" means plus/minus 10%, so, for example, "approximately 100 g per hectare" means "90 to 110 g per hectare." In the present method, when applying mefentrifluconazole locally as a spot treatment onto the foliage of field corn or the soil around the field corn, usually 0.001 to 2 mg of mefentrifluconazole is applied per field corn plant. Preferably, 0.01 to 1 mg of mefentrifluconazole is applied per field corn plant. For example, when 0.5 mg of mefentrifluconazole is applied locally per plant as a spot treatment and 400,000 plants are grown per hectare of cultivation area, the application rate of mefentrifluconazole is 200 g per hectare of cultivation area of field corn. 
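To make the arithmetic behind these per-hectare figures explicit, the following is a minimal sketch in Python. The function names are illustrative and not part of the method; the worked numbers simply restate the 0.5 mg per plant, 400,000 plants per hectare example above and the "approximately 100 g per hectare" tolerance.

    def rate_per_hectare_g(dose_mg_per_unit, units_per_hectare):
        # Per-hectare application rate in grams, from a per-plant (spot treatment)
        # or per-seed (seed treatment) dose in mg and the number of plants or seeds
        # grown or sown per hectare of cultivation area.
        return dose_mg_per_unit * units_per_hectare / 1000.0

    def approximately(rate_g, tolerance=0.10):
        # "Approximately" means plus/minus 10%, i.e. the implied range in grams.
        return (rate_g * (1 - tolerance), rate_g * (1 + tolerance))

    # Worked example from the text: 0.5 mg per plant, 400,000 plants per hectare.
    print(rate_per_hectare_g(0.5, 400_000))   # 200.0 g per hectare
    print(approximately(100.0))               # (90.0, 110.0), i.e. "approximately 100 g per hectare"

The same calculation applies to the seed treatment described below, where the dose per seed and the sowing rate per hectare play the roles of the per-plant dose and plant density.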
In the present method, when applying mefentrifluconazole to seeds of field corn, 0.001 to 1 mg of mefentrifluconazole is usually applied per seed of field corn. Preferably, 0.01 to 0.2 mg of mefentrifluconazole is applied per seed of field corn. Seeds treated with mefentrifluconazole are usually sown uniformly over the cultivation area so that the application rate of mefentrifluconazole per hectare of cultivation area falls within a desired range. For example, when 0.1 mg of mefentrifluconazole is applied per seed and 1,000,000 seeds are sown per hectare of cultivation area, the application rate of mefentrifluconazole is 100 g per hectare of cultivation area. Although the period of time for conducting the present method is not particularly limited, the period of time is usually within a range from 5 a.m. to 9 p.m., and the photon flux density at the land surface of the place where the present method is conducted is usually 10 to 2500 μmol/m²/s. The spray pressure when conducting the present method is usually 30 to 120 PSI and preferably 40 to 80 PSI, though no particular limitation is imposed on it. Here, the spray pressure is the set value just before the dilution is introduced into the nozzle. The nozzles used in the present method may be flat-fan nozzles or drift-reducing nozzles. Examples of flat-fan nozzles include the Teejet 110 series and XR Teejet 110 series manufactured by Teejet Company. When using these nozzles, the spray pressure is generally 30 to 120 PSI and the volume median diameter of liquid droplets discharged from the nozzle is usually less than 430 micrometers. The drift-reducing nozzle is a nozzle which leads to less drift compared with a flat-fan nozzle and which is called an air induction nozzle or pre-orifice nozzle. The volume median diameter of a liquid droplet discharged from the drift-reducing nozzle is usually 430 micrometers or more. In the present method, when applying mefentrifluconazole to seeds of field corn, the application is usually conducted before sowing the seeds. In the present method, when applying mefentrifluconazole to the foliage of field corn, the application is usually conducted between just after emergence of the field corn and its harvesting stage, more preferably between the 1-leaf stage of the field corn and its grain-filling stage, further preferably between the 2-leaf stage of the field corn and its silking stage. In the present method, seeds of field corn may be treated with one or more compounds selected from the group consisting of insecticidal compounds, nematicidal compounds, fungicidal compounds other than mefentrifluconazole, and plant growth regulators. Examples of compounds to be used for the seed treatment include neonicotinoid compounds, diamide compounds, carbamate compounds, organophosphorus compounds, biological nematicidal compounds, other insecticidal compounds and nematicidal compounds, strobilurin compounds, metalaxyl compounds, SDHI compounds, other fungicidal compounds except mefentrifluconazole, and plant growth regulators. Plant pathogens in the present method are usually fungi. Examples of fungi include Ascomycota, Basidiomycota, Blastocladiomycota, Chytridiomycota, Mucoromycota and Olpidiomycota. Examples of specific plant pathogens include the following. The words in parentheses indicate the damage caused by the plant pathogen (the plant disease). 
Puccinia sorghi (corn rust), Puccinia polysora (corn Southern rust), Setosphaeria turcica=Exserohilum turcicum (Northern corn leaf blight), Physopella zeae (corn tropical rust), Cochliobolus heterostrophus (=Bipolaris maydis: Northern corn leaf spot), Colletotrichum graminicola (corn anthracnose), Cercospora zeae-maydis (corn grey leaf spot), Kabatiella zeae (corn eye spot), Phaeosphaeria maydis (Phaeosphaeria corn leaf spot), Stenocarpella maydis + Stenocarpella macrospora (diplodia corn ear rot), Fusarium graminearum + Fusarium verticillioides + Colletotrichum graminicola (corn stalk rot), Ustilago maydis (corn smut), Physoderma maydis (Physoderma corn brown spot), Cochliobolus carbonum (Northern corn leaf spot), and Phyllosticta maydis (corn yellow leaf blight). Variations within the species of the above plant pathogens are not particularly limited. Namely, the pathogens also include any plant pathogens having reduced sensitivity (or resistance) to specific fungicides. The reduced sensitivity may be attributed to a mutation at a target site (target site mutations), or may be attributed to a factor other than a target site mutation (non-target site mutations). Target site mutations include amino acid substitutions in target proteins caused by a mutation in the corresponding open reading frame, and overexpression of the target proteins caused by deletion of a suppressor sequence or an increase of an enhancer sequence in the promoter region, or by amplification of the gene copy number. The factors of resistance by non-target site mutations include acceleration of efflux of fungicides that have entered cells back out of the cells by ABC transporters, MFS transporters, and the like, as well as detoxification of fungicides by metabolism. Examples of the aforementioned specific fungicides include nucleic acid synthesis inhibitors (such as phenylamide fungicides, acylamino acid fungicides, and DNA topoisomerase type II fungicides), mitosis and cell division inhibitors (such as MBC fungicides and N-phenylcarbamate fungicides), respiration inhibitors (such as QoI fungicides, QiI fungicides, and SDHI fungicides), amino acid synthesis and protein synthesis inhibitors (such as anilinopyrimidine fungicides), signal transduction inhibitors (such as phenylpyrrole fungicides and dicarboximide fungicides), lipid synthesis and cell membrane synthesis inhibitors (such as phosphorothiolate fungicides, dithiolane fungicides, aromatic hydrocarbon fungicides, heteroaromatic fungicides, and carbamate fungicides), sterol biosynthesis inhibitors (for example, DMI fungicides such as triazoles, hydroxyanilide fungicides, and aminopyrazolinone fungicides), cell wall synthesis inhibitors (such as polyoxin fungicides and carboxylic acid amide fungicides), melanin synthesis inhibitors (such as MBI-R fungicides, MBI-D fungicides, and MBI-P fungicides), and other fungicides (such as cyanoacetamide-oxime fungicides and phenylacetamide fungicides). In the present method, mefentrifluconazole may be used in combination with one or more other fungicides. Here, use in combination includes tank mixing, pre-mixing, and sequential treatment. In the case of sequential treatment, the order of the treatments is not particularly limited. In the present method, the fungicide used in combination with mefentrifluconazole is preferably pyraclostrobin, fluopyram, or fluxapyroxad. 
When the aforementioned fungicide is used in combination with mefentrifluconazole, the weight ratio of mefentrifluconazole to the other fungicide is usually within a range of 1:0.001 to 1:100, preferably 1:0.01 to 1:10, more preferably 1:0.1 to 1:5. Examples of specific weight ratios include 1:0.02, 1:0.04, 1:0.06, 1:0.08, 1:0.2, 1:0.4, 1:0.6, 1:0.8, 1:1, 1:1.5, 1:2, 1:2.5, 1:3, and 1:4. These weight ratios may be described with "approximately." "Approximately" means plus/minus 10%, so, for example, "approximately 1:2" means 1:1.8 to 1:2.2. The cultivation of field corn in the present invention can be managed according to the plant nutrition practices of common crop cultivation. The fertilization system may be based on precision agriculture adopting variable rate application, or it may be a conventional uniform one. In addition, nitrogen-fixing bacteria and mycorrhizal fungi may be inoculated by seed treatment. The present invention will be explained by way of examples, but the present invention should not be limited thereto.

Example 1 (grey leaf spot, Cercospora zeae-maydis): Field corn is sown in a pot filled with soil and incubated for 7 days in a greenhouse. A mefentrifluconazole spray liquid (prepared by diluting an aqueous suspension concentrate of mefentrifluconazole with water) is uniformly sprayed onto the foliage of the field corn at a volume of 200 L per hectare so that the application rate of mefentrifluconazole is 500 g per hectare. On the day after the spraying, a pathogen of grey leaf spot (Cercospora zeae-maydis) is inoculated onto the foliage of the field corn. The field corn is incubated in a greenhouse for 14 days from the inoculation, and then the fresh weight of the aerial part of the field corn is measured. It is confirmed that the fresh weight is equivalent to that of the control in which the field corn is not treated with mefentrifluconazole and the pathogen is not inoculated (no-treatment-no-inoculation control) and is greater than that of the control in which the field corn is not treated with mefentrifluconazole and the pathogen is inoculated (no-treatment-inoculation control). The same procedure as in Example 1 is repeated except that the field corn is replaced with popcorn, sweet corn, or waxy corn. It is confirmed that the fresh weight of the popcorn, sweet corn, or waxy corn is smaller than that of the corresponding no-treatment-no-inoculation control and is equivalent to that of the corresponding no-treatment-inoculation control.

Example 2 (grey leaf spot, Cercospora zeae-maydis): Seeds of field corn are coated with a mefentrifluconazole aqueous suspension concentrate so that the amount of mefentrifluconazole applied to each seed is 0.2 mg. The field corn seeds are then sown in a pot filled with soil at a sowing rate of 100,000 seeds per hectare; that is, the application rate of mefentrifluconazole is 20 g per hectare. The field corn is then incubated in a greenhouse for 10 days, after which a pathogen of grey leaf spot (Cercospora zeae-maydis) is inoculated onto the foliage of the field corn. The field corn is incubated in a greenhouse for 14 days from the inoculation, and then the fresh weight of the aerial part of the field corn is measured. It is confirmed that the fresh weight is equivalent to that of the control in which the field corn is not treated with mefentrifluconazole and the pathogen is not inoculated (no-treatment-no-inoculation control) and is greater than that of the control in which the field corn is not treated with mefentrifluconazole and the pathogen is inoculated (no-treatment-inoculation control). 
The same procedure as in Example 2 is repeated except that the field corn is replaced with popcorn, sweet corn, or waxy corn. It is confirmed that the fresh weight of the popcorn, sweet corn, or waxy corn is smaller than that of the corresponding no-treatment-no-inoculation control and is equivalent to that of the corresponding no-treatment-inoculation control.

Example 3 (grey leaf spot, Cercospora zeae-maydis): Field corn is sown in a pot filled with soil at a sowing rate of 1,000,000 seeds per hectare. The field corn is incubated for 10 days in a greenhouse, and then a mefentrifluconazole dilution liquid (prepared by diluting an aqueous suspension concentrate of mefentrifluconazole with water) is dripped onto the primary leaf of the field corn so that the amount of mefentrifluconazole applied to each plant is 0.2 mg; that is, the application rate of mefentrifluconazole is 200 g per hectare. On the day after the application, a pathogen of grey leaf spot (Cercospora zeae-maydis) is inoculated onto the foliage of the field corn. The field corn is incubated in a greenhouse for 14 days from the inoculation, and then the fresh weight of the aerial part of the field corn is measured. It is confirmed that the fresh weight is equivalent to that of the control in which the field corn is not treated with mefentrifluconazole and the pathogen is not inoculated (no-treatment-no-inoculation control) and is greater than that of the control in which the field corn is not treated with mefentrifluconazole and the pathogen is inoculated (no-treatment-inoculation control). The same procedure as in Example 3 is repeated except that the field corn is replaced with popcorn, sweet corn, or waxy corn. It is confirmed that the fresh weight of the popcorn, sweet corn, or waxy corn is smaller than that of the corresponding no-treatment-no-inoculation control and is equivalent to that of the corresponding no-treatment-inoculation control.

Example 4 (grey leaf spot, Cercospora zeae-maydis): Seeds of three varieties of field corn were separately dipped in a mefentrifluconazole dilution liquid (prepared by diluting an aqueous suspension concentrate of mefentrifluconazole with water) so that the amount of mefentrifluconazole applied to each seed was 0.2 mg. The field corn seeds were then sown in a pot filled with soil at a sowing rate of 1,000,000 seeds per hectare; that is, the application rate of mefentrifluconazole was 200 g per hectare. Each field corn was incubated in a greenhouse for 12 days from the sowing, and then a pathogen of grey leaf spot (Cercospora zeae-maydis) was inoculated onto the foliage of each field corn. Each field corn was incubated in a greenhouse for 7 days from the inoculation, and then the plant length of the aerial part of each field corn was measured. This plant length is referred to as "the plant length in the treatment plot." As a control experiment, the same procedure was repeated except that the dipping treatment with the mefentrifluconazole dilution liquid and the inoculation of the pathogen were not conducted. The plant length obtained in the control experiment is referred to as "the plant length in the control plot." The results are shown in Table 1.

TABLE 1
Corn/Variety          Plant length in treatment plot (A) (cm)    Plant length in control plot (B) (cm)    Treatment/control ratio (100 × A/B)
Field corn/DKC5632    58    61    95
Field corn/MAS40F     61    63    97
Field corn/Koshu      63    60    105

The same procedure as in Example 4 was repeated except that the three varieties of field corn were replaced with one variety of popcorn and two varieties of waxy corn. The results are shown in Table 2. 
As shown in Tables 1 and 2, the field corns were protected much more effectively than the popcorn or waxy corns.

TABLE 2
Corn/Variety             Plant length in treatment plot (A) (cm)    Plant length in control plot (B) (cm)    Treatment/control ratio (100 × A/B)
Popcorn/Yuki-pop         45    56    80
Waxy corn/Shiro-mochi    51    59    86
Waxy corn/Ki-mochi       30    43    70

Example 5 (grey leaf spot, Cercospora zeae-maydis): Four varieties of field corn were sown in a pot filled with soil at a sowing rate of 1,000,000 seeds per hectare, and each field corn was incubated for 7 days in a greenhouse. A mefentrifluconazole dilution liquid (prepared by diluting an aqueous suspension concentrate of mefentrifluconazole with water) was then dripped onto the first leaf of the field corn plants so that the amount of mefentrifluconazole applied to each plant was 0.1 mg; that is, the application rate of mefentrifluconazole was 100 g per hectare. Five days after the application, a pathogen of grey leaf spot (Cercospora zeae-maydis) was inoculated onto the foliage of each field corn. Each field corn was incubated in a greenhouse for 7 days from the inoculation, and then the plant length of the aerial part of each field corn was measured. This plant length is referred to as "the plant length in the treatment plot." As a control experiment, the same procedure was repeated except that the dripping treatment with the mefentrifluconazole dilution liquid and the inoculation of the pathogen were not conducted. The plant length obtained in the control experiment is referred to as "the plant length in the control plot." The results are shown in Table 3. As shown in Table 3, each field corn was successfully protected from damage by the pathogen.

TABLE 3
Corn/Variety               Plant length in treatment plot (A) (cm)    Plant length in control plot (B) (cm)    Treatment/control ratio (100 × A/B)
Field corn/Pioneer 2088    62    60    103
Field corn/DKC5632         63    61    103
Field corn/MAS40F          64    63    102
Field corn/Koshu           65    60    108

The same procedure as in Example 5 was repeated except that the four varieties of field corn were replaced with two varieties of popcorn, one variety of sweet corn, and two varieties of waxy corn. The results are shown in Table 4. As shown in Table 4, none of the popcorns, sweet corn, or waxy corns was successfully protected from damage by the pathogen.

TABLE 4
Corn/Variety             Plant length in treatment plot (A) (cm)    Plant length in control plot (B) (cm)    Treatment/control ratio (100 × A/B)
Popcorn/Yuki-pop         53    63    84
Popcorn/Maru-pop         45    56    80
Sweet corn/Rancher 82    31    34    91
Waxy corn/Shiro-mochi    43    59    73
Waxy corn/Ki-mochi       29    43    67

INDUSTRIAL APPLICABILITY
Field corn can be safely protected from damage by a plant pathogen according to the present invention.
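As a closing note on the arithmetic used in the examples, the treatment/control ratio reported in Tables 1 to 4 is simply 100 × A/B. A minimal sketch in Python follows; the function and variable names are illustrative, and rounding to whole numbers is an assumption made only because the tables report integer ratios.

    def treatment_control_ratio(treatment_cm, control_cm):
        # Ratio used in Tables 1-4: 100 x A / B, rounded to a whole number,
        # where A is the plant length in the treatment plot and B is the
        # plant length in the control plot.
        return round(100 * treatment_cm / control_cm)

    # Re-deriving the Table 1 ratio column from its measured plant lengths.
    table1 = {"DKC5632": (58, 61), "MAS40F": (61, 63), "Koshu": (63, 60)}
    for variety, (a, b) in table1.items():
        print(variety, treatment_control_ratio(a, b))   # 95, 97, 105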
The Green Climate Fund issues a range of publications for our direct and indirect stakeholders on a diversity of issues – from the general public to industry experts. Stay in tune with our mission and activities around the planet with the issues that speak to you. SAP Technical Guidelines: Cities and Climate Change June 2019 Traditionally, cities are defined as large human settlements, with no one standardized definition applied internationally. Frequently, national governments have criteria they use to define urban areas. These criteria can vary to include aspects such as administrative boundaries, living conditions or population density. Another approach is what is termed "urban agglomeration", which considers the extent of an urban area, or sometimes, that of a built-up area, to identify a city's boundaries. SAP Technical Guidelines: Transport May 2019 The publication provides technical guidance for the preparation of SAP proposals. Although there is no one standard definition of the transport sector, it can be described as including all kinds of transportation, such as road, rail and maritime transport, and aviation. SAP Technical Guidelines: Ecosystems and Ecosystem Services May 2019 The thematic area of ecosystems and ecosystem services encompasses all natural environments and the productive uses that are based on them. This can range from environments not directly impacted by human activities – for example, remote rainforests, alpine regions or coral reefs – to environments that are intensively managed – such as agricultural areas or managed forests for timber production. Given that there are thematic areas specifically addressing water security, agriculture and food... Turning ambition into action: How GCF catalyses transformational change May 2019 GCF is the world's largest dedicated fund helping developing countries reduce their greenhouse gas emissions and enhance their ability to respond to climate change. It was set up by the UNFCCC in 2010, and has a crucial role in serving the Paris Agreement, supporting the goal of keeping average global temperature rise well below 2°C. It does this by channelling climate finance to developing countries, which have joined other nations in committing to climate action. This publication provides a... Adaptation: Accelerating action towards a climate resilient future April 2019 This working paper contributes to adaptation knowledge from the perspective of climate financing. It provides an overview of adaptation and resilience challenges, the distinction between them, and discusses the avenues that GCF is developing to tackle them, together with countries and many stakeholders including the private sector. Ultimately, it is hoped that this paper will inform the design and scaling up of more successful adaptation investments. Through focused investments in these... GCF in Brief: Enhancing Direct Access March 2019 The Enhancing Direct Access (EDA) pilot has been designed to provide Direct Access Entities (DAEs) with opportunities to move beyond the financing of individual projects towards a more comprehensive and stakeholder-driven programmatic approach. This factsheet provides an overview of the EDA pilot. Project Preparation Facility Guidelines March 2019 GCF's Project Preparation Facility (PPF) provides financial support to Accredited Entities (AEs) in preparing funding proposals for submission to the Green Climate Fund (GCF). 
The PPF supports AEs in preparing full Funding Proposals for consideration by the Board, based on a Concept Note that has been cleared for project preparation support vis-à-vis GCF investment criteria. This publication provides practical guidelines to help AEs in preparing and submitting PPF requests to the GCF secretariat. GCF in Brief: Support for Technology December 2018 Technology solutions and innovations are instrumental to facilitate the move towards low-emission and climate-resilient pathways. Climate technologies can cover sectors such as energy supply and distribution, industry, transport, waste and agriculture. GCF supports developing countries in mitigation and adaptation actions as well as in capacity-building and technology development and transfer. Simplified Approval Process (SAP) funding proposal preparation guidelines: A practical manual for the preparation of SAP proposals November 2018 These guidelines have been developed to support GCF accredited entities (AEs) in the preparation of funding proposals under the Simplified Approval Process (SAP) pilot scheme. This document provides general clarifications on the indicative content expected in a SAP funding proposal submitted to GCF. More specific guidelines on the type of activities by sector will be developed separately by the GCF Secretariat. This document refers to policies approved by the GCF Board in relation to the... GCF in Brief: Simplified Approval Process May 2018 The Green Climate Fund is moving quickly to build a large and transformative project portfolio, with many projects already under implementation. To simplify and streamline the approval of certain small-scale projects, GCF's Board has approved a new approach: the Simplified Approval Process (SAP). The simplifications in this new approach should lead to a reduction in time and effort required to go from project conception to implementation. GCF in Brief: REDD+ May 2018 REDD+ is a financing model negotiated under the UNFCCC to reduce greenhouse gas emissions from deforestation and forest degradation in developing countries. It is divided into three phases, which are roughly associated with readiness, implementation, and payment for results. This factsheet provides an overview of how GCF offers support for REDD+ across all three phases. GCF in Brief: Direct Access May 2018 One of the Green Climate Fund's distinctive features is the provision for developing countries to access financial resources through national entities, meaning that climate finance can be channelled to the country directly. This factsheet explains how direct access works and the support provided for direct access Accredited Entities. GCF in Brief: About the Fund May 2018 Learn more about the Green Climate Fund (GCF), an operating entity of the financial mechanism of the United Nations Framework Convention on Climate Change (UNFCCC) and Paris Agreement, dedicated to supporting global efforts to respond to the challenge of climate change. GCF in Brief: Safeguards May 2018 The Green Climate Fund makes sure the climate finance it provides developing countries is not accompanied by negative effects on local communities and the environment. You can find out more about how GCF avoids harm through its Environmental and Social Policy and Indigenous Peoples Policy in this factsheet. 
Mainstreaming gender in Green Climate Fund projects August 2017 This manual addresses GCF’s potential to mainstream gender into climate finance, building on its mandate to support a paradigm shift to low-emission and climate-resilient development. Developed with UN Women, this toolkit guides GCF partners on how to include women, girls, men, and boys from socially excluded and vulnerable communities in all aspects of climate finance. Gender mainstreaming is central to GCF’s objectives and guiding principles, including through engaging women and men of all...
https://www.greenclimate.fund/publications?page=1
What do leaders from the largest U.S. municipal utility and one of the industry's more disruptive technology upstarts have to say about the market transformation at the grid edge and what it means for utility evolution? GTM recently had the opportunity to sit down face-to-face with CEOs Doyle Beneby of CPS Energy and Naimish Patel of Gridco Systems to find out. San Antonio-based CPS Energy is the country's largest municipal utility in terms of total asset base, ownership of generation assets in megawatts and total end customers, serving roughly 1 million combined electric and gas endpoints. CPS also has healthy amounts of utility-scale and residential PV (compared to other utilities in Texas) and is currently deploying Home Area Networks (HANs) with a goal to reach upward of 200,000 homes in the San Antonio area. Patel, founder and CEO of Woburn, Massachusetts-based Gridco Systems, has raised approximately $30 million to design, build and deploy its system to support next-generation grid infrastructure. The company's "emPower" solution consists of its IPR (In-Line Power Regulator), DGC (Distributed Grid Controller) and GMAP (Grid Management and Analytics Platform). A number of top-tier utilities in North America and Europe are using the platform's ability to control voltages along individual distribution lines with devices that use no moving parts, need no regular maintenance, and keep running for decades, according to the company. (Gridco Systems, along with nineteen more of the industry's most innovative companies, was recently named to Greentech Media's Grid Edge 20 list.) Patel and many other market visionaries will be speaking at Grid Edge Live in San Diego, California on June 24-25. Part I: Technologies Driving the Modernized Grid The first part of the discussion touches on how electric grid infrastructure will change in the coming five to ten years in the face of increased distributed PV (GTM Research forecasts more than 300 percent growth in the U.S. over the next four years) and the strain it is placing on existing distribution infrastructure. [Chart removed; source: GTM Research] Patel named three trends that are driving the industry (and ultimately provided the rationale for founding Gridco): - The increasing adoption of renewable, time-varying sources of power introduces reliability challenges to the grid. - There are an increasing number of capacity constraints associated with transmission and baseload generation. Distribution utilities are being called upon by transmission authorities to provide deeper levels of peak capacity curtailment. But the reliance on traditional demand response providers to provide such deep levels of curtailment is becoming more difficult, as it fundamentally relies on customer behavioral change. - The increasing frequency of large-scale outages as a result of storms is forcing the need for much more efficient and quicker adaptation and recovery. Given these trends, Patel claims that a new class of infrastructure is needed going forward as utilities face the challenge of balancing supply and demand in a reliable and scalable way. This all comes with a new set of requirements focused on fine-grained delivery and management of power to end customers, a set of requirements that the existing class of tools will be increasingly challenged in providing, according to Patel. 
On the utility side, Beneby outlined several key themes that are priorities at CPS Energy: - Increasing customer participation in energy management services by putting more tools in the hands of customers to centrally and remotely manage their usage - Finding new ways to bundle demand response in a way that helps the system’s responsiveness, whether as a result of needing to shed demand to stabilize the system or if it’s needed for voltage regulation. To this end, deploying distributed, intelligent devices in aggregate will be more responsive than central generating stations. - Focused efforts on increasing customer participation in outage management and helping to reduce O&M costs, via "decentralized customer participation" - Deploying intelligent systems that act as "traffic cops" to ensure that the distribution system remains stable When asked about some of the most pressing issues facing the grid and what needs to be done to address those issues, Patel offered, “The idea of capacity management along with maintaining reliability is a particularly important one -- one that’s going to become more difficult to provide because what once was able to rely on predictable supply, or at least supply under a utility’s own control, is increasingly moving into a world where not only demand is going to become increasingly variable but an increasing percentage of supply will not be under direct control of the utility either. The balancing equation becomes more difficult. Given that it’s the fundamental role of the utility to deliver reliable, safe and efficient power, what must be true is that utilities provide for that control and own the ability to provide for that.” Given the wave of changes occurring at the grid edge, including the growth of distributed PV, new load types such as EVs, the growth of Home Energy Management Systems (HEMS), and other factors, Beneby offered the following insights as to how these trends will shape the way utilities manage distribution grid planning over the next five to ten years. - Utilities will likely have to rethink their entire capital-planning horizon. With that, they are going to need help in developing algorithms that tell them which areas of their service territory can best optimize the different types of technologies. - With the introduction of EVs, delivery infrastructure could be stressed, thus requiring new thought processes around transformers, conductor size, fuse size, etc. All of these things will have to be monitored and in some cases upgraded. - Conversely, in different areas with accelerated solar adoption, facilities could be underutilized, yielding the inverse and potentially causing stranded investment. As a result, CPS is considering where to adopt, or incentivize in some cases, different technologies. “We’re going to have to go out and have tailor-made marketing, perhaps, for adoption of technologies in places that are most advantageous to the grid so we can manage the stranded investment issue,” according to Beneby. All of these issues have led to significant growth of information technology (IT) being deployed by utilities worldwide, in a variety of applications. Patel feels that the monetization of these investments in IT can only be fully realized once they are complemented with operational technology (OT); that is, technology that can enable utilities to respond to the information rendered by the IT systems and do so in an adaptive way -- to actually make changes or enact change on the grid in response to those triggers. 
“Over the next several decades, what you are going to see is a complementary set of investments, necessary to monetize utility IT investments, in OT change agents,” according to Patel. Alongside all of this are the algorithms to manage those OT assets. Patel sees a future where there will be an increasingly versatile set of power electronics-based devices that can manage, regulate and alter the flow of power. The fundamental question then becomes how these technologies can be used for the purposes of supply/demand balance, capacity management and reliability. “It will be algorithmic-rich, and the algorithms are at the intersection of power systems and scalable computation,” says Patel. “This all refers back to the nexus of control. Much like the internet is composed of a set of change agents (routers, etc.), I think that over time you will see a similar evolution in the grid infrastructure.” Part II: Utilities Transitioning Into an Age of Energy Decentralization Other major technology industries have gone through phases of moving from centralized architectures to highly distributed ones (computing and telecommunications are two examples). These shifts have led to significant opportunities for service providers and technology vendors alike. According to our research at GTM, that transition is already underway in the electric utility industry and will affect everything from technology adoption to business model optimization. GTM: What types of challenges and/or opportunities will arise as this shift to decentralization takes place? Beneby: It’s coming. It has been our view all along that this transformative disruption is occurring and we want to be a part of it. We want to not only jump on the train, but to lead, as opposed to denying its existence or its inevitability. When it gets here, the first issue we’ll have to face is stranded investment. This is a very capital-intense industry. Unlike other industries that may have become decentralized, you probably didn’t start from the same starting point of legacy capital investment that has to be recouped. Although some of it may be solved by commissions or governing bodies, some of it is going to have to be solved by the utilities in finding a way to make that work. It can be a positive catalyst. How do you avoid leaving that capital investment on the table when you have to pay investors and bondholders a return while certain assets are not being utilized and not earning as much revenue? The other part is a social compact. Not all classes or categories of customers are ready for a distributed world. Some of them are heavily dependent upon the utility for the obligation-to-serve aspect of what we do. Figuring that out will be critical because they cannot be left behind. When we look at distributed solar adoption rates, they are congregated around certain higher-economic-level ZIP codes. We’re looking at that and trying to figure out what programs we can offer lower-income customers. The second part is trying to figure out how to bring along all customers. If we don’t figure that out, it will become a red line issue. Patel: Broadening the class of service that Doyle is mentioning is a critical piece of the equation. Fundamentally, we saw that transition in the telecom industry. The deregulation that gave rise to CLECs [Competitive Local Exchange Carriers] was fundamentally a result of the increased level of demand diversification by the customer. 
Because of the steep learning curves of some of the new technologies, such as PV, we're seeing the beginnings of that now in the utility space, which is to say not every customer is the same. The social path that was justifiably set to incentivize investment must necessarily change because not all customers are going to be the same. Their demands will be different. The revenue models for utilities, whether IOUs, munis or co-ops, are going to change. In particular, one piece of that is rate design. Rate design is going to be an interesting challenge to tackle, so that you're not leaving anyone behind but you're also incentivizing others to ask for and demand what they would like, and incentivizing the utilities to invest to supply that demand. There are cases where it's difficult to maintain a constant reliability metric across service territories. There are places in the Northeast where this [issue comes into play]. That asymmetry, in and of itself, is also a cause for the need to look at tiered services. That's an absolutely essential piece to the sustainability of the industry. Beneby: A lot of these tiered services are going to have to be menu-driven. If you add another layer to the concept of tiered services, there may be scenarios where you choose the amount of reliability that you want. And we have to be able to have the gateways, if you will, to be able to do that. That may free up some bandwidth, but it could become that fine in the next ten to fifteen years where you may pay a premium for being restored in a certain amount of time -- under an hour, for example. One of the other challenges is security and protection of customer data. We have to find a way to make this transition [so that] customers are never worried about data security. Patel: Data suggests that a large percentage of cyber attacks in the United States are against energy infrastructure. If that's going to be the case going forward, one necessarily must move to a decentralized architecture for resiliency. I think there is a sentiment in the industry [on the part of] a number of utilities that the upcoming changes are necessarily going to give rise to revenue reductions and therefore impact to their investors, particularly IOUs. I don't think that will be the case, because this tiered service idea has the ability to enable utilities to operate on multiple points along the demand curve, as opposed to [operating at] one point on the demand curve today. Going forward, there will be some customers that will be lower sources of revenue for the utility, but there will also be other sets of customers that will be higher revenue-per-unit sources for the utility. The utilities that take a proactive approach will be the ones to benefit first. The idea that revenue must reduce is an intrinsically incorrect assumption. Beneby: Moving forward, perhaps in a few years, imagining a solar farm for example, where the subsidies roll off, cash flows change and we have an opportunity to own -- we've got to have people that understand this model. The second piece of it is, as we move toward a distributed architecture, we have to have the people that understand it architecturally. A lot of the skills we're going to have to import, but troubleshooting and the like, we're going to have to understand, because the value for us, as utilities, is that we still have the first shot at retaining that bond with customers. Part of the advantage to us is that the things we're talking about are things that not many customers think about. 
They have a pretty paternalistic relationship with their electric utility. There could be ways that change happens dramatically in the manner we’ve talked about, but we can retain a level of branding with our customers and provide services to them that allow us to continue in a world where it makes business sense. GTM: What are some of the fundamental differences in future utility business models when considering the differences between IOUs, munis and co-ops? Beneby: There is a significant difference. The IOUs have a more acute concern about stranded investment. Typically, they are in decentralized, competitive markets. We are not, here in Texas, yet. It’s ironic because we are working on solutions even though the threat is not as clear and present as it is to other IOUs. Second, it’s a different deal to have to feed investors as opposed to a community. We have to be competitive. We have to make a return. Our shareholders are de facto the city, because there’s a general fund transfer from some of our gross revenues, but that’s much different than your typical investor in the private investment world. Don’t miss the fact that in this model, a big, monolithic multi-billion-dollar asset base utility is working with a company like Gridco. You’re seeing some of the future right here. This is unique and provides a way that will enable utilities to survive. It’s also going to help the marketplace because we’re going to be able to bring to the table a lot of customers and a lot of applications that companies like Gridco and others can have access to. I think that most big utilities ought to be thinking this way. We can help bring low-cost capital to the table and ways to scale up that perhaps small companies can’t. You’re looking at the future here. Early adopters are going to have an advantage. PART III: Technology Game-Changers and Advanced Energy Service Offerings GTM: What are some of the key technologies that will have the greatest impact on the industry going forward? Patel: A new class of devices must be distributed in their deployment because of the challenges posed at the edge of the grid. They must be dynamic in nature in that they will have to adapt in short time frames to varying grid conditions, and they must be able to grow and scale the grid in modular fashion, relying on elements that can decouple or isolate one side of the grid from the other. We’re also seeing things like smart inverters, which are indeed necessary pieces of the system, but unless the utilities have control over such assets, and have that control in a scalable way, it’s going to be very difficult to address the challenges being posed. What we see is not only an opportunity to provide a class of infrastructure for the regulation of power, but also a class of infrastructure that in essence forms the nexus of that control in a distributed way, rather than a centralized way, for scalability reasons. GTM: How important is distributed storage in aiding the transition that we’re discussing? Beneby: It’s essential. There are several game-changers in this industry and storage is one of them. It’s not only critical from a customer perspective but critical for utilities as well. If we can have some of these storage mechanisms to inject into the grid during peak, it can create opportunities for customers to save and not be fully dependent upon the utility, charting their own path, but also helping the grid. I hope that the first wave of the change will be electric vehicles. 
If we can spur adoption and create a regime where we have the infrastructure where people have an incentive to discharge during peak times, for example, it creates a scenario where we could potentially have several thousand mobile power stations. On the retail side, it’s going to happen. It needs to happen. My view might be that the winning horse may be battery storage in electric vehicles. Patel: If you fast-forward, storage has to be a component of the solution. There are already niche applications where it proves out economically -- peaker plant replacement, as an example. Over the next decade, the cost of storage will be such that the way to monetize it will be to share it across multiple applications or to use it in a way that’s very sparing in its capacity, so the efficiency of its use will be very important. What that will then require is algorithms. How do you make the most efficient use of the storage via intelligent algorithms? Software control of storage will be critical to operationalizing it at scale. GTM: What is the future requirement for back-office systems to enable efficient grid network design and modeling? Patel: Before getting into that, it’s important to consider that such systems often require human resources to be educated on, operate and manage, which is a challenge. With that said, the goal should be, as we move into this more dynamic environment, to achieve autonomous operation of the distribution feeder system. GTM: New partnerships are forming around holistic consumer energy services and systems. Examples include Nest and Sunrun, SolarCity and Tesla. What are your thoughts about the growth of players like this in providing a more holistic consumer energy management service offering? Patel: The general trend of customers being able to gain more insight into their usage and granularity into that insight no doubt seems to be afoot. What that will enable is for customers to not simply be passive consumers of power but rather important elements of the grid infrastructure itself. I can imagine a future, where much like in the case of electric vehicles, there will be a need, where there is strong penetration, to explicitly schedule the charging of those devices, whether that’s induced by price signal or some other mechanism. You can even imagine that, at a finer granularity, all of your home appliances understand their energy consumption needs and can interact in a dynamic way with the grid infrastructure in forecasting their needs and/or dropping out and saying, “I’ll wait for another five minutes to consume power because there may be a capacity constraint upstream.” All of these things feed into the system as a whole as new controllable elements. They’re new resources that once were not viewed as resources in that sense. GTM: Any specific thoughts on microgrids? Patel: I think that microgrids are another example of a customer-owned grid, fundamentally. It’s just another example of where decentralization of control over the power is going to occur. Although they are currently viewed as customer-owned, they can very equally be owned by the utility. It is part of the service delivery of power. It will have natural benefits in terms of reliability and outages. It remains to be seen how the economic and ownership structures around microgrids will evolve. Beneby: The deployment of all of these technologies, for the foreseeable future, will still be required to allow a vast majority of customers to employ them passively. 
It’s likely that a small percentage of customers are going to be mentally engaged with all of this. There may be an initial point where they work with utilities to program one time, but then it’s got to be, leave it alone. The majority of people don’t have time or interest. They need to hit a button and have it be taken care of. We believe that there are generally four tiers of customers: - For some it’s going to be islanding themselves for purposes of privacy and resiliency. - A younger demographic that likes things new and shiny and wants to be able to control their appliances, manage things remotely and do it from a smartphone. - There will be some that want to do good. All of this becomes meaningful because it offsets carbon somewhere and it impacts communities in a positive way. - Lastly, and for us most importantly, people want to save money. They want a lower bill, or else all of this means nothing. In any case and whatever the motivation, it’s going to have to be very passive. *** Join Greentech Media, Gridco Systems and many other forward-thinking vendors, utilities, policymakers and industry associations at Grid Edge Live on June 24-25 in San Diego, California, as we explore in-depth many of the topics discussed in this article.
https://www.greentechmedia.com/articles/read/can-utility-revenue-climb-despite-growth-in-distributed-generation