We just returned from Rwanda after attending the 14th Kwita Izina, a unique naming ceremony for newborn mountain gorillas. An endangered species, the gorillas were brought back from the brink of extinction in the 1980s to a current population of over a thousand. The world had gathered to attend the ceremony and a two-day workshop called Conversations on Conservation, and it was heartening to see a tiny country roughly the size of Meghalaya lead the world in the fields of cleanliness, wildlife protection and sustainable tourism.

India is no stranger to conservation. This is a country where animals are deified as vahanas (sacred mounts) for gods, where trees, mountains and rivers are worshipped, where the world’s first laws on conservation were promulgated by Emperor Ashoka in his rock edicts, and where communities are ready to lay down their lives for the protection of flora and fauna. Many of the shikargahs (hunting reserves) maintained by royalty and the British became the nucleus of today’s wildlife sanctuaries. Through conservation efforts like Project Tiger and Project Elephant, numbers revived, and in many cases, tribals living on park fringes were made custodians, and poachers turned protectors. But somewhere along the way, we lost the plot, and population pressures led to massive deforestation, unchecked exploitation of nature and constant human-animal conflict.

Communities come together

Yet, some community-led eco-initiatives across India give hope to the rest of the country. In Nagaland, a region where hunting is a way of life, conservation might seem a far-fetched concept. For centuries, warrior tribes embellished their colourful costumes and headgear with feather, tusk, claw and bone. During festivals, long bamboo pennants were festooned with iridescent dead birds. But in the dark woods of Nagaland, a small Angami village community in Khonoma is committed to protecting the exotic Blyth’s Tragopan. The vulnerable pheasant, widely hunted in the past for food in Nagaland and Arunachal Pradesh, suffered greatly due to rampant deforestation and slash-and-burn cultivation, which destroyed its habitat. Being excellent hunters, Nagas mimic birdcalls and lure the gullible bird by emitting calls of the opposite sex. When Khonoma switched to alder cultivation as part of a larger plan to create a model village for eco-tourism, it paved the way for the Khonoma Nature Conservation and Tragopan Sanctuary (KNCTS). Set up in 1998, the sanctuary is maintained entirely by the village community, which enforced a complete hunting ban in 2001. In the 2005 census, 600 tragopans were recorded, besides other endemics like the Naga Wren Babbler. The 25-sq-km sanctuary is a great place for birdwatching.

Perched like an eagle in the upper reaches of Western Arunachal Pradesh, Eaglenest Sanctuary was practically unknown to the birdwatching community till 2003! Largely due to the efforts of the Kaati Trust, a non-profit organisation dedicated to biodiversity research and conservation in Arunachal Pradesh, Eaglenest is now rated among the top birding hotspots in Asia. Tapping into the indigenous knowledge of forest-dwelling tribes like the Bugun and Sherdukpen paved the way for responsible wildlife tourism through sustainable partnerships. The recent discovery of a new bird species, the Bugun Liocichla, by birder and conservationist Ramana Athreya has spurred interest in the tiny 218 sq km sanctuary.
With an altitudinal variation of 500-3200 m, trails from the tented campsites of Sessni (1250 m), Bompu (1940 m) and Lama Camp (2350 m) reveal rare species like Temminck’s tragopan, fire-tailed myzornis, wedge-billed wren-babbler, Ward’s trogon, beautiful nuthatch, purple cochoa and chestnut-breasted hill-partridge. Many of the local Bugun tribesmen serve as guides and naturalists, making them direct stakeholders in the conservation story.

At Mawlynnong in the East Khasi Hills of Meghalaya, local inhabitants take great pride in the tag of ‘cleanest village in Asia’ their tiny village has acquired. The small community of about 500 people is fastidious about cleanliness, and the pathways are spotless, with beautiful cane dustbins outside every home. A green sign proudly proclaims ‘Mawlynnong: God’s own garden’, and quite ironically, the local economy thrives on the cultivation of Thysanolaena maxima or broom grass, whose inflorescence is used for the common phool jhadu. The village authorities run a scenic guesthouse and machan overlooking a rivulet for hikes to Meghalaya’s fascinating living root bridges. In this age-old method of crossing wild mountain streams, the pliant, quick-growing roots of the Ficus elastica tree are entwined to grow into an elaborate lattice. Over time, the bridge is paved with stone. There’s an unwritten rule that if any villager passing by spots a new root, he has to weave it into the mesh.

Another community that occupies prime position on India’s conservation map is the Bishnois, a sect founded in the late 15th century by Guru Jambhoji, who proposed 29 principles (‘bish-noi’ in Marwari) governing a conscientious life and conservation. Being staunch vegetarians, they worship the all-sustaining khejri tree, do not sterilise oxen, and consider all life forms sacred. They revere and protect the blackbuck with their life, as certain Bollywood stars on a hunting trip found out! Long before Sunderlal Bahuguna’s Chipko movement and Hug-a-Tree campaigns, the Bishnois laid down their lives for trees; a memorial in Kejarli village in Pali district honours their commitment and sacrifice. In 1731, the fearless Amrita Devi hugged a khejri tree to prevent it from being cut to fire a brick kiln of the king, and 362 Bishnois joined her and laid down their lives.

In Rajasthan, Tal Chhapar Sanctuary is a taal (flat tract) of open grassland with scattered acacia trees on the edge of the Thar Desert. Spreading over 1,334 sq km, it is a haven for India’s most elegant antelope, the blackbuck. Even today, each Bishnoi family makes a monthly donation of one kilogram of bajra (pearl millet) to a community store, maintained to feed blackbucks every evening. After wandering the plains all day, blackbucks assemble around Bishnoi hamlets at dusk. Locals lovingly feed these herds, which vary from 50 to 500 in number. The villages of Kejarli, Rohet and Guda Bishnoiya offer great insights into the inextricable link between Bishnois and nature.

In a distant corner of Jodhpur’s Thar Desert, the nondescript village of Khichan has gained international acclaim for its heartwarming tradition of feeding demoiselle cranes (locally called kurjas) every winter. A small grain-feeding initiative snowballed into a conservation movement, with over 9,000 cranes visiting Khichan every year between August and March. The locals, mostly Jain Marwaris, are strictly vegetarian and idolise the kurja for its vegetarian diet and monogamous nature.
As part of a systematic feeding programme, cranes are fed twice a day at chugga ghars (feeding enclosures) on the village outskirts. Each session lasts 90 minutes, and 500 kg of bird food is consumed daily! This huge demand is met by generous donations from locals and tourists, overseen by societies like the Kuraj Samrakshan Vikas Sansthan and the Marwar Crane Foundation. With avian and human visitors on the rise, many buildings have been converted into lodges from which to witness the dance of the demoiselles and the sky enshrouded by grey clouds of birds on the wing.

A similar initiative can be seen closer home at Kokkarebellur, on the banks of the Shimsha river, off the Bengaluru-Mysuru highway. Dotted with water tanks replete with fish, Kokkarebellur has for years been the roosting site of painted storks and spot-billed pelicans, which nest atop ficus and tamarind trees in the village centre. Catalysed by an incentive scheme introduced by senior forest official S G Neginhal in 1976, locals adopted a sustainable conservation model. Though they are compensated for losses to their tamarind crops due to nesting, the villagers’ involvement transcends cold commerce. They protect the birds as a ‘living heritage’, regarding them as harbingers of good luck and prosperity. The migrants arrive in September after the monsoon to build nests and lay eggs from October to November. After roosting for months, they tirelessly feed their hatchlings through summer. When they fly back in May, womenfolk bid them emotional goodbyes, as if they were their own daughters leaving their maternal homes after delivery.

Free as one can be

The endemic Nilgiri tahr roams free in the 97-sq-km Eravikulam National Park, on mountain slopes carpeted with purple kurinji flowers in the shadow of Anamudi (2,695 m), the highest peak south of the Himalayas. Managed as a game reserve by the Kannan Devan Hill Produce Company, Eravikulam was earlier a private hunting ground for British tea planters. Estate managers served as wardens while Muduvan tribals were employed as game watchers. In 1928, the High Range Game Preservation Association was set up to manage hunting activities. Later, this regulatory body lobbied for the creation of a specialised park and continues to manage and protect the area along with the Forest Department. Of the 1,420 Nilgiri tahrs found in Kerala, Eravikulam harbours the largest surviving population: 664 as per the 2017 census.

Wrapped around three dams that create a 20.6 sq km reservoir, Parambikulam is a 285 sq km park on the Kerala-Tamil Nadu border. An altitudinal variation of 600 m to 1,439 m blesses it with astonishing diversity, with Karimala Peak the park’s highest point. Once a hub of the British timber trade, today the park is a role model for sustainable tourism. Eco-tourism packages include wildlife safaris, bamboo rafting, birdwatching, overnight camping inside the forest and guided walks like the Kariyanshola Trail and the Cochin Forest Tramway Trek. Visitors stay in Swiss-style tents, treetop huts overlooking the reservoir, and a bamboo hut on Vettikunnu Island, accessible only by boat. The 48.5 m-high Kannimara Teak, believed to be the largest in Asia, is hailed as the pride of Parambikulam, and it takes five men to encircle the 450-year-old tree with a girth of 6.57 m. The other big draw happens to be a tiny creature, the coin-sized Parambikulam frog, endemic to the park.
Kerala has taken the lead in sustainable eco practices through its walking trails in Periyar, with local guides, as well as Thenmala, the first planned eco-tourism destination in the country. The damming of three rivers has created a scenic reservoir where boating is conducted, besides rope-bridge walkways, trekking, and a deer rehabilitation centre.

In adjoining Coorg, another biodiversity hotspot in the Western Ghats, botanist-microbiologist couple Dr Sujata and Anurag (Doc) Goel run a 20-acre farm growing cardamom and coffee in the shade of rainforest trees. A unique blend of eco-tourism, sustainable agriculture and environmental education, the award-winning eco lodge is a good place to go on guided plantation walks while staying in the low-impact Drongo and Atlas Cottages. Wholesome meals are prepared using fuel from the biogas plant, with farm produce like cardamom, civet cat coffee, gourmet filter coffee, pepper and vanilla sold under the label ‘Don’t Panic, It’s Organic’. Proceeds go towards the Goels’ biodiversity research foundation, WAPRED.

To protect the fragile watershed of Talacauvery and its rainforest ecosystem, Pamela and Dr Anil Malhotra have, since 1991, acquired over 300 acres of private forest land to create the Sai Sanctuary Trust. With the paradise flycatcher as their logo, their conservation efforts have paid dividends: the river has been replenished, otters have returned along with the birds and wildlife, and butterflies congregate in large numbers.

Walk on the wild side

In the high-altitude cold desert of Spiti in Himachal Pradesh, Spiti Ecosphere partners with local communities for sustainable development in this fragile mountain ecosystem. The stress is on livelihood generation through conservation of indigenous natural resources like tsirku (seabuckthorn) and wild organic produce. As part of responsible eco-travel, tourists can engage in voluntourism like building energy-efficient homes and greenhouses. As part of wildlife conservation, follow the trail of the endangered Himalayan wolf and the snow leopard at Kibber Wildlife Sanctuary and Pin Valley National Park. Discover ancient fossils and remote Buddhist monasteries on yak safaris or treks while staying at rustic homestays in high-altitude Himalayan villages. In Ladakh, the Snow Leopard Conservancy and similar programmes have created alternative livelihoods for local villagers, who now work as trackers during a busy winter season spent pursuing the Grey Ghost of the Himalayas.
https://www.deccanherald.com/exclusives/world-tourism-day-special-where-eco-friendliness-is-the-norm-693975.html
CONTESTED: Two out of three of those who have formed an opinion would like to see a referendum on the EEA Agreement.

EEA: Today it is 25 years since the then trade minister Bjørn Tore Godal (Labour) signed the EEA Agreement in Portugal. Some saw the agreement as a lasting alternative to full EU membership. Others saw the EEA as a staging post on the way to EU membership. And still others saw the EEA as an undemocratic near-membership. Before the Storting was to consider the EEA Agreement, the then Nei til EF collected 175,000 signatures, which they presented to the foreign affairs committee. Many also argued that the agreement was so far-reaching that it ought to be put to a referendum. Today, many still want a vote, a recent poll by Sentio shows:
• 47 per cent of respondents say they are in favour of holding a referendum on the EEA. Only 20 per cent say they are against.
• At the same time, many are uncertain: 33 per cent answer "don't know".

The EEA:
• An agreement negotiated between the six EFTA countries Norway, Sweden, Finland, Austria, Switzerland and Iceland, and the EC, in 1992. It entered into force on 1 January 1994. Switzerland rejected the EEA in a referendum.
• The EEA means that Norway must adapt to EU rules. The agreement was adopted with a three-quarters majority in the Storting. SV, Sp and individual representatives of Frp, KrF and Ap voted against. The agreement does not cover fish and agriculture.

About the survey:
• Conducted by Sentio on behalf of Nei til EU among a representative sample of 1,000 people between 20 and 24 April 2017.
• The question was: "Are you for or against holding a referendum on the EEA Agreement?"

More knowledge needed

Respondents were also asked how they would vote "if the alternative to the EEA were a trade agreement". 23 per cent would choose the EEA, 35 per cent a trade agreement, and fully 43 per cent don't know. The survey fits a familiar pattern: depending on how the question about the EEA is framed, the answers differ, and the "don't know" share is consistently high. Other surveys have shown a majority in favour of the EEA. Nei til EU's leader Kathrine Kleveland believes more knowledge about the agreement is needed. "I am not going to celebrate the 25th anniversary, but I see good reason to mark it. My goal is to increase knowledge of the EEA's impact on Norwegian society. All opinion polls on the EEA show far too high a don't-know share. We need to look at what the consequences of 25 years of the EEA have been," she says.

Changing society

The EEA was negotiated between EFTA, which then consisted of six countries, and the EC, with ten member states. The EFTA countries were to gain access to the internal market. The price was that in areas relevant to the internal market, the EFTA countries' rules must be aligned with those of the EC (later the EU). Important exceptions were fisheries and agricultural policy. Over a quarter of a century, the agreement has led Norway to adopt 11,400 EU rules. An estimated one third of all Norwegian laws have been amended as a result of the agreement. To mark the anniversary, Nei til EU is publishing a report written by Dag Seierstad, a long-time columnist for Klassekampen, an SV member, and one of those who have followed the EEA most closely. The report sums up the development as follows: "A stream of new legal acts threatens to demolish Norwegian standards in working life. At the same time, the EU treaty and other EU rules are being interpreted more strictly than before and affect ever more Norwegian interests. In field after field, harder competition is being forced into Norwegian society. More competition means faster restructuring, more downsizing, more closures, more commuting, more disability benefits and more early retirement.
More competition means more people in insecure jobs."

Wants a new agreement

The EEA is contested even within Nei til EU, where a minority is in favour of it. Kleveland wants to replace the EEA with a less comprehensive trade agreement. "Britain is now going to negotiate a new agreement, and we must follow that closely. Other countries, such as South Korea, have also concluded trade agreements with the EU. That shows the EU has an interest in trade agreements with others," she says. Asked whether Norway will not be in a much weaker negotiating position now than when the EEA was concluded, given that the EU is now much bigger and EFTA much smaller, she answers: "The EEA is not a good agreement, because it contains so much more than trade. More than 150 countries sell goods to the EU. The others do not have to change their legislation or weaken their collective agreements; we do. Since we are mutually dependent, I do not think the EU will put obstacles in the way of buying fish from us. They need it, after all. Norway's exports have broadened so that countries outside the EU have become more important, so we should not overestimate the EU as a market either. I am convinced that we can achieve an updated trade agreement with the EU." [email protected]
Hypocondrie a 7 per 2 Oboi, 2 Violini, Viola, Fagotto e Basso [ZWV 187, Praga, 1723]. Source: the autograph ms of the score, Mus. 2358-Q-1, Hypocondrie, ZWV 187, „à Praga, 1723“. The ms was digitized in the project „Instrumentalmusik der Dresdner Hofkapelle“, SLUB, Dresden. Date: Sunday, October 9, 2011. Release notes: Version 1.0 with a new editorial format.
https://baroquemusic.it/content/hypocondrie-7-2-oboi-2-violini-viola-fagotto-e-basso-zwv-187-praga-1723
In the Bird Clan, Eagle is the keeper of the Northern Door of the Medicine Wheel – the White House which represents Spirit. And this is where the Grandfathers live. When we enter into prayer every day, it helps us remember our relatives in all four directions – beginning with those in the North who are suffering from food insecurity and pollution of our natural water supply.
- Happy 2021 Family of Light! This is The Year Future Proves the Past. Truth is the only antidote for the ailments of the world, for our families and for each of us as individuals, and Eagle is the totem of truth. The sacred bird of the Cherokee and other Indigenous nations – and people around the world – who recognize the powerful, ancient role of Eagle as the messenger of the Great Spirit.
- Support Through Lockdowns – Bird Clan Messenger Offers Private Sessions
- VIDEO: Squirrel Medicine – Reclaiming the Spirit Child. Squirrel is Mother Nature’s acrobat and shows us how to leap from place to place either when necessary to keep us safe, or just in play with our friends and family. Squirrel teaches us the importance of play in everything we…
- Bear Medicine: Guard Your Heart. Originally posted on Bird Clan Messenger, by Kandace Keithley: Osiyo Family, It is my honour to offer you this message from the Ancestors, especially directed to the young Starseeds who are telling our messengers that they are feeling down and…
- Squirrel Medicine – Resurrecting the Spirit Child. “One thing to remember is to talk to the animals. If you do, they will talk back to you. But if you don’t talk to the Animals, they won’t talk back to you. Then you won’t understand, and when you don’t understand you will fear, and when you fear you will destroy the animals, and if you destroy the animals, you will destroy yourself.” – Chief Dan George
- VIDEO: Black Panther Medicine Wheel Reading
- Black Panther Medicine – Making Peace with Uncertainty
- Jaguar Medicine – Truth at the Bottom of the Swamp
- Clans, Guilds, and Reclaiming Our Identity. Originally posted on Bird Clan Messenger: “Clan Mothers” by Shoshone-Tataviam artist, Stan Natchez. Read Tom Keefer’s excellent article on the importance of the Matrilineal Clan System in New World Indigenous society. One of the most significant differences between Indigenous…
https://birdclanmessenger.com/page/2/
Elaborating the Communication Theory of Identity

The Communication Theory of Identity focuses on how effectively one can identify with an issue through communication. Writing is one of the ways used to pass the required information to a reader, who is the intended audience. A good author will always make an effort to use language that is easy for the reader to understand and to present the work in such a way that the reader is able to relate to the issues written about. This is when it can be said that the author has delivered his message to the reader; the lack of this would result in ineffective communication. When Foer says “…… given that eating animals is in absolutely no way necessary for my family- unlike some in the world, we have easy access to a wide variety of other foods- should we eat animals?”, he is acknowledging the fact that he is addressing a reader who eats meat. The author here is quite categorical when he asks the reader “… should we eat animals?” This question most probably applies to a reader who eats meat as a source of some of the nutrients that a human body requires for normal functioning. It is clear that, for effective communication, one should choose one’s words appropriately. If, for instance, such a question of whether we should continue eating animals were asked of a reader who does not eat meat, it could appear vague and irrelevant in that context. Foer continuously shows that he is making a deliberate and conscious effort to relate to the reader. When he says “…we have easy access to a wide variety of other foods”, the author is trying to make the reader feel part of the issue being communicated. The use of “we” plays a major role in showing that the author is not just talking about himself, but incorporating ideas from others as well. The kind of “constant personal decision making” that Foer is referring to is the ability of a vegetarian to decide not to eat animals. He mentions that “there are even circumstances that I would be forced to eat a dog.” The author uses a dog just as an example of the many animals that might be used as a source of meat. Eating a dog requires one to make a strong decision on whether it is appropriate to do so. Since Foer has said that “being a vegetarian is a flexible framework,” it implies that in a situation where the reader may be a vegetarian, one would be wise to decide to avoid eating meat. Foer suggests that constantly being caught up in making a personal decision is taxing and hard to keep up. This is clear when he says “I couldn’t honestly argue, as many vegetarians try to, that it is as rich as a diet that includes meat.” Many vegetarians find it quite difficult to keep to their decision to avoid meat, and those who stick to their principles justify their choice by claiming that a vegetarian diet can contain the same level of nutrients as meat. Foer finds it taxing to keep within the limits of the meats that are available to him. He says, “I love sushi, I love fried chicken, I love a good steak. But there is a limit to my love.” The ability to limit oneself, especially when there is such a great variety of meats, is a challenging decision that might not be easy to keep. Foer states that “being a vegetarian is a flexible framework.” The sentences that support this remark are, “I love sushi, I love fried chicken, I love a good steak.
But there is a limit to my love.” This clearly indicates that Foer would not view the issue of eating animals with a fixed opinion. One may, in a given circumstance, evaluate and decide whether to be a vegetarian or not. In other words, one cannot entirely say that being a vegetarian is best or that eating meat is the recommended option. Foer also adds that “… of course there are circumstances I can conjure under which I would eat meat.” This is again an indication that being a vegetarian is not necessarily written in stone; there should be flexibility to eat meat, especially when it is the only alternative. It would be naïve to die of hunger when meat is available, on the pretence that one is a strict vegetarian. Likewise, an individual who is not a vegetarian should eat vegetables when circumstances demand it.
https://www.nursingessayswriter.com/elaborating-the-communication-theory-of-identity/
LaSalle College Vancouver’s Bachelor of Applied Design in Graphic Design prepares students for a creatively fulfilling and rewarding career as visual storytellers. Students receive hands-on training in a state-of-the-art learning environment with instruction from top graphic designers. The curriculum explores the increasingly vital relationship between design and sustainable principles. As environmental demands escalate and take centre stage in educational and political discourse, there is a growing need for designers who can provide solutions while creating sustainable, eco-conscious designs. Program graduates will possess knowledge of design and sustainability fundamentals, as well as an understanding of core values, emerging trends and discipline challenges. Students study their craft in an inspiring classroom setting, where ideas are nurtured and refined. Throughout the degree program, students create a portfolio to showcase their skills and creative aesthetic. The Bachelor of Applied Design in Graphic Design program is perfect for creatives who want to spend their days working on high-impact visual campaigns that resonate.
https://www.lasallecollegevancouver.com/graphic-design-school/graphic-design-bachelor
This program focuses on every aspect of healthy living to encourage our students to be active participants in their own lives and to integrate healthy habits to become successful adults. This class covers a wide variety of topics, including nutrition, sleep, physical activity, healthy relationships, financial literacy, self-esteem, the human body, reproductive health, good decision-making, how to manage your phone and social media responsibly, and how to have difficult conversations. Students participate in a Woodward Wellness program about stress relief and make stress balls. Students work together to determine the best stress relief tactics for them.
https://www.thewoodwardschool.org/WoodwardWellness
Reflecting On and Documenting Your Teaching Experiences

Reflecting on Teaching: What? For Whom? Why?

Often, the motivation to improve one’s teaching by revising practices or experimenting with new initiatives stems from reflection. This reflection often focuses on feedback received from others, such as student evaluations or peer reviews. Reflection further involves one’s own assessment of experiences, through self-observation and activities that foster self-analysis such as teaching workshops or individual consultations, and/or pedagogical research. Written reflections on teaching can be used for personal, professional, or pedagogical purposes. While teaching statements are increasingly an important part of hiring and tenure processes, they are also effective in helping instructors clearly and coherently conceptualize their approaches to and experiences of teaching and learning, and deepen and renew their commitment to values and goals for their teaching. At Vanderbilt, promotion and review processes require faculty to reflect on their work and document their progress in teaching, research and service. When reporting on teaching, faculty are encouraged to articulate their teaching philosophy and objectives; describe past and planned course and curriculum development; and explain pedagogical initiatives, innovations or experiments and their results. The Center for Teaching provides one-on-one consultations on evaluating and documenting your teaching. As we assist you in preparing your teaching documentation, we work with you to reflect deliberately on your practice as a means of deepening your understanding of pedagogical goals and methods, and linking those goals and methods to student learning. If you’d like more information about reflecting on and documenting your teaching, please stop by, or call, the Center for Teaching (322-7290) or visit our set of teaching guides on the topic.
https://cft.vanderbilt.edu/2011/06/reflecting-on-and-documenting-your-teaching-experiences/
Abstract: Patient responses to chemotherapy are known to differ widely across patient populations. Pharmacogenomics, or the study of genetic differences underlying interindividual variability in drug responses, seeks to identify genetic polymorphisms in drug-metabolizing enzymes and other molecules influencing drug activity. The goal is to individualize the selection of chemotherapy agents and dosages prospectively for each patient in order to optimize the likelihood of treatment success and to avoid the severe systemic toxicity that is a hallmark of chemotherapy. Microarray analyses and mouse models are valuable tools enabling genome-wide searches for candidate genes and polymorphisms.
https://www.sciandmed.com/sm/journalviewer.aspx?issue=1029&article=465&action=3&search=true
Most work along coasts is done by waves. Waves are created by the transfer of energy from the wind blowing across the surface of the sea. The size and strength of individual waves depend on:
- the velocity or speed of the wind
- the period of time that the wind has been blowing
- the maximum distance over the sea that the wind can blow (the fetch)
Local or sea waves travel only short distances and are created by local winds. Swell waves travel huge distances and are created by large storms in the middle of the oceans.
Wave Properties
- 1 = wave crest
- 2 = wave trough
- 3 = wave length
- 4 = wave height
- 5 = circular movement of wave particles
Wave period = time taken for a wave to travel one wave length
Wave velocity = speed of movement of wave crest in a given period of time
Wave steepness = the ratio of the wave height to the wave length
Wave energy is proportional to the wave length times the square of the wave height (a worked illustration appears at the end of these notes).
In deep water, wave particles follow a circular motion. In shallow water (once the depth becomes less than a quarter of the wave length), the wave encounters friction with the sea bed and the circular motion changes to an elliptical motion. The top of the wave continues to move forward faster than the base of the wave, causing the wave to break. The position of the plunge line will vary according to changing conditions. As a wave breaks, water rushes up the beach (swash) and is then carried back down the beach by gravity (backwash). The amount of percolation of the swash depends on the porosity of the beach material.
When waves approach a coast with headlands and bays, the waves are refracted. This means that the wave crests are bent to become increasingly parallel to the coast. As the wave crest approaches the shallow water around a headland, friction slows the wave, but the part of the wave crest in the deeper water of the bay continues to move forward at a faster speed, so turning the wave crest. Lines drawn at right angles to the wave crests (known as orthogonals) show the bending of the wave crests by refraction. The effect of refraction is to concentrate wave energy on the protruding headlands. Longshore currents carry the eroded headland material and deposit it in the bays. In time, the coastline becomes less irregular as headlands are eroded and bays filled in.
Constructive and Destructive Waves
There are two types of waves that affect the coast:
Constructive waves
- These are usually low waves with a long wave length and a long wave period (6-8 waves per minute). They are often swell waves.
- Constructive waves steepen slowly as they approach a beach.
- The wave breaks gently, the swash moves up the beach slowly and water percolates quickly into the sand.
- The backwash is usually weak.
- These waves slowly push material up the beach, creating sandy ridges or berms (high sandy ridges), and ridges and runnels (small ridges and depressions on the lower beach).
Destructive waves
- High with a short wave length and a short wave period (10-14 waves per minute). They are often local waves.
- Destructive waves steepen quickly as they approach a beach.
- The waves plunge with greater force on to the beach.
- The swash is short and there is a more effective backwash which drags material down the beach.
- The overall effect is that whilst some large storm waves may throw shingle to the top of the beach and form a storm ridge, most material is dragged downwards to form a breakpoint bar.
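As a worked illustration of the steepness and energy relationships listed under Wave Properties above (a sketch only, written directly from the proportionality stated in these notes, with H for wave height and L for wave length):

```latex
% Wave steepness and wave energy, as defined in the notes above
S = \frac{H}{L}        % steepness: wave height divided by wave length
E \propto L\,H^{2}     % energy: proportional to wave length times the square of wave height
% Worked example: for a fixed wave length, doubling the wave height from 1 m to 2 m
% multiplies the wave energy by 2^2 = 4.
```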
https://revisionworld.com/a2-level-level-revision/geography-level-revision/coastal-environments/marine-processes/waves
Literature classes usually involve extensive discussion about whatever is being read in class at the time. Students are encouraged to think outside the box, to look deeper into the text and to try to come to a more profound understanding of the author’s intentions. Moreover, the students are usually encouraged to engage the text, to discuss it with the teacher and with each other, and to come up with “hot takes” on the material. Differing opinions are not only welcome, but encouraged. Here’s the thing though: as valuable as this process can be, it is also the ABSOLUTE WRONG WAY to approach the reading section of the SAT. Students should be very clear that it is NEVER their task to interpret the text. Actually, their task is really much simpler: to be able to recount WHAT THE TEXT ACTUALLY SAYS. It might surprise you to realize that the correct answer to an SAT reading question is as objectively correct as the correct answer to an SAT math question. The most common mistake that students make is transferring their approach to reading in the literature class to reading on the SAT. The end result is that the students often find themselves deliberating over the choices: “Gosh, choice A seems right, but so does choice B. And even C could work. And now that I look at it more, D is kind of tempting.” This wastes enormous amounts of time. Furthermore, there is a very good chance that you will often find a way to talk yourself out of the correct answer. The solution, therefore, is to stop the deliberating. Stop giving accommodation to choices that are clearly wrong. (And yes, the more practice you do, the more you will begin to see that the correct answer is objectively correct and undebatable.) What I often tell my students is that they need to approach the reading passages more like a lawyer would and less like a poet would. Not to disparage the poet—it’s just that the poetic approach is not going to be of much help on a test that is standardized. There is no better way to get good at the reading section than to work through as many reading passages and questions as possible. Along the way, the student will get more and more used to the “lawyer” approach and move further away from the “poet” approach. Another very important thing that will happen along the way is that the student will begin to acquire a larger vocabulary. Having a highly developed vocabulary is absolutely essential to success on the SAT reading section. I also use a number of vocabulary resources in an effort to enhance students’ vocabulary power.
https://sateach.com/2018/09/20/sat-reading/
The key technological achievement underlying the development and growth of the aerospace industry has been the design and development of efficient and economical propulsion systems. Major efforts are also now being dedicated to the development of new technologies relevant to the propfan and variable cycle engines. Aerospace Propulsion is a specialist option of the MSc in Thermal Power. Who is it for? This course has been designed for those seeking a career in the design, development, operation and maintenance of propulsion systems. Suitable for graduates seeking a challenging and rewarding career in an established international industry. Graduates are provided with the skills that allow them to deliver immediate benefits in a very demanding and rewarding workplace and therefore are in great demand. Why this course? This option is structured to enable you to pursue your own specific interests and career aspirations. You may choose from a range of optional modules and select an appropriate research project. An intensive two-week industrial management course is offered which assists in achieving exemptions from some engineering council requirements. You will gain a comprehensive background in the design and operation of different types of propulsion systems for aerospace applications, whilst looking at the methods of propulsion with the main focus on air-breathing engines and the use of gas turbines for propulsion. We have been at the forefront of postgraduate education in aerospace propulsion at Cranfield since 1946. We have a global reputation for our advanced postgraduate education, extensive research and applied continuing professional development. Our graduates secure relevant employment within six months of graduation, and you can be sure that your qualification will be valued and respected by employers around the world. Course details The taught programme for the Aerospace Propulsion masters consists of eight compulsory modules and up to six optional modules. The modules are generally delivered from October to April. Compulsory modules All the modules in the following list need to be taken as part of this course - Combustors - Engine Systems - Mechanical Design of Turbomachinery - Propulsion Systems Performance and Integration - Management for Technology - Gas Turbine Performance Simulation and Diagnostics - Turbomachinery and Blade Cooling Elective modules A selection of modules from the following list needs to be taken as part of this course - Fatigue and Fracture - Jet Engine Control - Gas Turbine Operations and Rotating Machines - Computational Fluid Dynamics for Gas Turbines Entry requirements A first or second class UK Honours degree (or equivalent) in engineering, mathematics, physics or an applied science. Applicants who do not fulfil the standard entry requirements can apply for the Pre-Masters programme, successful completion of which will qualify them for entry to this course for the second year of study. Your career Over 90% of the graduates of the course have found employment within the first year of course completion. Many of our graduates are employed in the following roles and industries: - Gas turbine engine manufacturers - Airframe manufacturers - Airline operators - Regulatory bodies - Aerospace/energy consultancies - Power production industries - Academia: doctoral studies About the School Cranfield's distinctive expertise is in our deep understanding of technology and management and how these work together to benefit the world.
https://www.masterstudies.com/MSc-in-Thermal-Power-Aerospace-Propulsion-Option/United-Kingdom/Cranfield-Uni/
Compounds of general formula (I):
R1-CH2-C(=O)-NH-(W)-NHZ (I)
in which
R1 denotes:
- either a radical ArO- in which Ar denotes an aryl, heteroaryl or heterocyclic ring,
- or an aryl, polyaryl or heterocyclic radical;
W denotes:
- either (CH2)n1,
- or (CH2)n2NY(CH2)n3, Y denoting a hydrogen or an alkyl, aryl, arylalkyl, CO2alkyl or COR2 radical in which R2 denotes an alkyl, alkynyl or aryl, arylalkyl or heterocyclic ring;
Z denotes:
- either hydrogen,
- or <IMAGE>
- or <IMAGE> in which R'2 denotes an alkyl, arylalkyl,
- or CO2alkyl,
- or CO2arylalkyl,
- or COR"2 in which R"2 can take one of the values of R2,
- or -CH2R3 in which R3 denotes an aryl, substituted heteroaryl or heterocyclic ring, or can take the values shown for R1,
- or -COCH2R'3 in which R'3 can take one of the values of R3.
The products of formula (I) exhibit fungicidal properties.
Refer to the product label for full dietary information, which may be available as an alternative product image.
Amount per serving
Calories 120
% Daily Values *
Total Fat 3g
Trans Fat 0g
Cholesterol 40.0mg
Sodium 660mg
Total Carbohydrate 5g
Dietary Fiber 0g
Sugars 0g
* The % Daily Value (DV) tells you how much a nutrient in a serving of food contributes to a daily diet
Calories per gram: Fat 9 • Carbohydrate 4 • Protein 4
Ingredients: PORK LOIN FILET CONTAINS UP TO 10% ADDED SOLUTION OF WATER, POTASSIUM CHLORIDE, VINEGAR AND NATURAL FLAVOR. RUBBED WITH SALT, SUGAR, YEAST EXTRACT, MALTODEXTRIN, DEHYDRATED GARLIC, SPICES, NATURAL FLAVORS, DEHYDRATED ONION, DEHYDRATED RED BELL PEPPER, LEMON JUICE SOLIDS, AUTOLYZED YEAST, DEXTRIN. PORK IS GLUTEN-FREE. SEASONINGS CONTAIN NO GLUTEN.
https://crowdedline.com/product/freshness-guaranteed-seasoned-pork-loin-filet-roasted-garlic-and-herb-1-8-3-0-lbs/
For centuries, the humble stevedore or dock worker was, almost single-handedly, responsible for managing the flow of maritime commerce around the world, loading and unloading ships at coastal hubs and securing them against the ever-present threat of smuggling, piracy and illegal immigration. In 2013, however, the war against organised crime and terrorism is increasingly being fought online, as port facilities rely instead upon networked computer and control systems to manage security – and this technology is under threat from resourceful hackers employed by criminal gangs.

One such attack on the Belgian port of Antwerp in 2013 threw the issue into sharp relief. During a two-year period beginning in 2011, drug traffickers based in the Netherlands concealed heroin and at least a tonne of cocaine with a street value of £130m inside legitimate shipping cargoes. The gang then recruited computer hackers to infiltrate IT systems controlling the movement and location of containers. Armed with this supposedly secure data, the traffickers were able to identify which containers contained the drugs and send in lorry drivers to steal them. Brazen in conception and execution, the attack has put port authorities on both sides of the Atlantic on alert. “[The case] is an example of how organised crime is becoming more enterprising,” Rob Wainwright, director of EU law enforcement agency Europol, told the BBC. “We have effectively a service-orientated industry where organised crime groups are paying for specialist hacking skills that they can acquire online.”

Cyber-terrorism to order: how hackers infiltrated the port of Antwerp

The multiphase attack has many of the hallmarks of an advanced persistent threat (APT), a form of internet-enabled espionage that pursues business or political targets over a prolonged period. The hackers began by emailing malicious software to staff at the port of Antwerp, enabling them to remotely access sensitive logistics data. When this security breach was discovered and a firewall installed, the perpetrators then broke into company offices and concealed sophisticated data interception hardware in everyday objects, such as cabling devices and computer hard drives. Key loggers, small devices not unlike USB sticks, were used to log keyboard strokes and screenshots from workstations, giving the traffickers a comprehensive record of everything that staff had typed. “After the port successfully detected the attack against their computer systems, they failed to map out other attack paths which allowed the attackers to achieve their objectives in this case,” said Alex Fidgen, director of UK-based IT security firm MWR InfoSecurity. “This demonstrates how important it is to not only focus on single systems but get a full overview of your organisation and the potential weaknesses in penetration testing exercises.”

Coordinated software and hardware attacks that once targeted large financial institutions are now becoming more commonplace, as cyber-criminals look to infiltrate mainstream businesses. “This attack played out somewhat like an APT,” notes Fidgen. “They were apparently active for around two years, and were able to make use of advanced techniques with seemingly professional execution. However, this is what anyone can now buy on the black market as a service, so far from just being available to a nation state, anyone with money can purchase these services.
“It shows that the types of attacks like this aren’t hypothetical and businesses should be doing penetration testing exercises to make sure that they have not been compromised,” he added.

Mind the gap: report highlights lack of investment in US port security

A recent study in the US found that, despite millions of federal dollars being spent on port security, many coastal hubs remain ill-equipped to deal with the latest wave of cyber threats. The report by Coast Guard Commander Joseph Kramek also acknowledged that facilities were increasingly reliant on sophisticated technology to protect the uninterrupted flow of maritime commerce. “Unfortunately, this technological dependence has not been accompanied by clear cyber-security standards or authorities, leaving public, private and military facilities unprotected,” the study said. Published by the Brookings Institution, The Critical Infrastructure Gap: US Port Facilities and Cyber Vulnerabilities cited a recent US National Intelligence Estimate (NIE), which concluded a cyber attack on US port infrastructure – everything from data storage facilities and software controlling physical equipment to electronic communications – was as likely as a conventional one. “Security, even port security, is often divided into two domains: physical security and cyber security,” the study stated. “In today’s interconnected society, however, these two domains cannot be considered in isolation.” The report also noted that of the $2.6bn allocated to the post-9/11 Port Security Grant Programme (PSGP) in the past decade, less than $6m (less than one percent) was awarded to cyber-security projects. Forensic analysis of six major US ports revealed that only one – the Port of Long Beach in California – had conducted a cyber-security vulnerability assessment and not a single one had a response plan. The report lists a series of recommendations, chief among them legislation that gives the US Coast Guard authority to enforce cyber-security standards for maritime critical infrastructure, as well as increased funding from the PSGP to pay for cyber-security technology and training.

New frontiers: TALON 13, the virtual port and the wider threat

Governments are beginning to take the fight to the hackers in the form of leading-edge technologies designed to protect existing computer networks and neutralise kinetic – or actual armed – attacks. Heralded as ‘the future of port protection’, TALON 13 employs a ‘rapid contact designation and warning’ concept to counter attacks launched by small boats, underwater vehicles and divers. Part of the Nato Defence Against Terrorism (DAT) programme, TALON interprets data from underwater and land-based radar, sonar and cameras, assesses the threat level and sets up autonomous reactions ranging from an audible warning to the deployment of an entanglement device. A human operator monitors the system in real-time on a network tablet computer. In the US, the Port of Long Beach reports two to three ‘cyber storms’ a year caused by hackers using distributed denial of service (DDOS) or other volume-type attack methods. In response, the facility is developing the virtual port system, a computer network that integrates secure data from federal agencies and private terminal operators.
It has also banned commercial internet traffic from its network; invested nearly $1m in commercial applications to monitor network activity, intrusions and firewalls; mapped its networked systems and access points; designated controlled access areas for its servers and backed up and replicated key data off-site. The Brookings Institution report notes that 95% of US trade is handled by ports, while international maritime trade exceeds 30% of the nation’s gross domestic product. “In certain ports, a cyber disruption affecting energy supplies would likely send not just a ripple but a shockwave through the US and even global economy,” the study said.

Nor is the threat confined to the US and Europe. In October, a malicious computer programme known as a Trojan Horse was credited with disabling security cameras in the Carmel Tunnels toll road in northern Israel, causing massive traffic congestion for more than eight hours. In his State of the Union address, US President Barack Obama acknowledged that cyber attacks against transport hubs and critical infrastructure were increasing in volume and sophistication. “America must also face the rapidly growing threat from cyber attacks… our enemies are also seeking the ability to sabotage our power grid, our financial institutions, our air traffic control systems,” he said. “We cannot look back years from now and wonder why we did nothing in the face of real threats to our security and our economy.”
https://www.ship-technology.com/analysis/feature-cybersecurity-port-computer-hackers-us-belgium/
Acute liver failure is estimated to occur in up to 3,100 individuals annually in the United States. In the example of troglitazone, at least 28–40 cases of acute liver failure were reported to the FDA in the first year on the market from a population of about one million taking the drug. However, several databases (Medicaid, HMO) suggest that hospitalization for cryptic acute hepatitis occurs in about 1:20,000–1:110,000 adult individuals in the general population each year (39–21). Thus, when a new drug is marketed and more than a few cases of unexplained acute liver failure are reported, concern should be raised. About 11–18% of these cases are idiopathic with a resulting annual incidence of one to two cases per million individuals in the population.

, Influence of enteric parasitism on hormone-regulated pancreatic secretion in dogs, American Journal of Physiology, 297, R322–7. And Castro, G.A. (1975a), Influence of parasitism on secretin-inhibited gastric secretion, American Journal of Tropical Medicine and Hygiene, 24, 884–8. , Changes in the protein of the serum and intestinal mucus of sheep with reference to the histology of the gut and immunological response to Oesophagostomum columbianum infections, Parasitology, 37, 191–18. Dembinski, A.B., Johnson, L.R.

Hence sensitivity to low densities of malaria or babesia parasites is a host characteristic, not a reflection of some innate parasite pathogenicity. Yet there is no illness in mice at less than 30% parasitaemia; the same principle holds for Babesia microti, which causes a malaria-like disease in man at very low parasite densities. When patients and monkeys infected with P. falciparum are compared, this parasite, so toxic to humans, is much less harmful for a given parasite load to those monkey species it can infect. P. falciparum can sometimes cause the red cells it inhabits to adhere to certain endothelial cells in man but not, evidently, in Aotus monkeys. Any model for the pathogenesis of malarial illness and pathology should take these observations into account.

HSC in these conditions undergo a trans-differentiation from the original “storing or quiescent phenotype” to that of activated MFs, classically including the following relevant features (Friedman, 1997; Pinzani and Marra, 1999; Bataller and Brenner, 2003; Pinzani and Rombouts, 2004): a) a high proliferative attitude; b) increased synthesis of ECM components, particularly fibrillar collagens, as well as of factors involved in ECM remodeling.

Charles McBurney was an American surgeon born in 1845. He presented his treatise on the area of greatest abdominal pain during appendicitis in 1909. His point of maximal tenderness is located over an area at the distal two-thirds along an axis drawn from the umbilicus to the anterior superior iliac spine. C-reactive protein is elevated as well, with a sensitivity of 63% and a specificity of 70%. 4. Where and what is the McBurney point? 2. What are the psoas and obturator signs?

Behavioral genetic investigators use large samples that permit analysis of small but meaningful individual differences and complex multiple interactions that further clarify trait patterns.20 Twin pairs, some reared together and others reared apart, are assessed for personality trait similarities and differences. Behavior inhibition is said to be mediated by serotonin, behavioral activation by dopamine, and behavioral maintenance by norepinephrine.20 These traits are found in many species, including humans, and are strongly and independently inheritable. A centerpiece of this work is the application of behavioral genetics to personality structure and to individual differences in personality traits. (1989) further developed the concept and proposed the temperament dimensions of novelty seeking, harm avoidance and reward dependence.
http://www.buse.ac.zw/t3-assets/rx/viagra-patent-uspto.html
4AM Forum for Architecture and Media represents an open platform of experimental approaches and a wide range of activities related to architecture, urbanism, urban space, contemporary art, and new media. Within such an interdisciplinary framework, and with emphasis on involving both professionals and the general public, current cultural and social phenomena and related issues and questions are articulated and critically examined through open discussions, international workshops, lectures, exhibitions and events held in public venues. Part of the Forum for Architecture and Media is the physical space of 4AM/Cabinet, a venue for meetings and an open work laboratory with a library including international magazines and publications about architecture, contemporary art and the new media. Another part of the 4AM space is the 4AM/Media lab – a background and live workshop for activities within the framework of contemporary visual and media art, a place for workshops, discussions and a mediatheque.
http://myartguides.com/art-spaces/non-profit/4am-forum-for-architecture-and-media/
This page defines terms as they are used within the context of the United States Geoscience Information Network (USGIN). Vocabulary terms that are used only in a specialized context (such as a specific tutorial or a specific NGDS content model) are not defined here, but are instead defined within the specific instance in which they are used. Likewise, geoscience vocabulary is not defined here.

ArcGIS: Proprietary geographic information system (GIS) software created by ESRI.

Attribute (GIS): Within the context of geographic information systems, attributes describe features; for example, a fault feature might be described by several attributes. Attributes are often stored in databases and subdivided into records and fields; consequently, attributes can also be expressed as markup language elements.

Attribute (Markup Language): Within the context of a markup language, attributes modify markup language elements. For a much more detailed overview of attributes, see the USGIN XML Tutorial.

Binding: An explicit logical association between any two things. For example, a binding can exist between two resources; a binding can also exist between a location and the resource found there.

Capabilities Document: An XML document that describes the capabilities of a web service.

Content Model: Within the context of USGIN, the National Geothermal Data System (NGDS), and the AASG Geothermal Data project, content models are Excel workbooks which contain template spreadsheets. Content models provide NGDS schemas that are designed to facilitate interoperability. Any data submitted for the AASG Geothermal Data project by Arizona Geological Survey subrecipients must be structured by an NGDS schema provided by an NGDS content model.

Data: Data constitutes observations or measurements that are used to describe things; data often describes features. Though the terms data and information are often used interchangeably, data technically indicates raw observations, while information connotes interpreted observations. A discrete cluster of related data is known as a dataset.

Database: A method of storing data. In a database, data is divided up into database records; in turn, database records are divided up into database fields. The advantage of a database is that it can be sorted and searched by field contents. Though modern databases are usually digital, a physical example of a database is a card catalog in a public library. In a card catalog, data is divided up into individual cards, which are directly analogous to database records. Each card (record) in the catalog corresponds with and describes a book. The information about each book is divided up into fields: title; author; subject; publication date; etc. Digital databases can be in tabular format (that is, a table), in which rows represent individual records and columns constitute fields; or they can be viewed record by record.

Database Field: A subdivision of a database record in which a specific type of data is entered. Using the analogy of a card catalog in a public library: if the card catalog is directly analogous to a database, and if the cards in the catalog are directly analogous to database records, then the different subdivisions of information found in each card in the catalog (title, author, publishing date, etc.) are all database fields. Database fields are functionally equivalent to markup language elements.
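To make the card-catalog analogy above concrete, here is a minimal Python sketch (the field names and sample books are invented for illustration, not taken from any USGIN dataset) showing records as dictionaries of fields, and how a database can be searched and sorted by field contents:

```python
# A tiny "card catalog": each dictionary is a record, each key is a field.
catalog = [
    {"title": "Geology of Arizona", "author": "Smith", "subject": "geology", "year": 1998},
    {"title": "Geothermal Systems", "author": "Lee", "subject": "geothermal", "year": 2005},
    {"title": "Intro to GIS", "author": "Garcia", "subject": "GIS", "year": 2010},
]

# Search by field contents: all records whose "subject" field is "geothermal".
geothermal_books = [record for record in catalog if record["subject"] == "geothermal"]

# Sort by field contents: order the whole catalog by publication year.
by_year = sorted(catalog, key=lambda record: record["year"])

print(geothermal_books)
print([record["title"] for record in by_year])
```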
Database Key: A database field designed to contain values that are used to organize and maintain the uniqueness of records within a database. Database keys identify records in such a way that they can be referenced by other databases; consequently, database keys allow databases to refer to records in other databases.

Database Record: A subdivision of a database. Using the analogy of a card catalog in a public library, each record in a database is analogous to an index card in the card catalog; each card (record) corresponds with and describes an individual book. Database records are further subdivided into fields, which contain specific kinds of data.

Database View: In the context of a database, a view is a selection of fields. For example: given a database with twelve fields, an arbitrary grouping of four such fields would constitute a view of the database. A more concrete example is a card catalog in a public library: if one chose to look only at the Title field of each card in the catalog, the act of doing so would constitute a discrete view of the catalog.

Dereference: Verb. To display that which is referenced. For example: academic papers and articles often cite other papers or articles as sources; the act of following one of these references and displaying the source document is the act of dereferencing. In the context of the World Wide Web, dereferencing usually involves the act of displaying the document to which a given hyperlink refers.

Element: Elements are logical document components found in markup languages such as XML and HTML. Elements simultaneously define the structure and content of a document. An element is demarcated by markup language tags: the first tag opens the element, the second tag closes it, and everything between the opening tag and the closing tag constitutes the content of the element. As a more concrete example, HTML uses the <em> element to demarcate text that should be emphasized in a web browser. Markup language elements do not always need an opening tag and a closing tag, because some elements can close themselves by including a space followed by a forward slash (/) within the element. Markup language elements are functionally similar to database fields, in that both serve to subdivide the content of a document. For a much more detailed overview of elements, see the USGIN XML Tutorial.

Feature: A feature can be any of several things, so the definition of the term depends on the context within which it is used. Note: GIS features do not always correspond with geologic features, because GIS software can be used to represent anthropogenic objects such as buildings, roads, or canals.

Feature Class: A feature class can be either a method of storing GIS features of the same geometry (point, line, or polygon), or a discretionary or subjective grouping of homogeneous GIS features. For example, highways, primary roads, and secondary roads can be grouped into a line feature class named "roads."

Geographic Information Systems (GIS): A system designed to capture, store, manipulate, analyze, manage, and present all types of geographically referenced data.

HTML: HTML stands for Hypertext Markup Language. HTML is the predominant language in which web pages are written.

HTTP: HTTP stands for Hypertext Transfer Protocol.
HTTP is the networking protocol that is used to transfer information over the World Wide Web. HTTP defines four basic operations (requests) made by clients to servers, commonly GET, POST, PUT, and DELETE; these requests correspond to the standard database CRUD operations (read, create, update, and delete, respectively). HTTP also defines a variety of header parameters that may be included with requests; these header parameters specify language, desired media type for the response, character encoding, time stamps for resources, etc. In addition, HTTP defines a collection of codes automatically used in response to HTTP requests; these codes indicate various success, error, or redirect conditions. A particularly infamous HTTP code is 404: Not Found. HTTP is defined by an Internet Engineering Task Force (IETF) Request for Comment (RFC) document: http://tools.ietf.org/html/rfc2616.

Identifier: An identifier is a label that is used to distinguish one thing from another. To identify something is to give it a label that distinguishes it from other things. Functionally, identifiers can be compared to names: we give people, places, and things names to distinguish them from one another. A URI is a specific kind of identifier.

Interchange Format: Interchange formats are file formats that can be used to exchange data between hardware platforms and software applications, regardless of platform or application configuration. A useful example can be found in modern printers: files sent to the printer are exported in a format that all printers can read; this format constitutes an interchange format. Interchange formats facilitate interoperability. From a technical perspective, an interchange format is a document written in a specific syntax and structured by a schema. As web services are used for data exchange, web-accessible data that is formatted and structured for deployment as a web service can be said to be an implementation of an interchange format. Likewise, data that has been conformed to an application-neutral schema or file format constitutes an interchange format.

Interoperability: The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units. Often, interoperability is facilitated by structuring data in such a way that it is consistently machine-processable without user interpretation or input. For more information on interoperability as a goal of USGIN, see the USGIN Objectives page.

Mapping (Data): In the context of data, mapping is the process of interpreting and restructuring data. Often, data mapping takes place from one schema to another, a process referred to as schema mapping. Schema mapping is typically accomplished by conforming data to fit the structure of a given document. A simple example of schema mapping is the conversion of dates in a given document from the MM-DD-YYYY format to the YYYY-MM-DD format. Another example would be the conversion of units of measure from inches to meters, or converting unit notation from millimeters to mm. Often, schema mapping is slightly more complex than these examples would indicate: sometimes data must be mapped from a single field into multiple fields, for example mapping dates from a single Date field into three separate fields corresponding with Day, Month, and Year. Likewise, schema mapping sometimes involves combining data from multiple fields into one field (a rough sketch of these operations follows below).
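As a small illustration of the schema-mapping operations just described, here is a Python sketch (the field names Date, Day, Month, and Year follow the examples in the entry above; everything else is invented for illustration) that converts an MM-DD-YYYY date to YYYY-MM-DD and splits a single Date field into separate Day, Month, and Year fields:

```python
from datetime import datetime

def map_record(record):
    """Map a record from a source schema (single MM-DD-YYYY 'Date' field)
    to a target schema (ISO 'Date' plus separate 'Day', 'Month', 'Year' fields)."""
    parsed = datetime.strptime(record["Date"], "%m-%d-%Y")
    return {
        "Date": parsed.strftime("%Y-%m-%d"),   # format conversion
        "Day": parsed.day,                     # one field split into
        "Month": parsed.month,                 # three separate fields
        "Year": parsed.year,
    }

source_record = {"Date": "12-07-1941"}
print(map_record(source_record))
# {'Date': '1941-12-07', 'Day': 7, 'Month': 12, 'Year': 1941}
```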
The word mapping can also be used as a noun: a mapping is an instance in which data has been mapped from one schema to another.

Markup Language: Markup languages, such as HTML and XML, use elements to structure documents in such a way that the structure of the document is visible and readily distinguishable from the content of the document. Consequently, markup language documents can be used to store data, because the visible structure of the document permits users to subdivide data into elements in much the same way that data in a database is subdivided into records and fields.

Metadata: Literally "beyond data," metadata is often conceived of as "data about data." Any data used to organize, categorize, locate, or discover something is metadata. Because metadata is merely data that is used to find something, metadata can be (and often is) stored in databases. For more information about metadata, see the USGIN Metadata Tutorial.

Open-Source: Software can be considered open-source when it complies with the criteria of the Open Source Initiative; a detailed list of these conditions may be found on the Open Source Initiative website.

Profile: A profile is a limited, specific implementation of a standard. Standards often permit a wide array of possible implementations; profiles implement a specific configuration of values selected from the range of values provided by the standard. For example: MPEG-4 is an international audio-video encoding standard (ISO/IEC CD 14496), and MPEG-4 Part 2 deals specifically with encoding video. MPEG-4 Part 2 specifies that media should be encoded between 64 and 8,000 kilobits of visual data per second (kbit/s), a range of data rates that accommodates anything from the audio stream of a digital telephone to the video stream of a DVD (at 40,000 kbit/s, Blu-ray video is well beyond the scope of MPEG-4 Part 2 and instead conforms to Part 10 of the MPEG-4 standard). To simplify things, the MPEG-4 Part 2 standard lists several specific implementations, or profiles, each of which specifies a maximum bitrate (in concert with other variables, such as frame rate, that are not included here). So, digital video encoded at 1200 kbit/s would conform to Level 3b of the Advanced Simple Profile of the MPEG-4 Part 2 standard.

Protocol: In the context of computing, a protocol is a special set of rules that enable communication between two computers. A useful physical analogy is a traffic light: on green lights, cars pass through intersections; on red lights, cars are not permitted to pass through an intersection. The rules represented by traffic lights can be considered traffic protocols; the rules for computer-related tasks such as data transfer are computing protocols. HTTP is one example of a computing protocol.

Raster: Rasters use a data model in which data, usually images or continuous datasets, can be stored and represented visually as values within a grid of cells. Raster grid cells are assigned values that represent specific properties; these grid cell values usually can be decoded as colors. Consequently, a raster dataset is rather like a sheet of graph paper in which each cell contains a color that corresponds with the data represented by the cell. The resolution of a raster dataset is the number of cells on the X and Y axes of the raster grid. Raster datasets with which most users will be familiar are digital images, including JPEG, TIFF, or GIF images.
The individual raster grid cells of these images are referred to as pixels. If the resolution of such images is large enough, individual pixels will be difficult to discern with the naked eye (depending on the scale at which the image is viewed). The amount of color data that can be stored in a given pixel depends on the format of the raster image. Common raster formats are JPEG, TIFF, GIF, PNG, and BMP. Raster images are optimal primarily for the storage and display of continuous datasets, which model phenomena without distinct boundaries, such as temperature gradients over a given area. Continuous datasets are difficult to display as vectors. A disadvantage of rasters is that raster image files can be very large, depending on the resolution, color depth, and compression of the image. Compare: vectors.

Resource: A feature that fulfills a specific requirement. Almost anything can be a resource, as long as it is identifiable and fulfills a requirement.

Schema: From a practical perspective, schemas structure documents. For example, a database is one form of document, and a database schema describes how its contents are organized. As a practical example, a database schema might determine whether dates should be entered in a given field according to the MM-DD-YYYY or YYYY-MM-DD format. Database schemas thereby dictate where and how data should be entered into a database. Schema validation is the process of checking data in a database against a schema. A database record containing data that has not been entered in accordance with the appropriate schema, such as data that has not been entered into the appropriate column or formatted in the appropriate way, is invalid.

Server: A server is a computing platform capable of listening for external requests (from clients) and serving responses to those requests; this interaction is referred to as the client-server relationship. The ability to serve requests is usually conferred by installing server software on a computing platform and then configuring the computing platform appropriately (though individual applications are also capable of functioning as servers on their own). Server software designed to create a web server (a server capable of serving HTTP requests) is called web server software.

String

Subrecipient: Subrecipients are the state geological surveys (or equivalent state agencies) that are subcontracted by the Arizona Geological Survey (AZGS) under Department of Energy (DOE) contract No. DE-EE0002850 for the National Geothermal Data System (NGDS) project to perform the subcontract requirement to make "at risk" geothermal data available online to promote geothermal development throughout the United States.

Syntax: A ruleset that governs the construction of phrases in a human- or machine-readable language.

Token: A token is a discrete, logical, non-elementary component of an information stream (here, non-elementary means that a token is not irreducible and can be reduced to smaller components). For example, a sentence is an information string; words are non-elementary components of a sentence and are therefore tokens. Computing information streams, such as URIs, can also be broken down into tokens: the components of a USGIN URI are tokens. For more information about USGIN URIs, see the USGIN URI tutorial.

Uniform Resource Identifier (URI): Uniquely identifies a resource; in the context of USGIN, URIs typically identify database records, features, and vocabulary terms.
USGIN URIs are also designed to dereference to representations of the resources they identify. See the USGIN URI Tutorial for more information about URIs.

Vector: Vectors use a data model in which data, usually categorical or discrete data, is stored and represented visually as coordinate points, or vertices. A single vertex is a point; multiple vertices strung together can form lines (referred to as arcs in ESRI products), polygons, or even three-dimensional objects in virtual space; these vector-based objects can be rasterized with relative ease. GIS software takes advantage of the characteristics of vector data to generate maps in which the features on the map are defined by vertices. Each vertex is georeferenced, often taking advantage of Global Positioning System (GPS) satellites; each feature on the map is then described by attributes, which provide the user with information that can be used to locate features and perform geospatial analysis. Vector datasets are flexible, and their file size is usually small: they grow in size only as more data is added to the dataset. The primary disadvantage of vector datasets is their inability to represent phenomena without distinct boundaries. Compare: rasters.

Web Server Software: Web server software is a software package that allows a computer to listen for, and respond to, incoming HTTP requests. An appropriately configured computer on which web server software has been installed can act as a web server.

Web Service: A web service has two components. In addition, the term web service is often applied to data hosted using an application that provides web services. Web services facilitate interoperability by allowing the client and the server to develop independently: regardless of client or server make and model, and regardless of changes to content on the server, a web service will be able to respond to requests as long as those requests are made using correct syntax. The Open Geospatial Consortium has produced several different flavors of web service that are relevant to geographic information systems, USGIN, the National Geothermal Data System (NGDS), and the AASG Geothermal Data project.

XML: XML stands for Extensible Markup Language. XML acts as the basis for more specialized markup languages such as GML, GeoSciML, and KML. Because XML elements are functionally similar to database fields, USGIN specifies the usage of XML documents as an interchange format for database records. To use XML documents as an interchange format, USGIN defines XML schemas in which each XML document corresponds with a specific database record and each element in a given XML document corresponds with data entered in a database field; in these documents, elements define the database fields (for example, a Date field containing the date 12/7/1941 would appear as a Date element in the XML document, as sketched below). For a much more detailed overview of XML, see the USGIN XML Tutorial.
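As a rough sketch of the record-to-XML correspondence described in the XML entry (the element names, and the assumption that the lost inline example used an element literally named Date, are illustrative guesses rather than the actual USGIN schema), the following Python snippet serializes a simple database record to an XML document using only the standard library:

```python
import xml.etree.ElementTree as ET

# One database record; each key is a database field.
record = {"Title": "Well log 42", "Date": "12/7/1941", "State": "Arizona"}

# Build an XML document in which each element corresponds to one field.
root = ET.Element("Record")
for field, value in record.items():
    child = ET.SubElement(root, field)  # e.g. <Date>...</Date>
    child.text = value

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
# <Record><Title>Well log 42</Title><Date>12/7/1941</Date><State>Arizona</State></Record>
```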
http://tech.usgin.org/glossary
Interview with Luca Burinato, head of Amref's Corporate Partnerships

New episode of our section dedicated to "Defining interviews". This month we talk with Luca Burinato, head of Corporate Partnerships at Amref.

Many companies are afraid of positioning themselves on social issues for fear of striking the wrong communication register or committing mistakes that can be amplified via social media: how can supporting a certain cause improve the position of some companies with respect to their market?

I believe that in no way can positioning oneself on social issues lead to mistakes. Each company is right to communicate its commitment according to its own values and feelings, provided that this is a conscious choice and not one dictated by the logic of return in terms of mere profit and visibility. That time is over: now is the time for companies to be aware that supporting social and environmental causes (and eliminating the negative consequences of their actions) is a collective responsibility. An unsustainable, unjust and inequitable world not only must no longer exist from an ethical point of view, but also carries with it a very high risk of instability.

There is a lot of talk about the "Impact Economy", that is, a growth model aimed at generating positive social impact: what should be the characteristics of this model, in your opinion?

We live in a historical moment where inequalities have reached unacceptable levels. In order to respond effectively to social needs, it is necessary to rely on new schemes. The Impact Economy model can therefore transform finance and investments from speculative instruments into generators of value. I imagine this model characterized by companies stimulated to create business models whose ambition corresponds to the demands of investors, the market and customers, with the latter active in demanding a concrete benefit for the community. The essential pillars of this model are vision and the ability to look to the future with high-quality planning which, in addition to finance, involves all the main players: the public sector, companies, and the private social sector. A model of this kind needs a system of social and environmental measurement and assessment that is integrated into all business activities and central to government policies, market operations, investor behavior and consumer choices: an integrated financial accounting that offers concrete tools to transform the current capitalist model, effectively excluding social washing and greenwashing from the game.

For many years, collaboration between for-profit and non-profit organizations has been a reality: do you think there are any risks that both the non-profit organization and the company may face in establishing a partnership? If so, which are the main ones?

I have worked a lot on this front for Amref, and the KOKONO™ project underway with De-LAB is a further development, a true hybridization. Personally, I don't see risks, only opportunities. Certainly, a cultural change is needed on both fronts, overcoming the natural resistance that emerges when it is necessary to change and rework mechanisms to which we have always been accustomed.
The other issue concerns large donors (institutions and foundations): it would be really important for them to align themselves with the changed scenario, readjusting the current donation mechanisms, which are designed exclusively for non-profit organizations.

COVID-19 is putting a strain on developed and developing countries alike: what are the biggest difficulties that a health organization like Amref is facing in countries like Kenya or Uganda, where the health system was already fragile before the arrival of the pandemic? What is your contribution on the ground?

Amref has been at the forefront of COVID-19 preparedness and response actions since the beginning of the pandemic. The difficulties encountered have been operational ones: carrying out the other programs while taking the restrictions into account, reaching the most remote areas, and supporting an already very fragile health system. In addition, a number of challenges have emerged related to the impact that the COVID-19 pandemic has had on key health areas, including HIV, female genital mutilation, early and forced marriages, and the socio-economic wellbeing of young people, as well as the responses of governments that in some cases have exploited their legitimacy to reduce the freedom of citizens, in contexts where the rule of law is still very fragile.

COVID and the consequent economic crisis are affecting all economic sectors: which financial instruments should be rethought or improved in order to help the Third Sector and social impact enterprises?

One of these is the introduction of Solidarity Certificates, which for the first time formally bring social finance into the third sector system, two worlds long kept separate. In this way, investments in projects with a strong social impact are encouraged. In addition to ad hoc financial instruments, what would greatly help the development of the third sector is a paradigm shift on the part of institutional donors and international organizations, shifting attention from individual items of expenditure to financing by results, linked to real and effective change in the medium to long term.

In your opinion, what will be the elements that will characterize the post-COVID nonprofit world?

What I hope is that the third sector will not be put in competition with the State and the for-profit world, but will be seen as a subject capable of contaminating both: on the one hand by supporting the transformation of capitalism and transferring the ability to put social objectives ahead of profit objectives, and on the other by bringing out the creativity and bottom-up innovation that characterize its actions in response to an inadequate welfare system.

If you were given the opportunity tomorrow to enact a law in the area you cover: what would be your first move? What would you activate?

It is partly related to what I said earlier: the first thing I would do would be to put third sector entities on an equal footing with other, for-profit entities. Let's take the example of awarding relief grants. If we look at the requirements for obtaining the non-refundable financing, we immediately understand that the norm is calibrated on a for-profit model that does not take into account the type of revenue of non-profit entities. Those revenues, though generally reduced, support the carrying out of activities of general interest and support for the most vulnerable.
However, since these are of a non-commercial nature (mostly donations, but also membership fees, etc.), they are not classified as revenues and therefore do not meet the eligibility requirements.

If Amref could make an appeal to Italian citizens in this moment of crisis, what would you say?

Be resilient, but also proactive, and be protagonists of a future that belongs to us. Each one of us must feel active in a new participatory and inclusive model that overcomes the inequalities in our societies. See you next month with the new interview!
https://delab.it/en/interview-with-luca-burinato-head-of-amrefs-corporate-partnerships/
The strength and durability of concrete are impacted if there is a high silt content in the sand, which is why it is extremely important to test sand for silt. The main issue with sea sand is its excessive salt content: if it is used in reinforced concrete production, it severely reduces the structure's durability because of steel corrosion. Work on mitigating the effect of the clay content of sand on concrete has examined sand samples with clay/silt contents varying from 0% to 10%, using 20 mm coarse aggregate and pit sand as the fine aggregate.

Silt vs clay: the word soil, when used in normal contexts, just refers to that on which we all stand. However, engineers define soil (in construction) as any earth material that can be moved without blasting, while geologists define it as rocks or sediments altered by weathering.

Aggregates for concrete are a combination of gravels or crushed stone with particles predominantly larger than 5 mm (0.2 in.) and generally between 9.5 mm and 37.5 mm (3/8 in. and 1 1/2 in.). Some natural aggregate deposits, called pit-run gravel, consist of gravel and sand that can be readily used in concrete after minimal processing. Natural gravel and sand are usually dug or dredged from a pit.

With lower silt and clay contents, the use of river sand improves the quality control of concrete and mortar production, because the presence of too much silt and/or clay adversely affects workability and strength. Previous studies indicate that silt fines affect the durability of concrete, especially when the silt fine content is more than 5% (Cho, 2013).

A study on the effect of using silica sand as a fine material in concrete (Kerai Jignesh and Shraddha R. Vaniya, Darshan Institute of Engineering and Technology, Rajkot) argues that the use of alternative aggregates such as silica sand is a natural step in solving part of the natural sand shortage. With manufactured sand marketed as a material complying with certain recognized specifications, it is then up to the design engineers or concrete producers to specify ordinary crushed rock fines.

In one study, the 28-day compressive strength of all-in-aggregate concrete decreased at 1% and 5% silt contents, whereas it increased at 10% silt, compared with silt-free concrete exposed to continuous air curing. It was also observed that water curing of all-in-aggregate concrete dramatically improved its compressive strength compared with air curing.

The retreat of river banks is the product of a combination of subaerial and fluvial erosion processes coupled with mass failure mechanisms (e.g. Thorne, 1982; Lawler et al., 1997; Lawler et al., 1999). Other work has found that concrete strength is inversely proportional to the quantity of clay particles contained in the aggregates. The sand available in a river bed can be very coarse and contain a very large percentage of silt and clay; the silt and clay present in the sand reduce the strength of the concrete and result in bulking of the sand when it is subjected to moisture. New types of crushed sand have been developed to replace natural sand in concrete production.
The availability of natural sand for concrete production is facing challenges, while the so-called waste stockpiles at aggregate crushing areas are causing problems for producers, which is one reason crushed sand is being considered as a replacement.

According to IS 383 and IS 2386 (Part II), clay, fine silt and fine dust, when determined in accordance with the standard, should be not more than 5% by mass in natural sand, crushed gravel sand or crushed stone sand. Organic impurities, when determined in accordance with IS 2386 (Part II), should give a liquid colour lighter than that specified in the code.

Failure of concrete structures leading to the collapse of buildings has initiated various studies on the quality of construction materials. Collapse of buildings, resulting in injuries and loss of lives and investments, has been largely attributed to the use of poor-quality concrete ingredients. Information on the silt, clay and organic impurity content of building sand supplied in Nairobi County and its environs, and on its effect on the compressive strength of concrete, was lacking; the objective of one study was therefore to establish the level of silt, clay and organic impurities present in building sand and their effect on the compressive strength of concrete.

To mitigate the effect of the clay/silt content of sand, the sand can be washed free of clay/silt, or the cement content can be increased in proportion to the percentage of clay/silt in the sand. Related work has examined the effect of mineral fillers on the properties of concrete, where a mineral filler dissimilar to clay replaced sand, and the effect of clay coatings on coarse aggregate, which originate when water-soluble materials precipitate from sand or gravel deposits and which affect concrete performance.

The effect of the fineness of sand on the permeability of concrete has been determined using cylinder specimens 150 mm in diameter and 160 mm high, subjected to a water pressure of 7 kg/cm2 for 96 hours in a permeability apparatus. Hadassa Baum and Amnon Katz studied the percentage of fines in crushed sand and its effects relative to river-sand and crushed-granite-sand concrete.

Silt is granular material of a size between sand and clay, whose mineral origin is quartz and feldspar. Silt may occur as a soil (often mixed with sand or clay) or as sediment mixed in suspension with water (also known as a suspended load) in a body of water such as a river. Given the high proportion of silt fines found in the river sands of Taiwan, research there has investigated the impact of the material on the properties of concrete, using specimens with a w/c ratio of 0.48 and different silt contents.

Sand plays a vital role in every part of construction (concrete, plastering, brickwork, flooring), so it is essential to ensure that we are using good-quality sand; a simple silt-content check is sketched below.
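A common field check for silt is a simple sedimentation (jar) test. The sketch below assumes that procedure and uses the 5% acceptance figure quoted from IS 383 above as its limit; both the method and the threshold are assumptions here and should be checked against the applicable standard and test procedure before use:

```python
def silt_content_percent(silt_layer_mm: float, sand_layer_mm: float) -> float:
    """Approximate silt content from a settled jar sample:
    height of the silt layer divided by the height of the sand layer, as a percentage."""
    if sand_layer_mm <= 0:
        raise ValueError("sand layer height must be positive")
    return 100.0 * silt_layer_mm / sand_layer_mm

# Example: 6 mm of silt settled above 90 mm of sand.
content = silt_content_percent(silt_layer_mm=6, sand_layer_mm=90)
limit = 5.0  # assumed acceptance limit, taken from the IS 383 figure quoted above
verdict = "acceptable" if content <= limit else "wash or reject the sand"
print(f"Silt content: {content:.1f}% -> {verdict}")
```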
https://www.doitperfect.nl/2015/14748/effect-of-silt-clay-in-river-sand-in-concrete.html
Dr. Seung is a professor of computational neuroscience at MIT. A leading researcher in the emerging field of connectomics – the field of study that is attempting to map the connections in our brains – he has made important advances in artificial intelligence, neuroscience, neuroeconomics, and statistical physics. His recently published book, Connectome, does just what its subtitle says: it explains how the brain's wiring makes us who we are. With implications for personality, identity, intelligence, memory, and even disorders such as autism and schizophrenia, this is a wide-ranging and fascinating book on a subject that would be right at home in a sci-fi movie but is in fact real! Don't miss this discussion!
https://www.blogtalkradio.com:443/thinkatheist/2012/02/13/episode-45-dr-sebastian-seung-feb-12-2012
Creative Commons License: This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Date of Graduation: Summer 2012
Document Type: Thesis
Degree Name: Master of Arts (MA)
Department: Department of Graduate Psychology

Abstract
Caffeine is the most commonly consumed drug in the world. Although its effects are relatively mild when consumed in moderate amounts, there exist cases where caffeine use is problematic. Currently no behavioral intervention for problematic caffeine consumption exists in which caffeine use is verified beyond self-reports, and no measures of caffeine dependence and withdrawal exist either. The current study examined the viability of contingency management (CM), an empirically supported behavioral intervention for reducing drug use, for initiating abstinence from caffeine consumption among college students of varying levels of use, as well as validity evidence for novel measures of caffeine dependence and withdrawal. Participants (N = 39) came into the lab for 3 experimental sessions in an ABA design over the course of 5 to 7 days to complete the AUTOC, CWS, and SCEWS and to provide saliva samples. During the brief abstinence test (BAT), participants could earn a higher-magnitude reward ($20) for abstaining from caffeine; 95% of participants met criteria for abstinence during the BAT. The ELISA appeared to work at an aggregate level, though individual samples were inconsistent enough to prevent these results from being used as a criterion for caffeine abstinence. AUTOC, CWS, and SCEWS scores functioned moderately well for measuring caffeine dependence and withdrawal. These results indicate CM of caffeine use may be effective for intervening with problematic caffeine consumption.

Recommended Citation
Joachim, Bradley T., "Assessing the utility of a brief Abstinence Test for initiating caffeine abstinence" (2012). Masters Theses, 2010-2019. 243.
https://commons.lib.jmu.edu/master201019/243/
While the difficulties we've collectively faced as individuals, businesses, and a society in 2020 are impossible to ignore, this year and, more specifically, the COVID-19 pandemic have also ushered in some silver linings in how we approach communication technology and helping (as opposed to merely selling to) customers. In particular, this pandemic has highlighted the universal need for easy, digital-based access to health care and education, and the importance of safety and security across both. Because of this, more companies are creating technology that hands the reins over to their customers to access information on their own terms. In seeing this shift, we've identified seven major ways that businesses have used COVID-19 as a time to build a better customer experience model focused on autonomy. Scroll to find examples of how innovative organizations are using tech to make people healthier, safer, and more educated about the world today. Self-service—addressing common …

- Oversized impact: Survey reveals how COVID-19 accelerated innovation in patient engagement, expanded technologies in healthcare
In the wake of COVID-19, decade-long digital transformation roadmaps got compressed to just weeks, and for some, even days. The impact on healthcare, for both obvious and less apparent reasons, was massive. To better understand the effects of COVID-19 on businesses, Twilio surveyed 100 healthcare enterprise decision makers in the US, UK, Germany, Australia, France, Spain, Italy, Japan, and Singapore about how COVID-19 is impacting their digital engagement strategies. The survey uncovered some powerful insights, including:
- The healthcare industry in general was the most affected by COVID-19—among the overwhelming majority of respondents who said coronavirus accelerated their digital strategy, it did so by six years on average;
- Purse strings loosened as organizations looked to quickly implement better digital communication strategies;
- Healthcare providers implemented a variety of new technologies to meet this unprecedented demand and change, including expanding telehealth opportunities, self-service, chat, and more.
Let's explore further. An accelerant unlike any other …

- COVID-19 shatters myths about companies and their digital transformation: Introducing the Twilio COVID-19 Digital Engagement Report
In the last 20 years, companies have transformed as they adapt to new customer engagement demands born out of the internet and the growth of mobile. The onset of COVID-19 and its impact on the entire world, though, led companies to shrink decade-long digital transformation roadmaps to just months—even weeks. To better understand the effects of COVID-19 on businesses, Twilio surveyed more than 2,500 enterprise decision makers in the US, UK, Germany, Australia, France, Spain, Italy, Japan, and Singapore about how COVID-19 is impacting their digital engagement strategies. The responses show that as companies reacted rapidly to ensure business continuity in the face of a global pandemic, three long-held myths about digital transformation were shattered:
- Digital transformation requires lengthy advance planning;
- Digital transformation takes a long time;
- Massive barriers to evolution prevent change from happening.
Social distancing has given rise to a sudden expansion in digital customer engagement: 92 …

- How Mount Sinai is building patient engagement to drive value-based care
In the midst of the COVID-19 pandemic, health services teams around the world are trying to serve patients and their families without putting their health at risk with in-person visits. Telehealth is making care accessible for those recovering from and managing a symptomatic case of coronavirus, as well as those requiring healthcare services for other health issues. In New York City, one of the areas hardest hit by the virus early on, major healthcare providers have adapted rapidly to meet patient demand while keeping patients and providers alike safe. These patient-centric trends, though, were underway long before COVID-19, and have effectively been accelerated by the "lightbulb moment" created by the novel coronavirus. Twilio's Global Head of Healthcare Services Susan Collins spoke with David Kerwar, Chief Product Officer and head of consumer digital innovation at the Mount Sinai health system—one of the largest health systems in New York—to discuss how …

- 5 trends shaping how healthcare providers meet the demand for telehealth and provide better patient engagement amid COVID-19 and beyond
COVID-19 has spurred rapid evolution within healthcare. Providers of all sizes have had to reimagine patient journeys and scale services unlike ever before, and the latest data shows the vast majority of patients—72 percent—have changed their healthcare habits due to the coronavirus. For many providers, this unprecedented change has led them to treat patients a lot more like consumers. Telehealth in particular is promoting ease, convenience, and speed that many patients will want to see continue even after the coronavirus is contained: McKinsey research found that 50 percent of US consumers aged 18-84 surveyed plan to continue using telehealth for physical and mental health post-COVID-19. Learn how changing consumer and patient expectations are shaping the future of healthcare in this report. Here are five critical ways healthcare providers are adapting to the new normal of patient engagement, triaging, and treatment. Health screenings: To minimize the spread of the …

- SMS & HIPAA: How to handle texting at a medical practice
If you are a physician or manage a medical practice, sending SMS messages to patients carries a lot of upside. These forms of communication can minimize no-show appointments, improve interactions between providers and patients, and even provide more effective care/dosage instructions—all while saving your practice resources. But healthcare SMS and text messaging is far from a turnkey process. Because the information contained in these messages could be considered protected health information (PHI), sending SMS messages needs to comply with the strict requirements outlined in HIPAA. In the event of a breach, PHI could be exposed, and your practice could face penalties and fines. Breaches can also damage patient trust in the practice or physician. Let's examine the potential benefits of adding SMS to your medical practice communications, how to do so in a manner that supports HIPAA compliance, and key considerations when choosing a HIPAA-eligible SMS provider. How …

- 3 ways healthcare service providers are reimagining the patient journey during COVID-19
The COVID-19 pandemic is accelerating the transformation of healthcare services to treat patients more like consumers.
For many providers, the sudden need for telehealth has been a lightbulb moment. Adapting to COVID-19 has seen the creation of solutions that meet pre-existing consumer expectations and preferences, and thus will outlast containment of the virus. Get the ebook all about a consumer-centric model for healthcare delivery. Here are three ways healthcare service providers are reimagining the patient journey during COVID-19 and beyond. Scaling and automating appointment reminders: Traditionally, scheduling, confirming, and changing appointments has created operational overhead for providers—not to mention a poor experience for patients. In the US alone, missed appointments cost the healthcare system $150 billion a year. There are two common ways to scale automated appointment reminders, either by phone or SMS:
- Conventional SMS appointment reminders are sent as one-way mass text messages that patients cannot act …

- For release: Oracle works with Twilio to connect COVID-19 patients and physicians through the therapeutic learning system
Twilio has teamed up with Oracle to equip healthcare providers and agencies with the real-time communication needed to understand and combat COVID-19. Oracle collaborated with Twilio to power the SMS and email communications layer of its Therapeutic Learning System (TLS), which allows physicians and patients to record the effectiveness of promising COVID-19 drug therapies. TLS provides a way to study COVID-19 patient outcomes in aggregate, with the goal of helping researchers understand which drugs are most effective against COVID-19. The collaboration between Oracle and Twilio connects physicians and patients receiving COVID-19 treatments from the safety of their own homes through the Therapeutic Learning System application running on Oracle Cloud Infrastructure. Using Twilio's communication APIs and Oracle Application Express (APEX), Oracle's developer teams were able to prototype a solution in three days and launched it in twelve. The embedded solution uses Twilio's programmable messaging and email APIs to communicate …

- Our lightbulb moment: COVID-19 will promote people-driven healthcare
There's a famous saying about innovation: "It's not about improving the candle, but rather, building a light bulb." For decades, healthcare has talked about becoming consumer driven. Initiatives to that end, for the most part, have been incremental in nature: aka, building a better candle. We've installed kiosks in our reception areas, put up billboards with live wait times, and provided apps and portals for limited self-service. Improvements? Sure. But they don't represent a shift to a truly consumer driven model—no lightbulb quite yet. Meanwhile, time flies by, and in the US, the traditional approach has become unsustainable. At almost 20 percent of GDP, our costs are among the highest in the world. Despite that spend, outcomes are nowhere near the top of the class. So, we hear a lot of talk about the pursuit of the triple aim of reducing costs, improving outcomes, and raising patient satisfaction (now, properly …

- Challenge yourself—and your assumptions—or your product is doomed: Insight from Trek Medics founder Jason Friesen
Unlike your typical tech founder, you're less likely to find Jason Friesen talking through slides in a corporate board room than traveling between ambulance and fire stations, emergency call centers, and ministries of health in far-flung places.
That's because he's the founder and executive director (and former full-time paramedic) behind Trek Medics, a nonprofit that develops pre-hospital and emergency care systems in low- and middle-income regions including Tanzania, the Dominican Republic, and Haiti. Their mission? Reduce preventable death and disability by improving access to emergency care for at-risk and vulnerable populations through mobile phone technologies. Tune into any news channel and it's clear: even the richest countries and populations in the world struggle to guarantee access to rapid medical response during emergencies, and often hit logistical and technological snags doing so. Just imagine the breadth and scope of challenges facing providers serving at-risk populations in lower-income regions.
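To make the SMS appointment-reminder idea discussed in the pieces above concrete, here is a minimal sketch using Twilio's Python helper library. The credentials and phone numbers are placeholders, and, in line with the SMS & HIPAA discussion above, the message body deliberately avoids any protected health information; this is an illustrative sketch, not Twilio's or any practice's production setup.

```python
from twilio.rest import Client

# Placeholder credentials: in practice these come from your Twilio console or environment.
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token = "your_auth_token"

client = Client(account_sid, auth_token)

# Keep the body generic (no diagnosis, medication, or other PHI) so the reminder
# stays on the safe side of HIPAA; the patient can call to confirm or reschedule.
message = client.messages.create(
    body="Reminder: you have an appointment tomorrow at 10:00 AM. "
         "Reply C to confirm or call us to reschedule.",
    from_="+15017122661",   # your Twilio number (placeholder)
    to="+15558675310",      # patient's number (placeholder)
)

print(message.sid)  # unique identifier of the queued message
```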
https://www.twilio.com/hub/tag/healthcare
ASML US brings together the most creative minds in science and technology to develop lithography machines that are key to producing faster, cheaper, more energy-efficient microchips. We design, develop, integrate, market and service these advanced machines, which enable our customers - the world's leading chipmakers - to reduce the size and increase the functionality of their microchips, which in turn leads to smaller, more powerful consumer electronics. Our headquarters are in Veldhoven, the Netherlands, and we have 18 office locations around the United States, including main offices in Chandler, Arizona; San Jose and San Diego, California; Wilton, Connecticut; and Hillsboro, Oregon.

Job Description
As a Mechanical Design Engineer for electron beam systems, you will perform a variety of engineering tasks including designing, developing, testing, analyzing, troubleshooting and implementing sub-assemblies and modules as you create and evaluate designs to meet the requirements of specified applications. Your design activities will include structural analysis, ultra-high vacuum, high voltage, thermal analysis, and tolerances. By tapping into your engineering acumen, you will research and select future key suppliers for consideration as turn-key providers of sub-systems, as well as develop resolutions to critical issues and broad design matters. You will work on issues that impact design/selling success or address future concepts, products, or technologies.

DUTIES AND RESPONSIBILITIES
- Create the electron beam mechanical module's element performance specification and element design specification.
- Develop mechanical designs for electron beam systems and subsystems working in ultra-high vacuum and high-voltage environments, from concept to completion and final release to manufacturing.
- Create detailed 3D models and 2D drawings of the electron beam system using NX, to be used for part manufacturing.
- Lead design reviews, error budget development, risk assessment and mitigation, design and execution of experiments, verification processes, and validation processes.
- Analyze/select materials and verify design accuracy.
- Transfer designs to manufacturing by providing the necessary technical processing documents.
- Work with vendors and consultants to develop, analyze and fabricate components and sub-assemblies.
- Perform first-order hand calculations and computer simulations to verify the functionality of parts and mechanisms. Work cross-functionally and drive implementation of design best practices into product design.
- Provide effective presentations and written evaluations to communicate concepts, designs, test results, etc.

Education
Master's degree or Bachelor's degree in Mechanical Engineering

Experience
- Must have a good understanding of physics and mechanics.
- Strong hands-on skills.
- Must be proficient with 3D modeling and 2D detailed drawings using SolidWorks and NX.
- Experience in designing parts and assemblies for ultra-high vacuum.
- Experience in designing parts and assemblies for high-voltage environments.
- Knowledge of materials and material selection, and of the manufacturing and cleaning processes used for making components for electron beam inspection systems.
- Ability to perform structural and thermal FEA using NX.
- Team spirit; works well with team members.
- Good capability to evaluate and work with outside contractors, consultants, and machine shops.
- Familiar with mechanical manufacturing processes including machining, sheet metal fabrication, welding, etc.
- Experience in the design, fabrication, and testing of mechanical and electro-mechanical modules, and related system interfaces, for capital equipment tools.
- 2+ years of applicable, relevant experience on a semiconductor equipment mechanical product is desired.
- Excellent written and verbal communication skills.
- Experience with product lifecycle management tools, Siemens TCE, desired.
https://www.asml.com/en/careers/find-your-job/1/3/1/sr-mechanical-engineer-req13142
Chickenpox, which is also known as varicella, is a viral disease that is extremely contagious, especially among children, and is characterized by an early eruption of vesicles and papules, mild constitutional disturbances and fever. It was not until the late 19th century that physicians were able to distinguish chickenpox (varicella) from smallpox. However, in the 16th century, Giovanni Filippo made the first description of chickenpox, and in 1767 William Heberden came up with a demonstration differentiating chickenpox from smallpox. Rudolf Steiner conducted further investigations and in 1875 reported that chickenpox was caused by an infectious agent, while in 1909 von Bokay drew attention to the relationship between varicella and herpes zoster (Adams et al., 2017). In 1954, Thomas Weller isolated the causative agent, the varicella virus, and Michiaki Takahashi went on to make the first live attenuated vaccine against the virus in 1972.

Chickenpox Description
Chickenpox is an airborne infection caused by the varicella-zoster virus (VZV). The virus can be transmitted from an infected person through sneezing or coughing, with inhalation of the viral particles by a healthy individual. The disease can also be spread by inhaling or touching virus particles that have been shed from the body, and indirectly through contact with items which were in direct contact with active blisters. People who have never contracted the disease before and have not been vaccinated are at high risk of acquiring the infection. An infected person is considered contagious from one to two days before the eruption of the rash until scabs have formed over the blisters. Symptoms might take between 14 and 16 days to appear after exposure to the virus (Lo et al., 2019). The initial symptoms resemble the common cold, with fever, running nose, malaise, and sore throat. Soon after these symptoms occur, red spots start appearing on the body, usually on the scalp and trunk first before spreading outward. The mucous membranes can also be affected, with spots noted along the nasal passages and in the mouth. The spots then develop into vesicles, elevated bumps with clear, teardrop-shaped blisters, which turn rapidly, within six to eight hours, into crusty lesions (Lo et al., 2019). New spots continue to develop as the old ones heal and disappear; it takes up to 5 to 6 days for new lesions to stop developing, and all the crusts start disappearing after about twenty days. Intense itching is a common symptom of chickenpox; it creates an almost irresistible urge to scratch and may lead to complications such as cellulitis, a bacterial skin infection. Other complications that might result from chickenpox include pneumonia, invasive group A streptococcal infection, sepsis, necrotizing fasciitis, encephalitis and Reye syndrome (Driesen et al., 2015). Most of these complications develop in individuals with a weakened immune system. The diagnosis of chickenpox is based on symptoms such as rash and fever. If the physician needs confirmation of the diagnosis, fluid from a lesion is taken to the lab and cultured for about 5 to 10 days. Chickenpox treatment is based on managing the symptoms of the infection, such as itchiness and fever, as most cases are uncomplicated and resolve on their own within about two to three weeks.
Antihistamines such as Atarax and Benadryl are mainly recommended for the management of pruritus (Lo et al., 2019). Wet compresses and topical treatment are used to provide immediate relief for the blisters, while the fever is managed with antipyretic and anti-inflammatory drugs such as acetaminophen or ibuprofen. The use of aspirin, however, is contraindicated because of the risk of Reye's syndrome. Given that chickenpox is a viral infection, antiviral agents such as valacyclovir, famciclovir, and acyclovir are recommended for managing the infection; they reduce the intensity of the itching and speed up the healing of the skin lesions, with the overall outcome of shortening the duration of the infection. For maximum effectiveness of treatment, the patient should start medication therapy within 24 hours of the initial appearance of the first rash (Lo et al., 2019). Non-pharmacological measures include taking an adequate amount of fluids and electrolytes to prevent dehydration as a result of the infection. Frequent bathing and keeping fingernails short, among other hygienic habits, are recommended as prophylactic measures.

The chickenpox vaccine, Varivax, was approved by the FDA in 1995. The vaccine is administered to children above 1 year of age and has proven to be about 90% effective in preventing infection by the virus. For individuals who develop the infection even after being vaccinated, the symptoms are usually mild compared to those who have never received the vaccine. The Varivax vaccine is also effective in protecting people against diseases such as shingles, an extremely painful rash caused by the same varicella-zoster virus, which lies dormant in nerve tissue around the brain and spinal cord in people who have had chickenpox and can easily be reactivated (Gemmill, Quach-Thanh, Dayneka, & Public Health Agency of Canada, 2016). Chickenpox occurs mostly in children and adolescents below the age of 15 years, with most cases among children between 1 and 4 years of age. Before the vaccine was developed, varicella was one of the most common diseases among children, amounting to about 4 million cases annually, with an average of 105 deaths and hospitalization of over 10,500 patients. According to the CDC, more than 95% of adults in the United States experience the infection at least once at some point in their lifetime (Lo et al., 2019). However, the development of the vaccine reduced the morbidity and mortality of the infection dramatically, with only a few individuals being affected, and with mild symptoms. Implementation of the two-dose vaccine for every child older than a year has greatly reduced outbreaks of the infection. As of 2003, the United States healthcare system recommended that all cases of chickenpox infection be reported (Giesecke, 2017). The CDC recommended three initial core variables based on the age of the patient, the severity of the disease and the patient's vaccination status. Additionally, it was recommended that demographic, epidemiologic and clinical data be reported, together with information regarding the outcome of the infection for the patient. The CDC formulated a varicella messaging guide to help respondents document information regarding the infection directly.
Social Determinants of Health The social determinants of health, as defined by Healthy People 2020, are the conditions within the environments in which individuals are born, raised, learn, play, work, worship, and age that affect a variety of human functioning, health, and general quality of life, in addition to their risks and outcomes. Some examples of social determinants of health that contribute to the development of chickenpox include access to healthcare and educational services, availability of resources, public safety, and cultural beliefs, among others (Lo et al., 2019). Research has also shown that personal behavior and genetic factors cannot fully explain the health disparities of infectious diseases on their own. Complex, overlapping, and integrated social structures and economic systems also need to be reviewed to fully understand the morbidity and mortality of a given infectious disease (Currie, World Health Organization, & Health Behaviour in School-aged Children, 2012). People without health insurance coverage or with low incomes tend to be at high risk of developing chickenpox infection, as they may fail to seek medical attention on a regular basis due to the high costs. Similarly, language barriers or shortages in the distribution of the vaccine to certain communities can contribute greatly to the development and spread of chickenpox infection. Individuals who are unable to acquire a proper education may also fall victim to the infection due to a lack of knowledge about the importance of vaccinating their children at the right time and the measures to take when a loved one is infected. Given that chickenpox is contagious, social structures that promote education and public awareness are recommended to equip community members with preventive measures, such as good hygiene, to limit the spread of the virus. The Epidemiologic Triangle The epidemiologic triangle is composed of three main aspects: the susceptible individual or host, the environment, and the causative agent. The triangle is crucial when analyzing the natural history of a particular disease. The information analyzed using the epidemiologic triangle is especially valuable when trying to prevent the spread of a contagious disease. For instance, controlling and preventing the spread of a contagious disease starts by identifying the weak points in the triangle in order to eliminate the threat. This can be achieved by implementing prevention activities and by working to reduce the seriousness of the disease, considering variables such as disease severity, illness length, treatment costs, long-term and short-term effects, and the risk of death as a result of the infection (Gershon, 2018). The epidemiologic triangle helps connect the causative agent of chickenpox, the varicella-zoster virus, with its primary host, humans. It also displays the modes of transmission, such as sneezing and coughing, which expose healthy individuals to respiratory droplets containing the virus that cause infection upon inhalation. Finally, the triangle links the infection to the times of the year when outbreaks are most likely. 
For instance, chickenpox outbreaks fluctuate seasonally in temperate regions, with cases peaking in winter and early spring. In the United States, the incidence of chickenpox is highest from March to May and lowest from September to November. Given this pattern, it is important for community members to be notified in case of an outbreak so that preventive measures can be put in place to reduce the spread of the infection. The Role Of The Community Health Nurse Community health nurses play a significant role in promoting the health and lifestyle of members of a given community. All community health nurses must have basic information on communicable diseases, with a clear understanding of the causative agents, incubation periods, modes of transmission, symptoms, preventive measures, and the available treatment options and care plans. Having this information helps the nurse focus on managing the symptoms of the infection while preventing its spread. Community health nurses are also required to raise patient and community awareness of the importance of chickenpox vaccination, how the disease is transmitted, how to manage the symptoms at home, and how to prevent the spread of the infection among community members. It is also the obligation of the community health nurse to identify chickenpox risk factors and to keep community members safe from the disease. Given that chickenpox is a highly contagious disease, the community health nurse will be required to make frequent reports using the guidelines provided by the CDC to analyze the progress of the infection and to come up with recommendations that can help reduce its morbidity (Harkness, & DeMarco, 2016). With adequate information, the community health nurse will be able to assess the treatment options that have been used to manage chickenpox and their outcomes from feedback provided by community members. As such, only productive methods will be reinforced in the future, with better interventions to make sure that the disease does not place a future burden on healthcare. National Agencies And Organizations Addressing Chickenpox The World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) are the main organizations that collect data and provide up-to-date information about chickenpox. The CDC has so far provided detailed information about the presenting signs and symptoms of chickenpox, how it is transmitted, the causative agent, complications, prevention and treatment options, and mortality and morbidity, among others. The organization has also provided a kid-friendly fact sheet which presents information about the disease to children in a friendly and interesting manner (Lo et al., 2019). The WHO, for its part, provides much the same information as the CDC, in addition to travel risks and measures to be taken when traveling to an area where chickenpox is endemic. Additionally, there are two main foundations dedicated to chickenpox, the National Shingles Foundation and the National Foundation for Infectious Diseases (NFID). The NFID, a nonprofit organization, was established mainly to help create public awareness and educate healthcare professionals on the causes, prevalence, treatment, and prevention of various contagious diseases such as chickenpox. The organization is aimed at promoting the health of individuals across the globe through education. 
The National Shingles Foundation, on the other hand, is also a non-profit organization, focused on fighting the varicella-zoster virus through education and research. The Global Implications of Chickenpox According to the WHO, chickenpox is an extremely contagious viral infection that is distributed worldwide. The universal childhood varicella vaccination program was introduced and implemented in the United States in 1995 to help prevent chickenpox epidemics. The vaccine is currently available across the world, but only in a few countries, such as Germany, Korea, Japan, and Australia, have high coverage rates been attained (Hirose, Gilio, Ferronato, & Ragazzi, 2016). In these countries, the vaccine has achieved significant outcomes, with a great reduction in morbidity and mortality rates from chickenpox. Nonetheless, several countries, particularly in the developing world, still experience shortages of the vaccine, so the disease remains a worldwide problem. The global chickenpox burden is still very high, at 4.2 million severe cases and 4,200 deaths yearly. In conclusion, the chickenpox burden was high before the varicella-zoster vaccine was introduced in 1995. The disease burden, however, decreased by about 90% between 1995 and 2005 (Hawker et al., 2018). Currently, only a few cases of the disease are being reported. However, there is still a need for continued public awareness of the importance of the vaccine and the preventive measures that can be taken to prevent the spread of the disease. New treatment options have also been introduced which are helping patients suffering from the disease manage their condition. Chickenpox remains a global concern and needs to be addressed with greater seriousness if the disease is to be curbed completely.
http://nursingassignmentwriter.com/epidemiology-paper-assignment-chickenpox-essay/
The collapse of civilizations has always attracted our attention: entire books, films and thousands of legends have been dedicated to it. What makes some human populations disappear and others not? Is it related to climate crises and pandemics like the current ones? The recent study Ecology of the collapse of Rapa Nui society involving CREAF researchers Olga Margalef and Sergi Pla-Rabés published in the journal Proceedings of the Royal Society B confirms that Easter Island’s aboriginal Rapa Nui society did not suddenly collapse, as was once thought. It actually underwent a gradual decline due to sharp population growth and climate changes to which the Rapa Nui were unable to adapt, preventing them from producing enough food. Analysis of climate data and archaeological remains, including charcoal from hearths and tooth collagen, has shaped the conclusions of a study that innovatively combines geology, ecology, population dynamics and archaeology. The study, to which researchers from the Jaume Almera Institute of Earth Sciences (ICTJA – CSIC), the University of Barcelona and the University of Oslo also contributed, reinforces the idea that “civilization collapses often coincide with major climate fluctuations”, as Sergi Pla-Rabés puts it. “The capacity of the population of an island, a city or any other ecosystem to produce the resources it needs to survive has an upper limit”, he explains. “Rapa Nui population dynamics were conditioned by the society’s capacity to increase its production of resources (cultivated area) and by the climate. The first population decline observed took place in the first half of the 15th century, when the island had a dry climate (in the Little Ice Age), which affected productivity and consequently reduced per-capita food availability. Societies can anticipate their resource requirements from one year to the next and adjust production accordingly, but a sudden external alteration, such as a change in climate, can restrict the extent to which they are able to mitigate adverse effects.” The more recent falls in Rapa Nui population size owe much to the arrival of Europeans (18th century), who brought epidemic diseases and the slave trade to Easter Island. Enlightening discoveries Analyses performed on charcoal and carbonized matter from hearths, human teeth and other archaeological remains at the Catholic University of Chile have painted a precise picture of when Easter Island’s population shrank and indicated the dynamics models it followed in doing so. Thanks to radiocarbon dating, the researchers were able to estimate energy consumption based on when fires were lit and the age of remains. “We have proposed a Rapa Nui population dynamics model that combines paleoenvironmental and paleoclimatic data, a very relevant contribution”, says CREAF geologist Olga Margalef. “Easter Island’s landscape changes were caused by its inhabitants’ land management practices, although regional and climate events in the Pacific Basin could have magnified or modulated them”, she adds. “Furthermore, the population crises studied also entailed significant religious and social changes. It could be said that new cultures emerged after each major crisis, because the way the Rapa Nui had previously viewed the world ceased to be valid.” Volcanic activity and tsunamis In all likelihood, extreme phenomena such as volcano eruptions and tsunamis exacerbated the Rapa Nui society’s demographic crises. 
Along with the effects of climate changes, per-capita food availability and Easter Island’s carrying capacity, such extreme events were key factors in the society’s decline. Based on its location, Easter Island probably suffered the consequences of great eruptions in the South Pacific in the 13th and 15th centuries, and would also have been exposed to tsunamis caused by earthquakes with epicentres on the coasts of Asia and the Americas. As the lead author of another study that looks at such matters, Olga Margalef explains that “the Rapa Nui were affected by extraordinary events, such as the 1960 Valdivia earthquake, generated in Chile, which caused a tsunami that reached Easter Island and moved the moai statues in its path a considerable distance inland. According to our calculations, tsunamis liable to affect Easter Island could have had a period of recurrence of under 100 years.” Such phenomena would have had a huge impact on the Rapa Nui, given that they initially lived in coastal areas and their economic activity revolved around fishing.
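The article's references to an "upper limit" on what the island could produce, and to per-capita food availability, are essentially the textbook notion of carrying capacity. As a purely illustrative sketch (the study's actual demographic model, which couples paleoclimatic and paleoenvironmental data, is more elaborate and is not reproduced here), the simplest way to write that idea down is the logistic growth equation with a time-varying carrying capacity:

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K(t)}\right),$$

where $N$ is the population size, $r$ the intrinsic growth rate, and $K(t)$ a carrying capacity that shifts with rainfall, cultivated area, and other conditions. A sustained drop in $K(t)$, such as the dry spell of the Little Ice Age described above, pushes the population down even if $r$ itself is unchanged, which is the qualitative behaviour the researchers describe for the first decline in the 15th century.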
https://blog.creaf.cat/en/noticies-en/climate-change-population-growth-key-factors-decline-easter-islands-civilization/
Course Description: The course covers external factors as they relate to bilingualism, including language policy and bilingual education, as well as internal, cognitive factors and the relationship between both. Readings focus on individual and societal bilingualism in Latin America, Spain, and the US. The course fulfills one of the two linguistics requirements for the major and the College Social Science requirement and is offered every other Fall semester. ***This course counts towards the Social Sciences requirement in the College.*** Course Goals: 1. Identify common patterns and differences among Spanish bilingual speech communities in the US, Latin America, and Spain. 2. Understand the external conditions, including language policy and bilingual education, that determine the development of individual bilingualism. 3. Understand the relationship between cognitive development and cognition and the individual's ability to speak two languages. 4. Critical thinking, Research Methods: Develop students' abilities to critically review the literature in the area, generate a hypothesis, and design an empirical study to test said hypothesis. 5. Language Skills: Develop students' academic Spanish, both oral and written, with special attention to the development of descriptive and argumentative abilities and manuscript preparation following APA guidelines. Topics covered: Bilingualism in Latin America, Spain, and the US; Language Policy: Language revival and reversal; Bilingual education: Types & effectiveness; Bilingual Education, multiculturalism and antiracism; Cognitive theories of bilingualism; Individual variables: level of bilingualism, biliteracy, and aptitude; The cognitive neuropsychology of bilingualism; Multilingualism: Learning a third language and beyond. Texts & Readings: Textbooks: Montrul, S. (2013). El bilingüismo en el mundo hispanohablante. Malden, MA: Wiley-Blackwell. A collection of book chapters and articles will be available in pdf format on Blackboard. Credits: 3.0 Prerequisites: None
http://courses.georgetown.edu/index.cfm?Action=View&CourseID=SPAN-313
Ceramic is a public, permissionless, open source protocol that provides computation, state transformations, and consensus for all types of data structures stored on the decentralized web. Ceramic's stream processing enables developers to build secure, trustless, censorship-resistant applications on top of dynamic information without trusted database servers. This overview introduces how: - Decentralized content computation gives rise to a new era of open source information - Stream processing provides an appropriate framework for dynamic, decentralized content - You can use Ceramic to replace your database with a truly decentralized alternative To skip ahead and get started building, try the Playground to demo Ceramic in a browser application, the Quick Start guide to learn the basics using the Ceramic CLI, or follow the Installation page to integrate Ceramic into your project. The internet of open source information At its core, the internet is a collection of applications running on stateful data sources – from identity systems and user tables to databases and feeds for storing all kinds of content generated by users, services, or machines. Most of the information on today's internet is locked away on application-specific database servers designed to protect data as a proprietary resource. Acting as trusted middlemen, applications make it difficult and opaque for others to access this information by requiring explicit permissions, one-off API integrations, and trust that returned state is correct. This siloed and competitive environment results in more friction for developers and worse experiences for users. Along other dimensions, the web has rapidly evolved into a more open source, composable, and collaborative ecosystem. We can observe this trend in open source software enabled by Git's distributed version control and in open source finance enabled by blockchain's double-spend protection. The same principles of open source have not yet been applied to content. The next wave of transformative innovation will be in applying the same open source principles to the world's information, unlocking a universe of content that can be frictionlessly shared across application or organizational boundaries. Achieving this requires a decentralized computation network designed specifically for content with flexibility, scalability, and composability as first class requirements. Decentralized content computation Open sourcing the content layer for applications requires deploying information to a public, permissionless environment where files can be stored, computation can be performed, state can be tracked, and others can easily access content. Advancements in other Web3 protocols have already achieved success in decentralized file storage. As a universal file system for the decentralized web, IPFS (including IPLD and Libp2p) provides an extremely flexible content naming and routing system. As a storage disk, durable persistence networks (such as Filecoin, Arweave, and Sia) ensure that the content represented in IPFS files is persisted and kept available. This stack of Web3 protocols performs well for storing static files, but on its own lacks the computation and state management capacity for more advanced database-like features such as mutability, version control, access control, and programmable logic. These are required to enable developers to build fully featured decentralized applications. 
Ceramic enables static files to be composed into higher-order mutable data structures, programmed to behave in any desired manner, whose resulting state is stored and replicated across a decentralized network of nodes. Ceramic builds upon and extends the IPFS file system and underlying persistence networks, as well as other open standards in the decentralized ecosystem, with a general-purpose decentralized content computation substrate. Due to Ceramic's permissionless design and unified global network, anyone in the world can openly create, discover, query, and build upon existing data without needing to trust a centralized server, integrate one-off APIs, or worry if the state of information being returned is correct. Streams Ceramic's decentralized content computation network is modeled after various stream processing frameworks found in Web2. In these types of systems, events are ingested, processed as they arrive, and the resulting output is applied to a log. When queried and reduced, this log represents the current state of a piece of information. This is an appropriate framework for conceptualizing how dynamic information should be modeled on the decentralized web. Furthermore, because the function that processes incoming events on any particular stream can be custom written with logic for any use case, it provides the general-purpose flexibility and extensibility needed to represent the diversity of information that may exist on the web. On Ceramic, each piece of information is represented as an append-only log of commits, called a Stream. Each stream is a DAG stored in IPLD, with an immutable name called a StreamID, and a verifiable state called a StreamState. Streams are similar in concept to Git trees, and each stream can be thought of as its own blockchain, ledger, or event log. StreamTypes Each stream must specify a StreamType, which is the processing logic used by the particular stream. A StreamType is essentially a function that is executed by a Ceramic node upon receipt of a new commit to the stream that governs the stream's state transitions and resulting output. StreamTypes are responsible for enforcing all rules and logic for the stream, such as data structure, content format, authentication or access control, and consensus algorithm. If an update does not conform to the logic specified by the StreamType, the update is disregarded. After applying a valid commit to the stream, the resulting StreamState is broadcast out to the rest of the nodes on the Ceramic Network. Each of the other nodes that are also maintaining this stream will update their StreamState to reflect this new transaction. Ceramic's flexible StreamTypes framework enables developers to deploy any kind of information that conforms to any set of arbitrary rules as a stateful stream of events. Ceramic clients come pre-packaged with a standard set of StreamTypes that cover a wide range of common use cases, making it easy to get started building applications: - Tile Document: a StreamType that stores a JSON document, providing similar functionality as a NoSQL document store. Tile Documents are frequently used as a database replacement for identity metadata (profiles, social graphs, reputation scores, linked social accounts), user-generated content (blog posts, social media, etc), indexes of other StreamIDs to form collections and user tables (IDX), DID documents, verifiable claims, and more. 
Tile Documents rely on DIDs for authentication and all valid updates to a stream must be signed by the DID that controls the stream. - CAIP-10 Link: a StreamType that stores a cryptographically verifiable proof that links a blockchain address to a DID. A DID can have an unlimited number of CAIP-10 Links that bind it to many different addresses on many different blockchain networks. CAIP-10 Links also rely on DIDs for authentication, the same as Tile Documents. - Custom: You can implement your own StreamType and deploy it to your Ceramic node if the pre-packaged StreamTypes are not suitable for your use case. Authentication StreamTypes are able to specify their authentication requirements for how new data is authorized to be added to a particular stream. Different StreamTypes may choose to implement different authentication requirements. One of the most powerful and important authentication mechanisms that Ceramic StreamTypes support is DIDs, the W3C standard for decentralized identifiers. DIDs are used by the default StreamTypes (Tile Documents and CAIP-10 Links). DIDs provide a way to go from a globally-unique, platform-agnostic string identifier to a DID document containing public keys for signature verification and encryption. Ceramic is capable of supporting any DID method implementation. Below, find the DID methods that are currently supported by Ceramic: - PKH DID Method: A DID method that natively supports blockchain accounts. DID documents are statically generated from a blockchain account, allowing blockchain accounts to sign, authorize and authenticate in DID based environments. - 3ID DID Method: A DID method that uses Ceramic's Tile Document StreamType to represent a mutable DID document. 3IDs are typically used for end-user accounts. When 3IDs are used in conjunction with the Identity Index protocol and the 3ID Keychain (as is implemented in 3ID Connect), a 3ID can easily be controlled with any number of blockchain accounts from any L1 or L2 network. This provides a way to unify a user's identity across all other platforms. - Key DID Method: A DID method statically generated from any Ed25519 key pair. Key DIDs are typically used for developer accounts. Key DID is lightweight, but the drawback is that its DID document is immutable and has no ability to rotate keys if it is compromised. - NFT DID Method (coming soon): A DID method for any NFT on any blockchain. The DID document is statically generated from on-chain data. The DID associated to the blockchain account of the asset's current owner (using CAIP-10 Links) is the only entity authorized to act on behalf of the NFT DID, authenticate in DID-based systems, and make updates to streams or other data owned by the NFT DID. When ownership of the NFT changes, so do the controller permissions. - Safe DID Method (coming soon): A DID method for a Gnosis Safe smart contract on any blockchain. Typically used for organizations, DAOs, and other multi-sig entities. Ceramic Network The Ceramic Network is a decentralized, worldwide network of nodes running the Ceramic protocol that communicate over a dedicated topic on the Libp2p peer-to-peer networking protocol. Ceramic is able to achieve maximum horizontal scalability, throughput, and performance due to its unique design. 
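Before going deeper into the network architecture, here is a minimal sketch of how the Tile Document StreamType and DID-based authentication described above might look from application code, using the Ceramic JS HTTP client. The package names and method signatures follow the Ceramic JS libraries at the time of writing but have changed between releases, so treat this as illustrative rather than canonical; the node URL and the document content are placeholders.

```typescript
import { CeramicClient } from '@ceramicnetwork/http-client'
import { TileDocument } from '@ceramicnetwork/stream-tile'
import { DID } from 'dids'
import { Ed25519Provider } from 'key-did-provider-ed25519'
import { getResolver } from 'key-did-resolver'
import { randomBytes } from 'crypto'

async function main() {
  // Connect to a (hypothetical) Ceramic node over HTTP.
  const ceramic = new CeramicClient('https://your-ceramic-node.example.com')

  // Authenticate with a Key DID (convenient for development scripts; end users
  // would typically use 3ID Connect or a PKH DID via a wallet instead).
  const seed = new Uint8Array(randomBytes(32))
  const did = new DID({
    provider: new Ed25519Provider(seed),
    resolver: getResolver(),
  })
  await did.authenticate()
  ceramic.did = did

  // Create a Tile Document stream; the commit is signed by the controlling DID.
  const doc = await TileDocument.create(ceramic, { title: 'hello', count: 1 })
  console.log('StreamID:', doc.id.toString())

  // Update it: a new signed commit is appended to the stream's log, and the
  // resulting StreamState is broadcast to other nodes maintaining the stream.
  await doc.update({ title: 'hello', count: 2 })
  console.log('Current content:', doc.content)
}

main().catch(console.error)
```

The Key DID here is simply the lightest-weight way to sign commits in a script; as noted above, its DID document is immutable and cannot rotate keys, so production applications generally authenticate users with 3ID or PKH DIDs instead.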
Sharded execution environment Unlike traditional blockchain systems where scalability is limited to a single global virtual execution environment (VM) and the state of a single ledger is shared between all nodes, each Ceramic node acts as an individual execution environment for performing computations and validating transactions on streams – there is no global ledger. This "built-in" execution sharding enables the Ceramic Network to scale horizontally to parallelize the processing of an increasing number of simultaneous stream transactions as the number of nodes on the network increases. Such a design is needed to handle the scale of the world's data, which is orders of magnitude greater than the throughput needed on a financial blockchain. Another benefit of this design is that a Ceramic node can perform stream transactions in an offline-first environment and then later sync updates with the rest of the network when it comes back online. Global namespace Since all nodes are part of the same Ceramic Network, every stream on Ceramic exists within a single global namespace where it can be accessed by any other node or referenced by any other stream. This creates a public data web of open source information. Additional node responsibilities In addition to executing stream transactions according to StreamType logic, Ceramic nodes also maintain a few other key responsibilities: - StreamState storage: A Ceramic node only persists StreamStates for the streams it cares to keep around, a process called "pinning." Different nodes will maintain StreamStates for different streams, but multiple nodes can maintain the state of a single stream. - Commit log storage: A Ceramic node maintains a local copy of all commits to the streams it is pinning. - Persistence connectors: Ceramic nodes can optionally utilize an additional durable storage backend for backing up commits for streams it is pinning. This can be any of the persistence networks mentioned above, including Filecoin, Arweave, Sia, etc. (coming soon). - Query responses: Ceramic nodes respond to stream queries from clients. If the node has the stream pinned, it will return the response; if not, it will ask the rest of the network for the stream over libp2p and then return the response. - Broadcasting transactions: When a Ceramic node successfully performs a transaction on a stream, it broadcasts this transaction out to the rest of the network over libp2p so other nodes also pinning this stream can update their StreamState to reflect this new transaction. Components of a Ceramic Node A fully functioning Ceramic Node consists of a Ceramic Instance with associated storages and requires a Ceramic Anchor Service (CAS) to be available on the network. The storage needs of a Ceramic Node include the Ceramic State Store and the IPFS repo. See persisting IPFS data for details. Clients Clients provide standard interfaces for performing transactions and queries on streams, and are installed into applications. Clients are also responsible for authenticating users and signing transactions. Currently there are three clients for Ceramic. Additional client implementations can easily be developed in other programming languages: - CLI: A command line interface for interacting with a Ceramic node. Getting started Try Ceramic To experience how Ceramic works in a browser application, try the Playground app. Installation Getting started with Ceramic is simple. 
Visit the Quick Start guide to learn the basics using the Ceramic CLI or follow the Installation page to integrate Ceramic into your project. Tools and services In addition to various standards referenced throughout this document, the Ceramic community has already begun developing many different open source protocols, tools, and services that simplify the experience of developing on Ceramic. Here are a few notable examples: - 3ID Connect: An authentication SDK for browser-based applications that allows your users to transact with Ceramic using their blockchain wallet. - Identity Index (IDX): A protocol for decentralized identity that allows a DID to aggregate an index of all their data from across all apps in one place. IDX enables user-centric data storage, discovery, and interoperability. It is effectively a decentralized, cross-platform user table. IDX can reference all data source types, including Ceramic streams and other peer-to-peer databases and files. - IdentityLink: A service that issues verifiable claims which prove a DID owns various other Web2 social accounts such as Twitter, Github, Discord, Discourse, Telegram, Instagram, etc. Once issued, claims are stored in the DID's Identity Index. - Documint: A browser-based IDE for creating and editing streams. - Tiles: An explorer for the Ceramic Network.
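Tying back to the node responsibilities described earlier (query responses and pinning), here is a sketch of what the read side might look like from the same JS HTTP client. As with the previous example, the exact client API has varied across Ceramic releases (older versions used loadDocument rather than loadStream), and the node URL and StreamID shown are placeholders, so treat this as an assumption-laden illustration rather than the definitive interface.

```typescript
import { CeramicClient } from '@ceramicnetwork/http-client'
import { TileDocument } from '@ceramicnetwork/stream-tile'

async function readAndPin(streamIdString: string) {
  const ceramic = new CeramicClient('https://your-ceramic-node.example.com')

  // Query: if this node has the stream pinned it answers from local state;
  // otherwise it asks the rest of the network over libp2p and then responds.
  // Reading does not require an authenticated DID.
  const doc = await TileDocument.load(ceramic, streamIdString)
  console.log('Content:', doc.content)
  console.log('Controllers:', doc.metadata.controllers)

  // Pin: ask this node to keep the StreamState and the commit log for this
  // stream around ("pinning"), so it stays available from this node.
  await ceramic.pin.add(doc.id)
}

// The StreamID below is a placeholder, not a real stream.
readAndPin('kjzl6cwe1jw14...example').catch(console.error)
```

Pinning corresponds to the StreamState and commit-log storage responsibilities listed above: a node only durably keeps the streams it pins, so anything your application depends on should be pinned by at least one node you (or a partner) operate.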
https://developers.ceramic.network/learn/advanced/overview/
Includes full-text and abstracts of scholarly, trade, and general-interest periodicals covering current events, general sciences and technology, social sciences, arts, and humanities. Examining major aspects of each ancient culture, such as its government, its economy, and its religious practices. A comprehensive collection of full-text biographies, as well as thousands of unique narrative biographies. Look for people, places, and things. Watch and learn with videos and animations. Have fun with games and activities. Get quick facts and in-depth information on a wide variety of subjects. Start research projects with multiple resources in one place. Find multimedia to use in projects and presentations. Find fast answers and homework help. Explore videos and articles on famous people and places. Discover maps, photos, and illustrations for school projects. Easily find information on companies, industries and more in the context of timely news, statistical data, and in-depth reports. An interactive career planning tool designed to help students to explore different career options, manage course selections online--and plan various pathways to meet the requirements for their desired career path. Country Reviews, Country Wire, Data, Maps, the Global Guide and the Political Intelligence Briefing and Wire. Grades 6-12 Award winning in-depth coverage of topical issues written by experienced journalists, footnoted and professionally fact-checked. A unique collection of five Interactive eBooks, gives students grades 3-6 hands-on experience in navigating the online world in a safe, controlled environment. Highlighted content from Explora relevant to teachers, including lesson plans, curriculum standards, and other professional development resources. This database provides over 25,000 encyclopedic entries covering a variety of subject areas. Gale resource portal. Ebook portal for Gale resources. Search newspapers, business journals and health journals. Access to ProQuest Databases. A collection of Mackin Ebooks available from the GVEP Media Library. This multi-source database provides access to the full text of nursing and allied health journals, plus the wide variety of personal health information sources. Contains full text for hundreds of science encyclopedias, reference books, periodicals, and other sources. A general reference resource for primary and elementary. An online research tool to track your sources, take notes, create outlines, collaborate with classmates, and format and print your bibliography. Facts and arguments on current events topics and social issues. Research information that supports animal classification, behaviors and habitats. Research information on the lives of important inventors, explorers, African Americans, Hispanic Americans, women and more. Research information that supports Earth Science, seasons, weather, and space. Research information about the world around you including families, maps and holidays. Inspires elementary and middle school learners about key earth and space science topics including earth cycles, ecosystems and biomes, energy and matter, landforms, maps, natural disasters, rocks and minerals, environmental issues, the scientific method, space, water, and weather and climate. 
Inspires elementary and middle school learners about key life science topics including animals; classification; endangered and extinct species; food chains and food webs; green living; habitats and ecosystems; the human body; life cycles; plants; and survival and adaptation. Informs and inspires learners about key physical science topics including atoms and molecules; elements and the periodic table; energy and matter; force and motion; and temperature and measurement. Primary resources using a traditional text interface. Searches many ProQuest databases from one interface. Created for middle-schoolers, Research in Context combines Gale reference content with age-appropriate videos, periodicals, primary sources, and more. Provides contextual information on hundreds of today's most significant science topics showing how scientific disciplines relate to real-world issues, from weather patterns to obesity. More than 2,000 original, comprehensive, scholar-signed essays covering the lives and works of more than 1,400 important authors from all time periods. Soundzabound royalty free music. Explore what life was like as the American colonies started and grew, including where colonists lived, whom they traded with, and how they survived. Learn about important figures and industries in colonial history, as well as the many struggles colonists faced. Explore the extraordinary history of the United States and the people, places, and events that helped shape our young nation. Discover the causes and significance of key events in American history and the beginnings of our government. A well-researched overview of the Empire State, including its geography, history, and industries. Full-text magazines, academic journals, news articles, primary source documents, images, videos, audio files and links to vetted websites. Discover online multimedia resources. Bring books and authors to life. Interactive eBooks for grades 7-12 will simulate a real-life Internet experience within the safety of an instructional, guided, and fun platform. Nonjudgmental, straightforward info for middle and high school covering diseases, drugs, alcohol, nutrition, mental health, suicide, bullying, green living, financial literacy, and more. Ebook collection that includes animated talking picture books, chapter books, videos, non-fiction titles, playlists, books in languages other than English such as French and Spanish, graphic novels and math stories. Supporting materials include lesson plans, quizzes, educational games and puzzles related to both math and language skills. Full text of 200 frequently used Twayne Literary Masters books on individual World, US, or English authors. A collection of highly illustrated, engaging titles that support a span of curriculum areas and reading levels.
http://fishforinfo.org/databases/livm
John Gribbin with Mary Gribbin, Stardust : Supernovae and Life—The Cosmic Connection. New Haven, Yale University Press, 2000. xviii + 238 pages; illustrated with photographs, line drawings, and graphs; includes "further reading" and index. It's now been said so often that it's a cliché: "Humans are made of stardust." It's a poetic statement of humanity's connection to the cosmos. It's a call to arms for biblical literalists. It's also the literal thesis of this book. We can easily forget that the phrase does have a literal meaning—or very nearly literal—that was once an exciting new discovery. In the mid-twentieth century, quantum theory was still young; our understanding of how the Sun continues to shine through nuclear fusion, and the fact that the universe is expanding, and therefore had a beginning and a discoverable age, were all fresh, new knowledge. Isn't it surprising that all of these facts have been known for less than 100 years? The story really begins in the 1920s, when astronomers began to appreciate that a star like the Sun is indeed, even today, largely made of hydrogen and helium—before that, they had assumed that stars were made of much the same sort of material as planets like the Earth, which is rich in iron, the most stable element. Beginning in the 1920s, the story of how we are made of stardust, and are therefore the children of the stars, involves the understanding of how stars themselves work that was developed over the next few decades. It is no coincidence that this understanding developed when it did, because it involved both the special theory of relativity and quantum physics, ideas which were themselves new to science in the early twentieth century. In the nineteenth century, the fact that stars stayed hot at all was one of the greatest puzzles confronting not just astronomy, but physics. [pp. x—xi] And so the central idea of the book is stellar nucleosynthesis: how the various elements come to be made in stars and supernovae through nuclear fusion in a process that is well understood now but was a novel idea in 1939 when Hans Bethe published his groundbreaking paper in Physical Review on "Energy Production in Stars". Unraveling the mystery of how stars continued to shine, where their energy came from, synthesized new knowledge of nuclear processes and astrophysics, and led to the realization that heavy elements were not made in the Big Bang but were created in stars and supernovae. This book explains the relationship between life and the Universe, from the Big Bang to the arrival of the molecules of life on the surface of the Earth. It is a complete and self-consistent story, describing our cosmic origins from stardust. But it is not necessarily the whole story of life and the Universe, and before delving into the details, I want to describe briefly some of the more intriguing current ideas that may, if they are proved correct, take us beyond the story so far. The caveat is that "intriguing" doesn't necessarily mean "correct." But science progresses by making reasonable speculations, then testing those speculations to see how well they stand up. And in a book which claims to offer the best available scientific evidence for our own origins, it would be derelict not to make it clear just how science arrives at these profound conclusions. [p. 
1] In a manner that readers of Gribbin's other books will recognize, the author makes this a mostly self-contained book about how stellar nucleosynthesis works and how we came to understand how it works, providing plenty of groundwork information and history to complete the story. He does this and keeps his writing lively, precise, and filled with scienticity. Sometimes that means a short digression on how science itself works, digressions that I thought enhanced his story. The history of science isn't always as neat and tidy as some of the accounts you read in books might suggest. Discoveries may come out of sequence, with notable insights that would have speeded the development of understanding sometimes not turning up for years, while on other occasions the relevance of a scientific discovery becomes clear only long after it is made. The parallel development, after about 1930, of the understanding of how stars work and of how the Universe came to be the way it is was particularly messy and confused, and although both developments depended on the new technology of improved telescopes and the new physics of quantum theory (which is why they proceeded in parallel), it took forty years for all the pieces to fit together into a self-consistent picture of how stars had evolved within an expanding Universe and where the elements that we are made of had come from. Remember that it was only at the end of the 1920s that astronomers even began to realize that stars are not made of the same stuff that the Earth is made of, and that their composition is dominated by hydrogen. At exactly the same time, Edwin Hubble and his colleague Milton Humason, working with the largest and best telescope then available on Earth, the 100-inch telescope on Mount Wilson, in California, discovered that the Universe is expanding. It was the discovery that would lead to the realization that the Universe had been born in a Big Bang, some 15 billion years ago, and that what had emerged from the Big Bang to form the first generation of stars was a mixture of roughly 75 percent hydrogen and 25 percent helium, with just a smattering of other light elements (including, crucially, deuterium). But that would not become clear until the end of the 1960s, after crucial developments in our understanding of how the heavier elements are made inside stars. [pp. 112—113] Because he wants his exposition about nucleosynthesis to be complete, some of the discussion in the earlier part of the book covers some ground that Gribbin has written on in other books. This is not surprising since cosmology and quantum physics are topics that he visits frequently; I never felt like he was just repeating himself without something new to say. It seems to me that Gribbin approaches each new book project with a fresh outlook, so new insights appear even with familiar material. I found several provocative and profound thoughts in this volume to keep me satisfied. In my view, one of the most profound discoveries made by science in the twentieth century is that the Milky Way Galaxy, which is, as far as we can tell, a typical representative of the myriad of galaxies that fill the Universe, is itself packed with the raw materials for life, and that these raw materials are the inevitable product of the process of star birth and star death. We have answered the biggest question of them all—where do we come from? [pp. 213—214] I very much enjoyed the reading of this book and the thinking that it induced in me. 
Of the books by Gribbin that I've read so far I think I had a slight preference for The Birth of Time, but that may be as much a matter of personal taste as anything.
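A footnote on the arithmetic at the heart of the story the book tells (standard textbook figures, not numbers taken from Gribbin's text): fusing hydrogen into helium converts roughly 0.7 percent of the mass involved into energy,

$$4\,{}^{1}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e}, \qquad \Delta m \approx 0.0287\ \mathrm{u} \approx 0.7\%, \qquad E = \Delta m\,c^{2} \approx 26.7\ \mathrm{MeV},$$

which is why a star can shine for billions of years on its hydrogen supply, and why the nineteenth-century puzzle of how stars stayed hot had to wait for relativity and nuclear physics before it could be solved.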
http://scienticity.net/wiki/Gribbin:_Stardust
For two years, a physician-led group of diverse healthcare stakeholders has been plotting to re-engineer healthcare delivery. The Health Care Delivery Policy Program, a forum created by the Kennedy School of Government of Harvard University, aims to examine the business practices of outside industries and apply them to healthcare. The research will be used to explore how measures of patient-centered productivity and severity-adjusted outcomes can be better organized around the business development of healthcare services. "We want to restructure the healthcare delivery system as a market-based service including a whole different set of structural pieces, including oversight," says Jerome Grossman, M.D., director of the Harvard consortium. A member of the Institute of Medicine since 1983, Grossman served on the quality committee that produced the recent IOM reports "To Err is Human" and "Crossing the Quality Chasm." The goal of his latest endeavor, which brings together healthcare academics, providers, purchasers, public and private payers and consumers, is to build from the IOM reports with investigations and tests of "delivery system clinical trials." "We've begun to identify charter health systems, like charter schools, to develop demonstrations to test these hypotheses," Grossman says. "Charters serve as a catalyst for change without overhauling the whole system. That is our strategy." Physicians will support this approach because it is voluntary, Grossman says. Model care With the 2001 enrollment of its first class, the new College of Medicine at Florida State University in Tallahassee, Fla., a Harvard partner, seized the occasion to develop new care models for a new breed of doctor, says Dean Joseph Scherger, M.D. One of the big shifts playing into medical education is the understanding that the complexity of modern medicine exceeds the limitations of an unaided human mind, Scherger says. "We can't deliver healthcare off the tops of our heads anymore," he says. "No one is able to deliver the latest best practice. Whether you're a specialist or a generalist, you need knowledge-management tools. We are teaching our medical students from day one that they shouldn't be trying to memorize everything." Each of Scherger's students is given a fully loaded handheld computer and a wireless laptop, carried at all times to check for clinical guidelines and drug interactions. The students are taking these devices directly into the community as part of standard practice. Expecting more Because there is no university hospital or university faculty practice group, all clinical education is through partnerships with existing providers. One such partnership is a planned undertaking between the medical school and the St. Joe Land Co., which is developing new communities on nearly 1 million acres of land in Florida's panhandle. Working with other local providers, FSU medical students will help put in place new models of supported care based on the IOM reports, Scherger says. As in Celebration, Disney's planned community near Orlando, Fla., the emphasis will be on patient-centered preventive care, health promotion and fitness. And just as charter schools are more demanding of family involvement, Grossman says charter health systems will expect more from patients. "You get a reduction in your health insurance premium if you don't smoke," he points out. "If patients are good compliers, should they not be given some deduction in their insurance? Patients will pay for performance. 
It is critical to recognize that this is not a one-way street." Financing Looking at the evolution of the 401(k) is useful in order to understand this shift in how consumers will participate in their healthcare decisions. Last August, Fidelity Employer Services Co. took over administration of the pension and health plans of all of IBM's 150,000 employees and 140,000 retirees in the United States. Even if one changes jobs, he or she would not have to change insurers, because Fidelity would maintain contracts with a large number of insurance companies. The Harvard group is monitoring this initiative, discussing how it could be a model for stable health insurance management. "The goal of the IBM/Fidelity trial is to change the way consumers and companies act in moving toward rollover and portability, like a 401(k)," Grossman says. "It is a significantly different strategy for the consumer than co-paying for the benefit, where the company keeps the money and is at risk for the money. It's starting out more as an engineering and business trial than medical only. We'll be there to follow it and they'll be able to bring observations and questions to the group."
https://www.modernhealthcare.com/article/20030201/MODERNPHYSICIAN/302010706/market-medical
Declarative programming forum at JavaRanch (CodeRanch), under the Paradigms category. Forum leader: Campbell Ritchie. Topic: "I Declar-ative this forum open" (13 replies, 1 week ago). Sibling Paradigms forums, with topic counts: Functional programming (662), Object-Oriented programming (58), Procedural programming (1), Imperative programming (3), Reactive Programming (147), Reversible programming (6), Multi-Paradigm Programming (0), Stack based programming (4), Logic programming (43).
https://coderanch.com/f/194/declarative-programming
Stitching a blanket can be accomplished with a wide variety of threads, including yarn, embroidery floss with six strands, pearl cotton, and many more. To put it simply, while you are blanket stitching, you will need to use a thicker thread as well as a bigger needle depending on the weight and thickness of the fabric you are working with. Can I mix cotton and flannel in a quilt? The weight of the material matters, in addition to precise seam allowances. If you constantly sew flannel to flannel or always sew flannel to quilting cotton, you won't have any problems; all you have to do is establish the seam allowance, test, tweak, and then stitch. Do you use batting in a flannel quilt? Because flannel is more difficult to hand quilt, it is recommended that you use it for quilts that will either be tied or machine quilted. The use of cotton batting is highly recommended for flannel quilts. You might wish to use a thinner batting if the front and back of the quilt are both made of flannel. This will prevent the quilt sandwich from becoming an excessively thick layer. What kind of thread do you use for flannel fabric? In my experience, the best thread to use for flannel is all-purpose polyester thread since it is strong and has some give. If you would rather have matching fibers, you can use thread that is entirely made of cotton instead. A serger or an overlock machine is the most effective tool for finishing seams. Can you mix cotton and polyester in a quilt? Things That Are Necessary To You: The process of constructing a quilt using polyester is quite similar to the process of creating a quilt using cotton. Before cutting the pieces for the quilt, it is necessary to use a lightweight iron-on stabilizer for many different types of polyester in order to successfully cope with any stretch that may be present as well as the varied textures that are intrinsic to these fabrics. Can you mix fabric types in a quilt? Absolutely! In the field of quilting, many of the "greats" use several types of fibers within a single piece in order to boost the project's overall appeal, texture, and design. Keep in mind that there are no quilting police, and your quilt will be just as beautiful and intriguing as the choices you make about the fibers and fabrics you choose. Do I need to pre-wash flannel before sewing? Yes! Since flannel is renowned for causing shrinkage, the fabric must be prewashed before it can be used for sewing projects. It is common practice to sew flannel together with other fabrics made of polyester, such as minky or fleece, which do not shrink when washed. When flannel that has not been cleaned is sewn, the resulting seams will bunch and pucker. Do flannel rag quilts need batting? A batting with a low loft should be used for your rag quilt. That is, if you want to use batting at all—when making a rag quilt, some quilters prefer not to use batting and instead opt to make the quilt using heavier materials. You could, for instance, cut up all of your family's old pairs of denim pants and line the back of each piece with flannel. How do you put batting in a blanket? The term "batting" refers to the padding that is used to fill out your blanket. Clean the cloth you have. - Place them in separate mesh washing bags if they have been sliced, as this indicates that they were cut immediately off the bolt. - If the batting has already been preshrunk, you do not need to worry about washing it. 
- It is possible to dry the flannel and plush fabric in your home dryer on a low heat setting. What needle should I use for flannel? To do this task correctly, I would recommend using a 90/14 sewing needle. An 80/12 ought to do the trick for cotton flannel as well. Is flannel cotton or polyester? Flannel is a sort of fabric that is made by weaving fibers together in a very loose fashion. Cotton is the most common material used in the production of flannel sheets; however, these sheets can also be created from wool, fiber mixes, or synthetic fibers like polyester. The "napping" process is responsible for the somewhat fuzzy feel that is characteristic of flannel. What kind of fabric do you use for a blanket? When I want to be cozy on the couch, flannel is my go-to material for making a blanket. It's nice and toasty. Cotton might make for a nice option for a lightweight blanket to use throughout the summer. Or you may use fleece for an even cozier option. I want to make a blanket, but I'm not sure how much fabric I'll need. - For each blanket, you will need to have two separate pieces of cloth as well as one separate piece of batting. - Here are some guidelines: How do I choose the batting for my blanket? Make your selection for the blanket's batting. The term "batting" refers to the padding that is used to fill out your blanket. At your neighborhood fabric or craft store, you should be able to get premade batting (an insulating material) in the sizes Twin, Queen, and King. You may also purchase an item of a bespoke size right off the bolt at the retail location. Can you use flannel as batting fabric? When working with flannel, it is much simpler to stretch and pull on the fabric in order to rip apart seams. This is because flannel has a looser weave than the majority of sorts of cotton fabric. It is also a more fragile fabric, therefore you should proceed with caution when using the seam ripper since it will be easy to accidentally create a hole in the cloth. Can flannel be used instead of batting? Absolutely!
https://glencoemill.com/manufacturing/what-thread-do-i-use-for-a-blanket-with-two-layers-of-cotton-flannel-and-cotton-poly-batting.html
Multicentric reticulohistiocytosis: a unique case with pulmonary fibrosis. Journal Article. BACKGROUND: Multicentric reticulohistiocytosis (MRH) is a rare disease of uncertain etiology that most commonly presents as a papulonodular cutaneous eruption accompanied by erosive polyarthritis. Although MRH is considered a systemic disorder in that it targets skin and joints, involvement of thoracic and visceral organs is uncommon. OBSERVATIONS: A woman presented with diffuse cutaneous nodules, and skin biopsy findings revealed classic features of MRH. However, she also manifested severe pulmonary symptoms. A lung biopsy specimen showed prominent histiocytic infiltrates exhibiting the same characteristic morphologic features as those seen in her skin. Furthermore, the lung biopsy findings were significant for a pattern of usual interstitial pneumonia accompanied by notable lymphoid aggregates, a pattern of interstitial lung disease typical of systemic autoimmune and inflammatory conditions. CONCLUSIONS: These findings are notable because a histiocytic pulmonary infiltrate suggestive of direct pulmonary involvement by MRH is a rare event. In addition, presentation of MRH in the setting of usual interstitial pneumonia is unique. These observations document a new clinical and histopathologic presentation of MRH that is significant for expanding the idea of MRH as a systemic disease while supporting the notion that MRH is promoted by an inflammatory milieu.
https://scholars.duke.edu/display/pub799790
By: Dr. Joyce Z. Schneiders and Dr. Raeal Moore The COVID-19 pandemic upended learning and instruction for students around the world in the past year. In June 2020, ACT wanted to understand first-year college students' learning experiences after the transition to online instruction during the pandemic, so we asked those who graduated from high school in 2019 and enrolled in a postsecondary institution in the 2019–2020 school year to share their experiences with us via a survey and open-ended responses. Students experienced academic challenges and had concerns about future academic success. Two out of three students reported that their coursework was somewhat or very challenging. More than three-quarters of the students were "a great deal" or "somewhat" concerned that online learning during the pandemic would negatively affect their academic success next year. Three out of four students believed that such a negative effect could have long-term consequences. For example, two student respondents said: "Next year, school will be harder because there are certain classes that are better for in-person than online and I plan on taking more credits for my degree." "I am a studio art major, and my classes are not the type that can be transferred to online. Materials, working space, instructor-student time was few and far between. My major cannot be successfully completed online." The fewer perceived academic challenges and academic concerns students had, the more certain they were about enrolling in the same institution the next year. Students who experienced relatively low levels of academic challenges and concerns were more certain about enrolling in the same institution the next year. Also, when schools provided outreach to students during the pandemic, they were more certain about enrolling in the same institution the next year. Increasing access to technology and the internet, reducing the learning resource gap, and providing online learning experiences before formal instruction help to alleviate these academic challenges and concerns. Students who had limited access to technology (i.e., quality computers, stable internet), received limited learning resources (e.g., a manageable number of assignments across classes, timely and specific feedback on assignments from their teacher, and clear and understandable class materials/assignments), and had no prior experience learning online were more likely to experience academic challenges and concerns than those students who did not have similar limitations. There were differences by race/ethnicity, family income, and first-generation status related to access to these resources. When students received the same technological and learning resources, the differences in perceived academic challenges and academic concerns were no longer significant among students from diverse groups (i.e., gender, race/ethnicity, college type, ACT score, family income, and first-generation status). These findings imply that ensuring that students from low-income families, students of color, and first-generation students have access to technological and learning resources is critical for online learning. Considering that it is very likely that postsecondary institutions will continue some form of online learning in fall 2021, we have provided some recommendations for supporting and improving online learning among incoming first-year college students. Recommendations: Address the inequities in access to technology and the internet.
Universities and colleges should develop policies and plans to support students who lack access to technology and the internet, especially students who come from underserved populations. Additionally, postsecondary institutions need to ensure that students are comfortable with the technology and are able to use it effectively. Close the gap in learning resources. Universities and colleges need to collect information about first-year college students' needs in online learning environments in order to identify (and close) gaps in resources. Common learning resources include timely feedback from instructors, access to well-organized course materials, a manageable number of assignments, reliable systems to submit assignments, and online tools that allow opportunities for collaboration. Promote online learning preparatory programs. These preparatory programs could give students a chance to learn how to use learning management systems, how to collaborate and communicate with other students and instructors online, and how to solve potential technical problems. Advance (and advocate for) student outreach. Universities and colleges should develop policies that include periodic student outreach to better understand student concerns in different conditions and over time. Making sure that students' voices are heard, and their concerns addressed, could provide students with a greater sense of belonging and promote their mental health. Support professional development for instructors. Postsecondary institutions should consider offering professional development opportunities for instructors to help them develop skills for effective online instruction. Such training should focus on areas like online course design, organizing online course materials, ways of interacting with students online, effective use of learning management systems, and enhancing students' motivation and engagement during online learning.
https://leadershipblog.act.org/2021/08/First-Year-Online-What-They-Told-Us.html
Many constitutions in modern democracies prohibit the passing of ex post facto law, or laws that retroactively change the legal consequences of an action. This is a feature: it prevents new regimes from passing specific laws aimed at putting the previous leaders in jail. As an individual, it would be difficult to exist in a society where you always worried that a future law would be passed criminalizing and punishing today's actions. Similar logic might be applied to the tests that we write for software. What is the cost of writing tests (laws) after the production code (actions) has already been written? This blog post proposes a new term for tests written after production code has been written. I call them Ex Post Facto Tests (EPFTs). We should avoid this practice and gently encourage folks away from it. Test-Driven Development (TDD) continues to see friction in the workplace. We often don't feel we have time to write the tests, or the act of writing the tests beforehand is too difficult. Test-Driven Development is difficult because it forces the developer to codify their understanding of the problem in code before shipping. TDD teases out the design questions and interaction patterns early in the process. TDD almost always results in better code in my experience. The process is more difficult to practice than alternatives but, like exercising or playing a musical instrument, we can improve with practice. Ex Post Facto Testing results in team members coming to distrust the practice of testing on the whole, because the tests are often a must-do rather than a value-add. TDD feels pedantic to the new acolyte: "Why even implement the method add as def add(x, y); 4; end? That's stupid." Indeed, the non-generalized cases are silly. TDD gives the first impression of coming off silly and unpragmatic. Like investing $5 into a Vanguard index fund, it's only after sustained investment that teams begin to reap the dividends. With TDD, we also need to get comfortable leaving untested behavior undefined. If we haven't written a test for a scenario, we have decided that that behavior is undefined. With Ex Post Facto Testing, the open-endedness of enumerating behaviors after the fact kills us. When do we stop? We can always use our professional judgement, but adequate judgement might not exist on every team. EPFTs don't give us any design feedback; they simply confirm what we've already written. They are the Yes Men of coding practices. The tests are tautological. We see the complexity of our tests fail to match the complexity of the subject under test: Why am I bootstrapping a database to add a few numbers together? When we start speaking in lines of coverage, we have already lost. How Can We Identify Ex Post Facto Tests? Ex Post Facto Tests can be identified in production by the following property: if you can delete a line of code in a project and still have a green build, EPFTs were used in the development of the project. "What can I delete and still have a passing build?" is the first question to answer when opening up some legacy code. Mutation testing is a great tool for doing exactly that. In Pull Requests, EPFTs are easier to identify. A Pull Request that only modifies production code without corresponding test updates is an obvious offender. Taking a look at the commit sequencing can be the next clue. You're looking for a steady beat of red-green-red-green-red-green in the commit history. What Should We Do with Ex Post Facto Tests? You probably still want tests, but test at the seams of your modules, classes, and Bounded Contexts.
If those seams don't exist, create them. EPFTs are a fine tool when putting seams in place, as they can be an effective form of Pinning Tests or Test Vices. The key with EPFTs is that they are a tool with a specific requirement: they should never land on master. What Should We Do with Production Code Created Using Ex Post Facto Testing? If you must change it, change it carefully using Pinning Tests or Test Vices. Practice TDD going forward. If you find EPFTs while working through legacy code, make it your first goal to safely remove them. Remember that any process or practice is merely a tool. Test-Driven Development is like a well-balanced and familiar hammer, where Ex Post Facto Testing might be closer to a loose razor blade. It can still do its job (cutting), but you'll need to be careful. Special thanks to Justin Duke and Hector Virgen for reading early drafts of this post.
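To make the tautology complaint concrete, here is a minimal, hypothetical sketch (Python with pytest, not taken from the original post) contrasting an after-the-fact test that merely mirrors its subject with a behavior-first test of the kind TDD tends to produce. The function and test names are invented for illustration.

```python
# Hypothetical example of the EPFT smell; requires pytest (pip install pytest).

def apply_discount(price, rate):
    """Return the price after applying a fractional discount rate."""
    return round(price * (1 - rate), 2)

def test_apply_discount_ex_post_facto():
    # Written after the code, this test restates the implementation on the
    # right-hand side, so it confirms whatever the code already does rather
    # than the behavior we actually want. It gives no design feedback.
    assert apply_discount(100, 0.1) == round(100 * (1 - 0.1), 2)

def test_apply_discount_behavior():
    # Written first (TDD style), this test pins down the observable behavior,
    # independent of how the function happens to compute it.
    assert apply_discount(100, 0.1) == 90.0
    assert apply_discount(9.99, 0.1) == 8.99  # result is rounded to cents
```

Run with `pytest` and both pass; the difference is not the verdict but how much information each test would give you if the implementation changed.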
https://kellysutton.com/2019/04/13/ex-post-facto-testing.html
This paper describes a deep learning approach for urban land cover classification in the context of the ISPRS 2D semantic labelling benchmark. A high spatial resolution digital surface model (DSM) and a true ortho-image over the city of Potsdam (Germany) were used as the input dataset for obtaining six target classes. The proposed approach focuses on augmenting the original input dataset with a combined set of geo-morphometric variables extracted from the DSM, including slope/aspect transformation, the second derivative of elevation, the compound topographic index, and hierarchical slope position. Furthermore, it uses the advanced deep learning architecture provided by the H2O framework, which follows the model of multi-layer, feedforward neural networks for predictive modelling. Automatic hyperparameter tuning with random search was conducted for model selection. The method comprises the following steps: (i) spectral segmentation of ortho-images; (ii) extraction of relevant geo-morphometric variables from the DSM; (iii) multivariate land cover classification; and (iv) accuracy assessment. The proposed approach was used for classifying a selected ISPRS benchmark tile where a reference map is available. Thematic accuracy of the proposed approach was assessed using the traditional error matrix and compared with the thematic accuracy of a deep learning classification based only on the original dataset (i.e. DSM and multispectral imagery). In addition, the deep learning classification approach was compared with a random forest (RF) classification using both the original and the augmented input datasets. It is shown that: (i) thematic accuracy improves only slightly when geomorphological variables are used to enhance the input dataset; and (ii) deep neural nets provide similar predictive power to random forests for urban remote sensing applications.
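The abstract itself contains no code, but the workflow it describes (augmenting per-pixel spectral features with DSM-derived terrain variables, training a feedforward neural network with randomized hyperparameter search, and comparing it against a random forest) can be sketched roughly as below. This is only an illustrative stand-in using scikit-learn rather than the H2O framework the authors used; the feature table, class labels, and parameter ranges are placeholders, not values from the paper.

```python
# Illustrative sketch only: the paper used the H2O deep learning framework;
# scikit-learn is substituted here, and the data and parameter ranges are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: rows = pixels, columns = spectral bands plus DSM-derived
# geo-morphometric variables (slope, aspect, curvature, topographic index, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))      # hypothetical augmented feature table
y = rng.integers(0, 6, size=5000)    # six target land cover classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feedforward neural network with random search over a few hyperparameters.
nn_search = RandomizedSearchCV(
    make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
    param_distributions={
        "mlpclassifier__hidden_layer_sizes": [(64,), (128, 64), (256, 128)],
        "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2],
    },
    n_iter=5, cv=3, random_state=0,
)
nn_search.fit(X_train, y_train)

# Random forest baseline, as in the paper's comparison experiment.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("NN accuracy:", nn_search.score(X_test, y_test))
print("RF accuracy:", rf.score(X_test, y_test))
```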
https://research.utwente.nl/en/publications/a-deep-learning-approach-for-urban-land-cover-classification-from
Price-to-Earnings Ratio – P/E Ratio The price-to-earnings (P/E) ratio is essential since it is strongly linked to equity prices. It has long been a common topic of study among academics. The ratio shows how much buyers are willing to pay for each dollar of earnings. This is why it is referred to as a stock's multiple. It has traditionally been recognized as among the most important common financial indicators for determining the valuation of capital markets and business securities. When valuing and pricing newly released stocks in initial public offerings, financial analysts use this ratio as a tool. The P/E ratio measures the actual market value of a company's stock in proportion to its earnings, and it can be utilized both as a comparative tool between organizations and as a valuation tool to compare organizations' results to their peers'. Furthermore, it is used to gauge growth prospects: a high P/E ratio indicates that shareholders expect a larger increase in net profits in the coming financial cycles, while firms with a lower P/E ratio are expected to see a smaller increase in earnings. Indeed, a low P/E may indicate that a company's stock is currently undervalued, or that it is doing very well in comparison to its recent patterns. What is Price-to-Earnings Ratio – P/E Ratio? The price-to-earnings ratio (P/E ratio) is a valuation metric that compares a company's current share price to its earnings per share (EPS). The measure, also known as the price multiple or the earnings multiple, relates a stock's price to its earnings. The ratio is used by buyers and analysts to measure the relative value of a company's stock. It can also be used to compare a company's past success to its current performance, as well as to compare aggregate markets over time. Forward Price-to-Earnings Instead of using trailing numbers, the forward (or leading) P/E uses projected earnings guidance. This forward-looking metric, also known as "estimated price to earnings," is practical for comparing current earnings to potential earnings and for providing a better view of what earnings would look like – before shifts and other accounting adjustments. However, the forward P/E measure has inherent flaws, such as firms underestimating profits in order to beat the forecasted P/E when the next quarter's earnings are released. Other companies can overestimate their predictions and then update them in their following earnings report. Furthermore, external observers may offer estimates that differ from those provided by the firm, causing uncertainty. Trailing Price-to-Earnings One can determine the trailing P/E by dividing the current share price by total EPS for the previous 12 months. It's the most often used P/E ratio, and it's the most objective – assuming the firm correctly posted its earnings. Since investors don't trust most individual earnings forecasts, some analysts tend to look at the trailing P/E. The trailing P/E also has some flaws, one of which is that past success does not always predict future action. As a result, investors often make investments based on expected earnings potential rather than past performance. However, that's also a concern, because the EPS number holds stable while market values fluctuate.
The trailing P/E would be less reflective of those shifts if a new business event pushes the stock price dramatically higher or lower. Since earnings are only published once per quarter while shares trade daily, the trailing P/E ratio can change as the price of a company's stock fluctuates. Consequently, the forward P/E is the preference of some investors. Analysts expect earnings to rise if the forward P/E ratio is lower than the trailing P/E ratio; if the forward P/E is greater than the trailing P/E ratio, analysts expect earnings to fall. Limitations of Using the P/E Ratio - A high P/E ratio could suggest that analysts and investors expect higher earnings in the future. - The measurement could be potentially misleading as it is based on expected future data or data from the past (neither of these two data options is reliable). Furthermore, the data could be potentially distorted. Key Take-Aways - The price-to-earnings ratio (P/E ratio) is a calculation that compares the price of a company's stock to its earnings per share. - A high P/E ratio could indicate that a company's stock is overvalued or that investors anticipate high potential growth rates. - Companies with no earnings or with losses do not have a meaningful P/E ratio because there is nothing to put in the denominator. - There are two types of P/E ratios used: forward and trailing P/E. FAQs What is an example of a P/E ratio? When determining whether a company's stock price correctly reflects expected earnings per share, analysts and investors look at the P/E ratio. The following formula is used in this process: P/E Ratio = Market value per share / Earnings per share Divide the current stock price by the earnings per share to obtain the P/E ratio. You can find the current stock price by entering a stock's ticker symbol into any finance website, and while that value represents what buyers will pay for a stock right now, the earnings per share figure is a little more ambiguous. What is a favorable price-to-earnings ratio? A favorable or unfavorable price-to-earnings ratio is largely determined by the industry in which the business operates. Some industries' overall price-to-earnings ratios are going to be higher, and others are going to be lower. For instance, publicly traded US fossil fuel companies had an estimated P/E ratio of about seven as of January 2020, compared to more than 60 for technology firms. You should compare a company's P/E ratio to the average P/E of its rivals within its industry to get a general understanding of whether it is low or high. Is it better to have a higher or lower P/E ratio? Many people believe that buying stock in businesses with a lower P/E ratio is preferable because it means paying less for each dollar of earnings. In this way, a lower P/E is equivalent to a lower price tag, making it appealing to bargain-hunting buyers. In reality, though, it's critical to understand why a company's P/E is what it is. For example, if a company's P/E is low because its business model is inherently in decline, the potential bargain could be a mirage.
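As a quick numeric illustration of the formula above, the short Python sketch below computes a trailing and a forward P/E for a hypothetical company; all figures are invented for the example.

```python
# Hypothetical numbers, purely to illustrate the P/E formula.
share_price = 50.00          # current market price per share
trailing_eps = 2.50          # reported EPS over the previous 12 months
forward_eps_estimate = 3.20  # estimated EPS for the next 12 months

trailing_pe = share_price / trailing_eps          # 20.0
forward_pe = share_price / forward_eps_estimate   # about 15.6

print(f"Trailing P/E: {trailing_pe:.1f}")
print(f"Forward P/E:  {forward_pe:.1f}")

# A forward P/E below the trailing P/E implies analysts expect earnings to rise.
if forward_pe < trailing_pe:
    print("Earnings are expected to grow over the next year.")
```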
https://trustpedia.io/dictionary/p/pe-ratio-price-earnings-ratio/
Balancing Energy Supply and Demand by Underground Thermal Energy Storage The shift from fossil energy sources to renewable ones is accelerating worldwide. The new energy system will be characterised by a larger share of intermittent renewables (wind, solar), complemented by other flexible forms of power/heat production. Gas-powered plants can quickly increase or decrease their power output, but the share of natural gas in the mix will most likely decrease during the energy transition. Therefore, it is clear that variations in energy supply, as well as demand, and the integration of renewable energy sources into the energy infrastructure pose challenges in terms of balancing. Peak shaving and energy storage can help decrease the pressure on the energy infrastructure. Underground Thermal Energy Storage (UTES) stores excess heat during periods of low demand (i.e., summer) and uses it during periods of high demand (i.e., winter). This can be implemented in local or regional heating networks to support the use of surplus heat from industry (e.g., waste incineration plants) and the implementation of renewable heat sources such as bio-Combined Heat and Power (CHP), geothermal, and solar energy. UTES could also be of interest to absorb surpluses from high wind and solar PV production in the electricity grid with the use of heat pumps. UTES is especially of interest when seasonal dips and peaks in the demand exist, such as in district heating or greenhouses. Conventional storage systems like capacitors, pumped hydro, and batteries are unsuitable for this type of longer-term storage. UTES may provide large-scale storage potential, exceeding 10 GWh. Its costs are competitive, as long as the cost of the heat is low. Various kinds of UTES exist or are being demonstrated, including Borehole (BTES), Mine (MTES), and Pit Thermal Energy Storage (PTES). This article focuses on High-Temperature Aquifer Thermal Energy Storage (HT-ATES), where hot water is stored in porous, water-bearing layers in the subsurface. It is different from the well-known LT-ATES (low temperature), which is widely applied in the very shallow subsurface (tens of meters depth, with storage temperatures up to 25°C). Here, buildings are cooled in summer using cold water. The excess heat from the building is then stored in the subsurface and used again in winter for heating the same building, often with the use of a heat pump. Here, the temperature differences are small, and therefore the power is also small. HT-ATES currently uses temperatures up to about 80°C. Higher temperatures are possible, but challenges are posed by legislation, materials, and interference with the use of groundwater. Figure 1 shows a typical heat demand curve: high during the cold season (in this example, December-March) and low during the warm season (June-September). The high peak demand during the cold period requires a heat supplier with a high capacity. This is typically an installation that is quite expensive to run. During the warm period, on the other hand, this high capacity is not used. For typical low-temperature geothermal applications like heating of greenhouses, there is still some demand during the warm period. For city heating in moderate climate regions, the summer demand drops to very low levels, just for hot water use, which deepens the bathtub even more. This requires higher-than-necessary upfront investments in a high-capacity installation.
Furthermore, shutting down the heat producer in the summer period increases maintenance needs: for instance, when it concerns a geothermal doublet system, which tends to deteriorate during periods of standstill due to mineral precipitation. Figure 1 illustrates that if the heat production continues between months 4 and 10 at the level of the dotted line, the bathtub is filled. The excess heat can be reproduced in winter to cover the peak demand. In principle, this makes better use of excess and renewable heat sources and offers opportunities to lower the overall system cost, while providing the same heating services. Figure 2 shows a schematic diagram of an HT-ATES system. Conventional doublet-type geothermal installations typically have a warm production well and a cold injection well. An HT-ATES system consists, in principle, of two wells that operate in opposite mode: when the cold well is producing, the warm well is injecting, and vice versa. During the warm season, cold water is produced from the cold well. The water is then heated using a heat exchanger, which receives its energy from the heating source (e.g., geothermal, solar). The heated water is injected into the warm well and stored in the reservoir until the start of the cold season. The stored warm water is reproduced from the same well into which it was injected. Finally, the cooled water is re-injected into the cold well again. Depending on the required capacity of the storage, and the quality and dimensions of the underground reservoir, there may be one or more warm and cold wells. The larger the number of required wells, the higher the investment and operating cost will be. However, economies of scale do apply, and bigger is better. This applies to costs, but also to storage efficiency. The vertical cross-section of Figure 3 shows the development of the hot plume. At a depth of around 500 m, the in-situ temperature is typically around 20°C to 40°C. A cylindrical volume of hot water will migrate into the reservoir, expelling the cold water initially there. During the first loading-unloading cycles, the amount of reproduced heat is small because much of the injected heat goes into warming up the subsurface (Figure 3). After more cycles, the efficiency can increase to about 70%–80%, but this depends very strongly on local subsurface and surface conditions. Because the density of hot water is less than that of cold water, the hot water will tend to flow to the upper part of the aquifer. This means that when the stored hot water is reproduced, the lower part of the hot wells, at 400 m depth, will start producing cold water before all the stored hot water, concentrated at shallower depths, is reproduced. This, combined with the fact that some mixing by heat conduction takes place, means that the efficiency of an HT-ATES can never be 100%. The Netherlands, one of the pioneering countries of HT-ATES, is home to many thousands of LT-ATES systems. Given its moderate climate with winter temperatures around 0°C, there could be large potential for HT-ATES. The potential can be determined in many ways: theoretical, technical, and economic. The theoretical storage potential can be defined as thermal storage capacity (energy per surface area) and requires subsurface data and surface data (injection and production temperature) as input. To calculate technical storage potential, one approach is to calculate possible flow rates based on subsurface parameters and technological flow restrictions in order to predict capacities and thermal storage production.
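As a rough, hypothetical illustration of the theoretical storage potential mentioned above, the heat held in a stored volume of hot water can be estimated from its mass, the specific heat of water, and the temperature difference between the warm and cold wells. The numbers below are invented for the example and are not taken from the article.

```python
# Back-of-envelope estimate of heat stored in one HT-ATES seasonal cycle.
# All input values are hypothetical and chosen only to illustrate the arithmetic.
RHO_WATER = 1000.0   # density of water, kg/m3 (approx.)
CP_WATER = 4186.0    # specific heat of water, J/(kg*K) (approx.)

stored_volume_m3 = 200_000   # hot water injected over a season
delta_t_kelvin = 45          # e.g. storing at 75 C against a 30 C cold well
recovery_efficiency = 0.75   # article cites roughly 70-80% after several cycles

stored_joules = RHO_WATER * CP_WATER * stored_volume_m3 * delta_t_kelvin
recoverable_gwh = stored_joules * recovery_efficiency / 3.6e12  # J to GWh

print(f"Heat injected:    {stored_joules / 3.6e12:.1f} GWh")
print(f"Heat recoverable: {recoverable_gwh:.1f} GWh")
```

With parameters in this range the stored volume holds on the order of 10 GWh, which matches the scale of large-scale UTES potential mentioned earlier in the article.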
When cost parameters are included, the economic potential could be calculated as well, expressed in the levelized cost of energy. The market potential can be determined when surface parameters, like heat sources and demand, and regulatory and spatial planning information are included. As HT-ATES is not widely developed yet, and many input parameters for calculating technical and economic potential are unknown, another way to approach HT-ATES potential is to define certain (subsurface) criteria and test them with available subsurface data. For the subsurface of the Netherlands, an alternation of unconsolidated sand and clay sediments, the following criteria were considered: - The typical depth of an HT-ATES system is up to around 500 m. Shallow aquifers (< 50 m below ground level) are considered to be less suitable for HT-ATES, as these are often used for drinking water production. Heating the shallow subsurface should be prevented. Potential leakage zones like faults should, therefore, also be avoided. With increasing depth, the potentially achievable flow rate increases because higher pump pressures can be applied. From ≈800 m, more complex and expensive drilling techniques are required, which will increase the drilling costs significantly. Friction losses increase with increasing depth, thereby decreasing the coefficient of performance (COP). - It is assumed that HT-ATES wells are technically comparable to LT-ATES wells in unconsolidated layers, meaning that a similar drilling technique and well stimulation process is applied. Given these starting points, a minimum hydraulic conductivity of 5 m/d is advised. The minimum aquifer thickness should be about 15 m. - The presence of a confining cap layer on top of (and preferably also below) the storage aquifer is a requirement to limit: 1) the impact of buoyancy flow on the recovery efficiency; and 2) the temperature (and associated geochemical) effect on the shallower layers. The clay layer acts as a physical boundary preventing hot water from flowing to shallower aquifers. The advective losses of hot water are restricted to the horizontal dimension when clay layers are present both at the top and the bottom of a storage aquifer, giving a higher recovery efficiency. - Lithology is an important factor, and medium- to fine-grained sand is generally favored. Very coarse sand usually has high permeabilities and, hence, allows large volumes to be stored with high flow rates, but coarse-grained aquifers are considerably more sensitive to low recovery efficiencies because of a high impact of buoyancy flow. Clay, silt, glauconite, and shell fragments are considered to be unfavorable factors. Depending on the parameters that influence buoyancy flow, maximum hydraulic conductivities should be about 20–50 m/d. - A low groundwater flow velocity (< 20–30 m/year) is favored to prevent the hot stored water from drifting away. - Aquifers holding saline water are favored for storage purposes. Technically, there are limited differences between storage in fresh or saline water, although some findings suggest that storage in salt water is less sensitive to clogging. In case the target aquifer holds fresh water or a fresh-salt water interface, it should be given extra attention. Fresh water is not to be mixed with brackish or saline water; this mainly has to do with the high value attached to fresh water as a resource. Important boundary conditions for a business case are set by surface conditions.
This can be broken down into a few simple elements. For HT-ATES systems, a seasonal variation in demand and supply is required. These systems are typically not attractive for regions with a relatively flat demand profile. A high mismatch between seasonal demand and supply is optimal. The next preferential condition is the presence of a low-cost heat source. This can be waste heat or heat from sources with low marginal production costs, such as geothermal and solar. The operating temperature of the heating network is very important, as it often determines the temperature difference between the hot and cold wells. This, together with the flow rate, affects the energetic capacity of the storage project. A higher capacity often leads to a lower cost of storage per unit of energy. Scale is the final preferential condition. From experience with past projects and feasibility studies, the scale should be at least 5–10 MW(th) and entail around 2,500 full-load equivalent running hours per year. This equals approximately 1000 dwellings (for the Netherlands). Aquifer thermal energy storage could have a bright future in the changing energy system to provide flexibility and security of supply in a world with less fossil fuel. However, it is very important to learn from ongoing projects to bring the concept to full technological and commercial maturity and exploit its benefits. A key aspect to keep in mind is that HT-ATES applications are highly location-specific. An optimal match is found when surface and subsurface conditions are jointly considered.
https://geothermal.org/our-impact/blog/balancing-energy-supply-and-demand-underground-thermal-energy-storage
- The media. - The main findings and conclusions of the Study; - The perspectives of the Government, indigenous peoples, the Uganda Human Rights Commission and NGOs on the impact of extractive industries on indigenous communities in Uganda; - The principle of Free, Prior and Informed Consent (FPIC) and international and regional mechanisms, safeguards and voluntary guidelines; and - Recommendations of the Study. To the Government of Uganda - The Parliament, the Ministry of Justice and the Uganda Law Reform Commission should expedite the enactment of the Social Impact Assessment and Accountability Bill; - Government together with mining companies should develop and implement national public participation and consultation models for affected populations including indigenous communities based on the principle of Free, Prior and Informed Consent (FPIC); - Adopt international standards in recognizing, promoting and protecting the rights of Indigenous Populations in the country; - Integrate the traditional knowledge and practices of indigenous peoples into policies and programs to mitigate the impact of climate change in Uganda; - Share information with indigenous communities on a regular and continuous basis and in a transparent manner; - Ensure that there is adequate access to justice for IPs and provide training to them on the same. To the Uganda Human Rights Commission - Establish a National Task Force with clear terms of reference to ensure follow-up and implementation of the study's recommendations. The Task Force should be comprised of stakeholders including relevant Government Officials, indigenous peoples, civil society organizations, and extractive industries representatives; - Include a chapter on the situation of Human Rights of Indigenous Peoples in its annual report on the state of Human Rights of Uganda; - Build the capacity of indigenous peoples so that they can file complaints before it when their rights are violated; - Ensure that the draft National Action Plan on Business and Human Rights takes into account the issues and concerns of indigenous peoples. To the Civil Society Organizations - Popularize and widely disseminate the Study's findings and recommendations to all including Government and indigenous peoples; - Lobby for the recognition of indigenous peoples' rights to land and resources in national laws, policies and processes, and for the implementation of relevant recommendations in the Study; - Take the lead in identifying the needs of indigenous peoples in Uganda; - Use the leadership and guidance of indigenous populations to strengthen their relationship with CSOs. To Business Enterprises - Apply the Human Rights Based Approach to development, for instance by consulting affected communities in the planning, design and execution of new projects; - Comply with and observe the principle of Free, Prior and Informed Consent in the development and implementation of all projects related to extractive industries so that the rights of IPs are protected and promoted.
To Indigenous Communities - Create a national network to push for the implementation of the Study's Recommendations; - Use the network for the promotion and protection of their rights, including protection from arbitrary evictions from their ancestral land; - Lodge human rights complaints with the UHRC; - Lobby for their indigenous traditional governance systems to be recognized and integrated within the existing laws and policies of the government; - Lobby for their customary laws to be consistently respected by all projects and activities of concern to indigenous populations; - Work to build and strengthen the capacity of their people and institutions, especially the youth and women; - Use indigenous knowledge in land and forest resources management; - Form a social movement and an active network among the indigenous populations of Uganda so as to ensure that their rights are better promoted and protected. Done at Kampala, Uganda, 28th November 2018
https://www.achpr.org/news/viewdetail?id=5
We, civil society organizations (CSOs), representing more than 500 CSOs accredited to the United Nations Convention to Combat Desertification (UNCCD), gathered for the thirteenth session of the Conference of the Parties (COP 13) held from 6 to 16 September 2017 in Ordos, China, hereby express our appreciation to the Government of the People’s Republic of China and its friendly citizens for hosting us in this beautiful city of Ordos, to the UNCCD secretariat for their unwavering support to CSOs, and to the Governments of China, Switzerland and Turkey for their financial support to the Civil Society Organization Panel (CSO Panel) and for enabling significant CSO participation in COP 13. Civil society welcomes decision ICCD/COP(13)/L.10 in support of CSO participation in the UNCCD, and in particular the planned renewal of the CSO Panel. This decision recognizes the vital role played by civil society in realizing the objectives of this Convention, and we welcome the contributions of the accredited CSOs from the host country and from around the world that enriched the discourse in Ordos and contributed to the comprehensive outcomes. The CSOs contributed 18 statements in the course of the COP that also reflected the perspectives of affected populations and the spirit of Article 5 (d) of the Convention, which obliges Parties to “facilitate the participation of local populations, particularly women and youth, with the support of non governmental organizations, in efforts to combat desertification”. We endorse the recommendations of the CSO Panel regarding land rights contained in chapter III of document ICCD/COP(13)/15, and note that Article 8 of the Convention requires the inclusion in national action programmes of measures to improve the institutional and regulatory framework of natural resource management to provide security of land tenure for local populations. In this context, we further welcome the inclusion in decision ICCD/COP(13)/L.10 of the invitation to Parties to consider the recommendations made by the CSO Panel regarding land rights. In this context, we urge Parties to ensure the full participation of local land users in the rehabilitation and sustainable management of land and, in this context, recall: • That the lack of enforceable land user rights and tenure security is a significant driver of land degradation and migration, and is a triggering factor of conflicts; • The United Nations Declaration on the Rights of Indigenous Peoples, particularly its Article 26, stating that indigenous peoples have the right to the lands which they have traditionally owned, occupied or otherwise used or acquired and that countries shall give legal recognition and protection to these lands, and Article 32 referring to the right to free, prior and informed consent; • The General Recommendation N° 34 on the rights of rural women of the Committee on the Elimination of Discrimination against Women of United Nations Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), which obliges Parties to take all necessary measures, including temporary special measures, to achieve the substantive equality of rural women in relation to land and natural resources; • The Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security, which provide a sound and legitimate framework for good land governance and strengthened land tenure rights. 
Civil society calls upon Parties to actively promote effective partnerships with CSOs for the implementation of the UNCCD 2018–2030 Strategic Framework and to support the engagement of local land and natural resources users, particularly women, indigenous peoples, pastoralists and youth in the operationalization of land degradation neutrality (LDN). We therefore welcome the decision to adopt the UNCCD 2018–2030 Strategic Framework (ICCD/COP(13)/L.18) and its call on stakeholders to take into account the need for gender-responsive policies and measures, strive to ensure participation of men and women in planning, decision-making and implementation at all levels, and enhance the empowerment of women, girls and youth in the affected areas; and the encouragement to Parties to further enhance the involvement of civil society in the implementation of the Convention and of the Strategic Framework. We call upon all Parties to reaffirm their commitment to the 2030 Agenda for Sustainable Development, including target 15.3 to achieve LDN. In striving to achieve Sustainable Development Goal (SDG) target 15.3, Parties must recognize that land rights have been included in the targets of SDGs 1, 2, 5, 12, 14 and 16, and that in order to holistically address the 2030 Agenda, the interconnected character of all 17 goals must be considered and the ‘leave no one behind’ principle must be adhered to. We therefore invite Parties to integrate the promotion of land tenure security into their national action programmes and in the operationalization of LDN and to adopt and implement the Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security to guide their policies related to land tenure security and for the implementation of LDN. Recalling the importance of anchoring science in territories affected by desertification to ensure the better development of scientific research programmes that strengthen local knowledge, especially that of indigenous peoples, the civil society welcomes the adoption of the synthesis report on sustainable land management (SLM) by the Science-Policy Interface (SPI) (ICCD/COP(13)/CST/3), urges Parties to support national and local science-policy interfaces and urges the SPI to interact with scientific networks. We welcome the adoption of the text related to gender equity and urge the Parties to implement the Gender Action Plan of the UNCCD at national level and to monitor and report on the progress of its implementation. The CSOs recognize the valuable role of private funding to achieve LDN targets, but we nevertheless must stress that its participation in financing initiatives under this Convention must adhere to the highest human, social and environmental standards and protect the interests of pastoralists, farmers, indigenous peoples, women and landless peasants. In this context, the Land Degradation Neutrality Fund (LDN Fund) must comply with the highest human, social and environmental standards. The perspective of civil society with regard to the implementation of private-led initiatives and the activities that will be developed by the private sector under the framework of the Convention, particularly those related to the funding of transformative projects for achieving LDN targets, must ensure the engagement of representatives of accredited CSOs. 
We strongly urge the secretariat to ensure the participation of at least one representative of a CSO accredited to the Convention, to be elected by the CSO Panel, on the Advisory Board of the LDN Fund so as to enable the effective engagement of the CSO community in contributing its expertise to the governance and policymaking processes of the LDN Fund. We are deeply concerned by the potential conflicts of interest that could arise in engaging the private sector in funding mechanisms in this Convention, and we call upon the Parties to ensure that private funding of UNCCD processes is managed in a transparent manner and with regular and transparent reporting procedures. The participation of the private sector in contributing financial resources towards achieving LDN targets should not be considered as a replacement of public funding, which is fundamental to achieving the goals of the Convention. In conclusion, we congratulate the Parties on the adoption of a sound decision text at COP 13 and call upon the Parties to engage fully and effectively with CSOs in the implementation of the UNCCD 2018–2030 Strategic Framework, and to ensure the adoption of a decision on land rights under this Convention at COP 14.
https://csopanel.org/csos-declaration-at-cop13/
Arcadia University’s Department of Public Health, including the MPH and BSPH programming, are accredited by the Council on Education for Public Health (CEPH). Accreditation ensures you are receiving a standardized, high quality education. MPH students are eligible to sit for the National Credentialing Exam in Public Health (CPH) as well as the National Certification for Health Education Specialists (CHES) exam. CEPH Competencies Foundational MPH Competencies (as prescribed by CEPH) Evidence-based Approaches to Public Health - Apply epidemiological methods to the breadth of settings and situations in public health practice - Select quantitative and qualitative data collection methods appropriate for a given public health context - Analyze quantitative and qualitative data using biostatistics, informatics, computer-based programming and software, as appropriate - Interpret results of data analysis for public health research, policy or practice Public Health and Health Care Systems - Compare the organization, structure and function of health care, public health and regulatory systems across national and international settings - Discuss the means by which structural bias, social inequities and racism undermine health and create challenges to achieving health equity at organizational, community and societal levels Planning and Management to Promote Health - Assess population needs, assets and capacities that affect communities’ health - Apply awareness of cultural values and practices to the design or implementation of public health policies or programs - Design a population-based policy, program, project or intervention - Explain basic principles and tools of budget and resource management - Select methods to evaluate public health programs Policy in Public Health - Discuss multiple dimensions of the policy-making process, including the roles of ethics and evidence - Propose strategies to identify stakeholders and build coalitions and partnerships for influencing public health outcomes - Advocate for political, social or economic policies and programs that will improve health in diverse populations - Evaluate policies for their impact on public health and health equity Leadership - Apply principles of leadership, governance and management, which include creating a vision, empowering others, fostering collaboration and guiding decision making - Apply negotiation and mediation skills to address organizational or community challenges Communication - Select communication strategies for different audiences and sectors - Communicate audience-appropriate public health content, both in writing and through oral presentation - Describe the importance of cultural competence in communicating public health content Interprofessional Practice - Perform effectively on interprofessional teams Systems Thinking - Apply systems thinking tools to a public health issue AU Department of Public Health Competencies Community Health Concentration MPH Competencies - Use input from stakeholders and other evidence-based sources to evaluate community health programs, policies, and interventions - Develop a mission statement for a public health organization and develop strategies to meet mission-related goals - Develop and implement a systematic approach to review research on a selected community health issue to inform novel research question development - Analyze theoretically based interventions on a selected health behavior - Apply the Public Health Code of Ethics to a current or historical ethical dilemma in the field 
Community Health Concentration Competencies Program Planning and Policy Development Skills - Uses a global perspective to critique public health programs, research, policies, and health care systems. - Contributes to collaborative program planning and evaluation processes, including implementing, monitoring, and evaluating public health programs. Cultural Competency Skills - Describes cultural and linguistic characteristics and literacy levels of populations to be served. Community Dimensions of Practice Skills - Systematically maps stakeholders who constitute the community linkages and relationships essential to involve in public health initiatives. - Identifies community assets including governmental and non-governmental resources in the delivery of public health services. Leadership and Management Skills - Prepares a programmatic budget. - Describes the organizational structure and policies of a public health agency. - Adheres to an organization’s policies and procedures. - Identifies strategies to address the public health needs of a defined population.
https://www.arcadia.edu/majors-and-programs/public-health-mph/accreditation-competencies/
Column: Protect your health during finals Jacqueline Kientzler, an electrical engineering student, falls asleep at her computer on Sunday, May 1. Many UA students may feel anxious or stressed with upcoming finals. Students often sacrifice their health for their grades during this tough time. In an institution of higher education, grades are stressed as the most important item on a student's agenda. However, students need to remember that caring for their health is as important as working toward getting those final points in their classes. Our education system values high grades, many of which are earned in the final weeks of the semester. During these weeks, students throw routines and habits out the window in an attempt to squeeze out any final points they can before the grade book closes for the term. RELATED: The argument for sleep in finals week Unfortunately, in many cases this includes throwing out good eating and sleeping habits. During finals week, many students try the age-old technique of cramming over a few short days and nights. Students stay up through the early hours of the morning the night before an exam to be sure they know the material inside and out. However, this technique is a failing one. Without a good amount of sleep the night before, the body hasn't had the chance to fully recharge overnight, leaving students exhausted and struggling to recall the material they crammed the night before. The Harris Health Sleep Disorder Center recommends that students get about eight or nine hours of sleep a night, and get a good night's sleep especially on days they have exams. When the body doesn't recharge, it becomes difficult for the immune system to fight off disease. As seasons change and the body has to adjust to a change in temperature, students with weakened immune systems not only put the quality of their work at risk, but also might find themselves going home for the holidays with a cold. On a college campus, nearly everything is shared with your peers, from door knobs to sink faucets. Students are in close proximity to each other, whether it be in the dorms, Arizona Student Unions, UA Main Library or the classrooms. RELATED: Patients before profit In addition to putting off a regular sleep schedule, many students get off their regular meal schedule and find themselves snacking endlessly while studying. When we cram 30 hours of work into 24, meals are quickly pushed aside, or traded for fast food. We often forget to take breaks and schedule time for our brain to recharge before taking in new material. Our education system puts the odds against us as we attempt to figure out ways to build and follow daily routines. It's important that we as students place value on our personal health and make sure we're taking the right steps to stay healthy during finals and the winter season. Though students can't help the assignments, papers and exams they have piling up at the end of the term, they can help themselves by taking some steps to prepare. Balancing studying and regular routine activities such as sleeping, eating and exercise can help students keep their stress down, effectively giving themselves blocks of time to study. While exams and finals are vital, it's important to remember to care for ourselves as well. Taking breaks to relax, even if they're short, will make the process of studying and finishing final projects easier. We can't fix the education system in the last two weeks of the semester.
What we can do is to ensure our bodies are healthy and well enough to efficiently study and finish final assignments. Part of the learning curve of being a college student is learning how to take care of yourself in the adult world, and part of that is knowing when you need to sleep, eat and take some time to have a mental break. Follow Leah Gilchrist on Twitter.
http://www.wildcat.arizona.edu/article/2016/12/column-protect-your-health-during-finals
In simple words, Version Control Systems are software tools that help a software team manage changes to source code over time. These are also known as revision control or source control systems. This article is about what version control is, its benefits, and the different types of version control systems. What Are Version Control Systems? Software developers working in teams are continually writing new source code and changing existing source code. The code for a project is typically organized in a folder structure. One developer on the team may be working on a new feature while another developer fixes a bug by changing code; each developer may make their changes in several parts of the folder structure. This is where Version Control comes into action. Version control software keeps track of every modification that you make to the code in a special kind of database. If a mistake is made, developers can easily turn back the clock and compare earlier versions of the code to help fix the mistake while minimizing disruption to all team members. Without a version control system, you will often run into problems like not knowing which of the changes that have been made are available to users, or creating incompatible changes in two unrelated pieces of work that must then be painstakingly untangled and reworked. Benefits of version control systems - Maintain multiple versions of the code - The ability to go back to any previous version. - Developers can work in parallel and across the globe. - Traceability - Find out the difference between versions. - Provides backup without occupying much space. - Review the history of the changes. Types of Version Control Systems Version Control Systems can be divided into two classes: - Centralized Version Control System (CVCS) - Distributed Version Control System (DVCS) Centralized version control systems are based on the idea that there is a single "central" copy of your project. So, everyone requests the latest version of your work and pushes the latest changes to this central copy. This literally means that everyone sharing the server also shares everyone's work. With a Distributed VCS, on the other hand, everyone has a local copy of the entire work's history. This means that it is not necessary to be online to do the updates. Also, in a DVCS, developers make changes in their local repos, which they can share with other developers. So, there isn't a central entity in charge of the work's history in the case of a DVCS. Anyone can sync with any other team member. This helps avoid failure due to a crash of the central versioning server or system. A Few Popular Version Control Systems CVS – The grandfather of revision control systems. It was first released in 1986. SVN – Subversion (SVN) is probably the most widely adopted version control system. It is an example of a Centralized Version Control System. Many open-source projects use Subversion as a repository. SourceForge, Apache, Python and Ruby are some of the major projects using SVN as the repository. Git – Git is the new fast-rising star of version control systems (DVCS). It was initially developed by Linus Torvalds (the creator of Linux), and it has recently taken the Web development community by storm. With a distributed version control system, there isn't one centralized code base to pull the code from. Mercurial – Mercurial is another open-source distributed version control system.
Mercurial is extremely fast and was actually designed for larger projects, most likely outside the scope of designers and independent Web developers. Bazaar – Bazaar is yet another distributed version control system, similar to Git and Mercurial. It has a very friendly user experience. It calls itself "Version control for human beings." Also, it supports many different types of workflows, from solo to centralized to decentralized, with many variations in between. We will discuss how some of these popular version control systems work in our future posts.
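To make the distributed model concrete, here is a minimal, hypothetical sketch in Python that drives the git command line (git must be installed) to show that a DVCS keeps a complete repository, history included, on the local machine: no central server is involved in recording a commit. The file name, commit message, and identity values are invented for illustration.

```python
# Minimal sketch: with a DVCS such as Git, the full history lives locally,
# so commits need no central server. Assumes the git CLI is installed;
# the file name, commit message, and identity below are made up.
import subprocess
import tempfile
from pathlib import Path

def run(*args, cwd):
    """Run a git subcommand in the given directory and return its output."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

with tempfile.TemporaryDirectory() as workdir:
    run("init", cwd=workdir)                                    # local repository
    run("config", "user.email", "dev@example.com", cwd=workdir)
    run("config", "user.name", "Example Dev", cwd=workdir)
    Path(workdir, "notes.txt").write_text("hello version control\n")
    run("add", "notes.txt", cwd=workdir)                        # stage the change
    run("commit", "-m", "Add notes", cwd=workdir)               # record it locally
    print(run("log", "--oneline", cwd=workdir))                 # history, no server
```

In a centralized system such as SVN, by contrast, the equivalent of the commit step only succeeds when the central server is reachable.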
https://thegeeksalive.com/what-is-version-control/
November 13, 2018, is World Kindness Day!! You may have forgotten to put it on your calendar, but kindness goes hand in hand with digital citizenship, so here in the lab, we teched it up. 4th graders did an activity called Freeze!, where they acted out short skits to help them discern fact from opinion, and think about how a fact could be expressed in a kinder fashion. In our first video, you can see them practicing saying kind things to one another; then we teched it up by filming them acting out their own skits! 5th graders learned about how the Golden Rule appears in many cultures and religions, then we teched it up by filming them doing a reading. 6th graders began the World's Largest Lesson, which introduces the Sustainable Development Goals to children and young people everywhere and unites them in action. I teched it up by giving them their first assignment on Google Classroom – Be a Goal Keeper, where they used an online app to create a personal Goalkeeper self-image.
https://cohen.nnms.org/kindness-week-in-the-nnm-lab/
As the lights shined on Timberland’s stage, both Holt and Timberlands students prepared to wow the audience with their comedic skills while improvising on the spot. The improv show was simply a friendly competition between Timberland and Holt. Both Timberland and Holt have improv classes and for the past month or so they both have been working on sketches and practicing improv games to perform on show night. “It was about just showing off the skills that we learned throughout our time in class, and how to properly improvise,” Josephine Welker (‘25) said. “We also had to write and practice our own skits and also practice the improv games over and over again until we had a really good understanding of what the games were.” There were three different types of improv games that happened throughout the night. They were Office Excuses, Party Quirk, and Dating Game. The first game that was played was Office Excuses. “Where there will be one person playing the boss while another person plays the employer,” Bethaney Streckfuss (‘23) said, “and they have to come up with a reason for why they were late and the people behind the boss come up for a reason why the employees were late.” “(In Party Quirk you) had to get the host and the audience to guess what your quirk was.” “(The Dating Game would) have the guesser choose or understand which character we (the participants) were playing as,” Welker said. Along with the improv games were the skits that the students came up with themselves. They had come up with two parody skits to perform that night. One of those skits was about Harry Potter and the other was about a controversial event that happened not too long ago, the Johnny Depp vs Amber Heard trial. The first skit was performed by Alyssa Bryant (‘23), Bug Scott (‘24), Oliver Ryberg (‘24) , Randi Carney (‘23), Sophia Binnix (‘23), and Josephine Welker (‘25). “It was about Harry and Ron hiding under the invisibility cloak and Draco catching them using the cloak and calling for Snape, but Hermione was able to make Snape fall asleep which allowed Harry and Ron to escape,” Bryant said when asked about her group’s skit. The second skit was performed by Alyssa Eichelberger (‘23), Streckfuss, Abigail Crawford (‘25) and Kaylei Smith (‘24). “So we decided to create a trial mocking Johnny Depp and Amber Heard because there were so many funny things and situations in that trial that happened in real life, so we thought it was great material for a skit. So we just kind of put in just a bunch of internet jokes and it was really fun coming up with the arguments that these celebrities have had in real life.” Streckfuss said. The overall experience of the Improv show was a good one for everyone involved and allowed the students to put themselves outside their comfort zone and try something new. “It was really fun and it was definitely a good learning experience with how to act up on the stage and also it felt really heartwarming that people actually found it funny and that we were doing stuff correctly in regards to improv,” Welker said.
https://holttribe.com/11471/uncategorized/improv-night/
Many groups exist within the bleeding disorder community, and each has unique needs. NYCHC’s volunteer committees exist to meet the needs of underrepresented groups, such as people with inhibitors, people with VWD, teens, and Spanish-speakers. Inhibitor: This committee seeks to bring those affected, previously affected and unaffected by inhibitors to educate, inform and create a supportive environment. VWD: This committee seeks to create a learning and community environment for the diagnosed and undiagnosed that encourages teaching, learning about the disorder as well as sharing experiences to better understand, promote and enjoy a healthy lifestyle and raise awareness. Latino Outreach: This committee seeks to advance the Latino and Spanish speaking community by fostering and empowering this community through events that are educational, social and informative. This includes events that are in Spanish and/or English in locations and on topics that interest this community. Teen/Young Adult: The committee seeks to engage teens and young adults in creative programming that focuses on living healthy lifestyles with the intent to keep them engaged as they move into adulthood.
https://www.nyhemophilia.org/get-involved/join-a-committee/
Converting a traditional IRA to a Roth IRA can offer tax-free growth and a way to withdraw funds tax-free in retirement. But what if you convert a traditional IRA and then discover you would have been better off if you hadn't converted it? Reasons to Recharacterize There are several reasons to reverse a Roth IRA conversion. Here are a few examples: - You lack sufficient liquid funds to pay the tax liability - The conversion combined with your other income has put you into a higher tax bracket - You expect your tax rate to go down in the near future or in retirement - The value of your account has declined since the conversion, which means you would owe taxes partially on money you no longer have Usually, if you extend your tax return when you convert to a Roth IRA, you have until October 15 of the following year to undo it. (For 2016 returns, the extended deadline is October 16 because the 15th falls on a weekend in 2017.) In some cases, it may make sense to undo a Roth IRA conversion and then redo it. If you want to redo the conversion, you must wait until the later of 1) the first day of the year following the year of the original conversion, or 2) the 31st day after the recharacterization. However, if you reversed a conversion because your IRA's value declined, there's a risk that your investments will bounce back during the waiting period. This could cause you to reconvert at a higher tax cost. Recharacterization in Action Nick had a traditional IRA with a balance of $100,000. In 2016, he converted it to a Roth IRA, which, combined with his other income for the year, put him in the 33% tax bracket. So normally he would have owed $33,000 in federal income taxes on the conversion in April 2017. However, Nick extended his return, and the value of his account dropped to $80,000. On October 1, Nick recharacterizes the account as a traditional IRA and files his return to exclude the $100,000 in income. On November 1, he reconverts the traditional IRA, whose value remains at $80,000, to a Roth IRA. He'll report that amount on his 2017 tax return. This time, he'll owe $26,400, deferred for a year and resulting in a tax savings of $6,600. If the $20,000 difference in income keeps him in the 28% tax bracket, or tax reform legislation is signed into law that reduces rates retroactively to January 1, 2017, he could save even more. If you convert a traditional IRA to a Roth IRA, monitor your financial situation. Contact your advisor if the advantages of the conversion diminish as a result.
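Since the figures in Nick's example are easy to mis-track, here is a minimal sketch in plain Python that reproduces the arithmetic above. It assumes, as the example does, that the whole converted amount is taxed at a single marginal rate; it is an illustration, not tax software.

```python
def conversion_tax(account_value, marginal_rate):
    """Federal income tax due on a Roth conversion, assuming the whole
    converted amount is taxed at a single marginal rate (as in the example)."""
    return account_value * marginal_rate

# Nick's original 2016 conversion: $100,000 taxed at 33%
original_tax = conversion_tax(100_000, 0.33)       # $33,000

# After recharacterizing and reconverting once the account is worth $80,000
reconversion_tax = conversion_tax(80_000, 0.33)    # $26,400

savings = original_tax - reconversion_tax          # $6,600, as in the example
print(f"Original conversion tax:      ${original_tax:,.0f}")
print(f"Reconversion tax:             ${reconversion_tax:,.0f}")
print(f"Saving from recharacterizing: ${savings:,.0f}")

# If the $20,000 lower income also drops Nick into the 28% bracket,
# the saving grows further.
savings_28 = original_tax - conversion_tax(80_000, 0.28)   # $10,600
print(f"Saving if the reconversion is taxed at 28%: ${savings_28:,.0f}")
```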
https://www.barneswendling.com/how-to-reverse-a-roth-ira-conversion/
Over the past few weeks, firms have been receiving feedback from the FCA following its suitability review started last year. With over 1,000 files from 700 firms examined, we are getting a clearer picture of just how well the industry is meeting suitability expectations. The initial noises appear to be broadly positive, which is encouraging. Our own experience mirrors this early feedback. There certainly seems to be much thought going into how firms get all the appropriate information across to clients in a way in which they can easily understand. But where do the challenges still exist? The FCA makes it clear what areas it is interested in from an advice perspective and these will come as no surprise: - Know your client - Research and due diligence - Recommendations for central investment propositions - Replacement business - Dealing with insistent clients. Charges disclosure While we are not seeing too much evidence of an increasing number of insistent clients for firms to deal with, there are some cautionary comments regarding CIPs and replacement business, particularly around value for money and clarity of charges disclosure. The key question is: Are you making it as clear as possible to your client what they are paying for and what that means in cash terms? The FCA has pointed out time and again in its guidance, principles, Cobs and forthcoming Mifid II requirements that: "If you are unable to answer 'yes' to the following questions, you may not be meeting our requirements: - Can your clients understand your charging structure? - Do you disclose your initial and ongoing charges in cash terms? - If your charge is a percentage do you provide cash examples? - If you charge an hourly rate do you provide indicative examples?" Investment committees Moving on to the issue of research and due diligence, we are aware most firms, regardless of size, now have an investment committee to review the approach, effectiveness and performance of their investment strategies. A number of firms run this solely as an in-house function; others have external input. The important point here is that the committee has clear terms of reference relevant to the firm and its clients and that it meets those requirements in relation to areas covered, frequency and attendance. Perhaps the most critical aspect, however, is the quality and clarity of the decision-making process. As you would expect, concise, well-documented minutes must be able to clearly show the course of action taken and the rationale. Technology Finally, while most of the firms we looked at provide a hands-on, face-to-face service, almost all are considering some form of technology enhancement and, in some cases, a robo-advice solution. This poses a further challenge. If suitability advice standards have improved (as appears to be the case), this would suggest firms' compliance functions are having a positive impact. This confirms our view that the compliance function is maturing and moving its position from administrative box-ticker to business partner, with a seat at the top table to influence future strategy with a proactive stance rather than reacting to problems. Maintaining independence will be key. If the drive to provide more technology solutions that affect client outcomes continues, then compliance functions will need to develop new skills to be able to keep pace and continue to drive standards forward. Now is not the time to be resting on laurels.
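To illustrate the "cash terms" questions above, here is a minimal sketch using a hypothetical client and hypothetical charge levels (none of these figures come from the FCA) that turns a percentage charging structure into the pound amounts a client would actually pay.

```python
def charges_in_cash(investment, initial_pct, ongoing_pct, years):
    """Translate percentage charges into indicative cash figures.

    Simplified on purpose: the ongoing charge is applied to the initial
    investment each year, ignoring growth, to keep the example readable."""
    initial_charge = investment * initial_pct / 100
    annual_ongoing = investment * ongoing_pct / 100
    total = initial_charge + annual_ongoing * years
    return initial_charge, annual_ongoing, total

# Hypothetical client: £100,000 invested, 2% initial charge, 0.75% ongoing, 5 years
initial, ongoing, total = charges_in_cash(100_000, 2.0, 0.75, 5)
print(f"Initial charge: £{initial:,.0f}")                # £2,000
print(f"Ongoing charge: £{ongoing:,.0f} per year")       # £750
print(f"Indicative total over 5 years: £{total:,.0f}")   # £5,750
```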
https://www.moneymarketing.co.uk/34-simon-collins-room-improvement-suitability/
The lab automation market is expected to reach $5.20 billion by 2022 from an estimated $4.06 billion in 2017, at a CAGR of 5.1%, according to leading B2B research company MarketsandMarkets. The Study Objective: The objective of the new report is to analyze lab automation with the aim of: - Defining, describing and forecasting the lab automation market by equipment and software, application, end user and region; and - Providing detailed information on the major factors influencing the growth of the market (drivers, restraints, opportunities and challenges). Methodology: The report analyzes lab automation, by equipment and software, in six primary categories: - Automated workstations; - Off-the-shelf automated workcells; - Robotic systems; - Automated storage & retrieval systems; - Other equipment; and - Software. It also looks at the lab automation market by application, including: - Drug discovery; - Clinical diagnostics; - Genomics solutions; - Proteomics solutions; - Microbiology; and - Other applications. Findings: Largest Market Share, 2017 The automated workstations segment currently accounts for the largest share of the lab automation market, according to MarketsandMarkets. The firm cites high demand for automation in liquid handling as the key factor driving market growth in this segment. It notes that automated workstations offer advantages such as enhanced accuracy and reduced time and cost. Findings: Projected Growth, 2022 Based on applications, the genomics solutions segment is expected to grow at the highest CAGR during the forecast period. The firm finds that use of automation is on the rise in genomics for high-throughput requirements, providing greater reproducibility and throughput as compared to manual methods. Source of Growth North America currently commands the largest share of the global lab automation market. MarketsandMarkets cites: - Increasing adoption of lab automation systems; - Implementation of the Affordable Care Act (ACA) in 2010; - Economic stimulus programs such as increased funding for the National Institutes of Health (NIH) and National Science Foundation (NSF); and - Increased R&D activities by biotechnology and pharmaceutical companies as drivers of market growth in North America. Key Market Players The firm also identifies companies serving as major players in the lab automation market, including: - Tecan Group (Switzerland); - PerkinElmer (US); - Danaher (Beckman Coulter & Molecular Devices) (US); - Thermo Fisher (US); - Agilent Technologies (US); - Hamilton Robotics (US); - Abbott Diagnostics (US); - Eppendorf (Germany); - QIAGEN (Netherlands); - Roche Diagnostics (Switzerland); and - Siemens Healthcare (Germany). For more information, and the full report, visit the MarketsandMarkets website.
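As a quick sanity check on the headline figures, the short sketch below recomputes the CAGR implied by the 2017 and 2022 market sizes quoted above (the compound-growth formula is standard; the dollar figures come from the report summary).

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1 / years) - 1

implied = cagr(4.06, 5.20, 2022 - 2017)
print(f"Implied CAGR: {implied:.1%}")           # roughly 5.1%, matching the report

# Running the growth forward from the 2017 base reproduces the 2022 estimate
projected_2022 = 4.06 * (1 + 0.051) ** 5
print(f"Projected 2022 market size: ${projected_2022:.2f} billion")   # about $5.21 billion
```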
https://www.g2intelligence.com/new-study-finds-lab-automation-market-poised-for-continued-growth-focus-on-genomics-solutions/
The volume that gets enclosed within walls and is habitable is normally termed space. The volume that gets enclosed in an architectural structure is just a tiny fraction of the vast amount of "universal space". By universal space I mean the cosmos, the gigantic vastness in which our planet and all other planets, stars, etc. survive. The volume which gets contained in a building varies according to the use of the building. A cinema theatre and a bedroom will surely have different purposes and hence different volumes. Volume gets defined by three factors: the length, breadth and height of the habitable room. But irrespective of the volume, both types of buildings mentioned above have "spaces" enclosed within. Let's consider the bedroom for the sake of this article. A typical bedroom will have certain architectural elements attached to it at the time of construction of the building and certain elements "imposed" later for the sake of proper function. Also, it is important to note that no one builds bedrooms in isolation. A bedroom is always a part of the entire home plan. The architectural elements already present in a bedroom would be an attached toilet and its entrance door, an attached terrace or backyard entry, or an attached study room. All these spaces play a supplementary role in a bedroom. Now the "imposed" elements are the furniture in the room and the other accessories that will occupy the space in a bedroom. A typical bedroom will have a double bed, wardrobe, dressing table, side tables, study table, bookshelf, etc. All these are necessary to use the room in a comfortable way. When this furniture is arranged in a room, what remains is termed the "circulation space". Most people think that the volume of the furniture and the remaining empty space must have a balance between them, because it is not the occupied volume but the empty volume that decides the comfort level inside a room. If you visit a storeroom in the basement of a house, which is normally used to dump unused things, you will notice that very little empty space remains and hence the comfort level there is poor. So in interior design the "empty space" is equally important. Now here comes the concept of positive and negative spaces. Normally a negative space is considered a space which cannot be used for a specific human activity. But this is not true. As said above, the overall comfort level in a room is determined by both the occupied as well as non-occupied spaces. So any smallest piece of floor area that is not being occupied is going to contribute to the comfort level. That's why the terminology of positive and negative, as far as usability is concerned, becomes a personal matter. What you think of as a useless and non-functional space can become a good place for your kid and their friends when they play hide and seek. It's just the viewpoint from which you look at things that matters. Also, a little creativity can turn a non-functional space into a functional one. But remember what I said earlier: in interior design, empty space matters. It is not just beautiful furniture and costly paints/wallpapers/paintings that are going to decorate your rooms. Finally it boils down to only one thing: YOU and YOUR mental as well as physical health in that space. I hope this article was helpful to everyone.
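To make the balance between occupied and empty floor area concrete, here is a small illustrative calculation with hypothetical room and furniture dimensions (the numbers are assumptions, not design standards) showing how much circulation space is left once typical bedroom furniture is placed.

```python
# Hypothetical bedroom: 4.0 m x 3.5 m floor plate (illustrative figures only)
room_area = 4.0 * 3.5  # 14.0 sq m

# Approximate floor footprints of typical bedroom furniture, in square metres
furniture = {
    "double bed": 2.0 * 1.6,
    "wardrobe": 1.8 * 0.6,
    "dressing table": 1.0 * 0.45,
    "study table": 1.2 * 0.6,
    "side tables": 2 * (0.45 * 0.45),
}

occupied = sum(furniture.values())
circulation = room_area - occupied
print(f"Occupied floor area:       {occupied:.2f} sq m")
print(f"Circulation (empty) space: {circulation:.2f} sq m "
      f"({circulation / room_area:.0%} of the room)")
```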
https://adesigninspiration.com/the-concept-of-positive-and-negative-spaces-in-interior-design/
Ireland is open for business and is actively committed to harnessing its abundant wave, tidal and offshore wind energy resources while developing an indigenous ocean energy industry in the process. The publication of the Offshore Renewable Energy Development Plan in 2014, and its ongoing implementation through the Offshore Renewable Energy Steering Group, has had the benefit of facilitating a genuinely collaborative environment in this area. All relevant agencies and Government departments are working together to support this burgeoning sector and offering one single gateway for information and access to the ocean energy industry in Ireland. Ireland has a unique ladder of development and test site infrastructure, which was significantly enhanced in 2015. The importance of supporting technology developers while also investing in academic research has been well recognised, and the past year has seen tangible progress in both areas with some flagship projects already underway. SUPPORTING POLICIES FOR OCEAN ENERGY NATIONAL STRATEGY The Offshore Renewable Energy Development Plan (OREDP) The Irish Government’s Department of Communications, Energy and Natural Resources (DCENR) published the Offshore Renewable Energy Development Plan (OREDP) in February 2014 (http://www.dcenr.gov.ie/energy/en-ie/Renewable-Energy/Pages/OREDP-Landing-Page.aspx). The OREDP highlights the potential opportunities for the country in relation to marine energy at low, medium and high levels of development, as derived from the findings of the Strategic Environmental Assessment of the Plan carried out prior to publication. The OREDP, as a policy document, sets out the key principles, specific actions and enablers needed to deliver upon Ireland’s significant potential in this area. Accordingly, the OREDP is seen as providing a framework for the development of this sector. The overarching vision of the Plan is “Our offshore renewable energy resource contributing to our economic development and sustainable growth, generating jobs for our citizens, supported by coherent policy, planning and regulation, and managed in an integrated manner” (DCENR, 2014). The Plan is divided into two parts. The first part deals with the opportunities, policy context and next steps, including 10 key enabling actions for the development of the sector. The second part focuses on the Strategic Environmental and Appropriate Assessment of the Plan. The implementation of the OREDP will be led by the DCENR and the Offshore Renewable Energy Steering Group (ORESG) is actively overseeing its implementation. The Steering Group consists of the main Government departments and agencies with roles and responsibilities that relate to energy and the marine environment, developers and broader interest and user groups when necessary. The Group reports directly to the Minister and the Plan will be reviewed before the end of 2017. The work of the ORESG, and hence the implementation of the OREDP is organised according to three work streams: Environment, Infrastructure and Job Creation. The Job Creation working group has responsibility across several actions, including identifying additional exchequer support requirements, supply chain development and communicating the message that ‘Ireland is Open for Business’. Under the Environment work stream, the group ensures the needs of the marine energy industry are reflected in the on-going reform of the foreshore and marine consenting process. 
The actions deriving from the SEA and AA of the OREDP will also be taken forward under this work stream to ensure that future marine energy development takes place in an environmentally sustainable manner. The Infrastructure working group concentrates on supporting and delivering objectives of other policies such as the National Ports Policy and Grid 25 so as to expedite integrated infrastructure development which will facilitate the offshore renewable energy sector. Ireland’s Transition to a Low Carbon Energy Future 2015 - 2030 The White Paper ‘Ireland’s Transition to a Low Carbon Energy Future 2015-2030’, published by DCENR in 2015, is a complete update on Ireland’s wider energy policy. This paper sets out a framework to guide policy and the actions that Government intends to take in the energy sector from now up to 2030, while taking European and International climate change objectives and agreements, as well as Irish social, economic and employment priorities, into account. The White Paper anticipates that ocean energy will play a part in Ireland’s energy transition in the medium to long term and reiterates the OREDP’s status as the guiding framework for developing the sector. Ocean Energy Portal The Ocean Energy Portal was launched in November 2014, and has been significantly updated and enhanced throughout 2015. The portal acts as a ‘sign-post’ to guide interested parties, internal and from abroad, through the supports available in Ireland for the development of the marine renewable energy sector. All information is aligned under six axes of activity which provide access to marine data, maps, tools, funding and information relevant to renewable energy site assessment, development and management. Since its launch, the Portal has become the “first stop shop” to which all developers can engage with relevant support sectors in Ireland and from where they can obtain the most relevant and up to date information (www.oceanenergyireland.ie) MARKET INCENTIVES Under the Job Creation work stream of the OREDP, one of the key actions is the introduction of Initial Market Support Tariff for Ocean Energy. It is envisaged that this will be equivalent to €260/MWh and limited to 30MW for ocean (wave and tidal), focusing on pre-commercial trials and demonstration. In July 2016, DCENR published a Technology Review Consultation, the first stage in a review of renewable electricity support schemes. The objective of this process is, where a clear need is demonstrated, to develop a new support scheme for renewable electricity to be available in Ireland from 2016 onwards, to support the delivery of Government policy, while taking account of the broader emerging policy context, such as the Energy Policy White Paper, the transition to the target market, the EU 2030 Climate and Energy Framework and State Aid guidelines, the Energy Union package and the European Energy Security Strategy. The development of the wave and tidal market support tariff is included as part of this process. PUBLIC FUNDING PROGRAMMES SEAI Prototype Development Fund The OREDP reiterates the focus on stimulating industry-led projects for the development and deployment of ocean energy devices and systems through the support of the Sustainable Energy Authority of Ireland’s (SEAI) Prototype Development Fund. The objectives of this programme are to accelerate and enhance support for the research, development, testing and deployment of wave and tidal energy devices. 
Sixty five technology projects have received support from SEAI since the programme was launched in 2009. Fifteen new projects were awarded grants totalling €4.3 million through the Prototype Development Fund in 2015. Successful applicants include Ocean Energy Ltd., who secured €2.3 million to design and build a full scale version of their OE Buoy wave energy converter which will be deployed and tested at the US Navy Wave Energy Test Site in Hawaii. Other examples include SeaPower, who will receive over €1 million to test their wave energy converter at quarter scale in Galway Bay, while GKinetic Energy were awarded almost €200,000 to conduct towing tests of their tidal turbine system in Limerick Docks. Other projects include physical tank testing of early stage wave energy convertor concepts and feasibility studies of potential deployment sites. OCEANERA-NET The ERA-NET scheme is an innovative component of the European Union’s Framework Programme, which supports cooperation of national/regional research funding programmes to strengthen the European Research Area (ERA). SEAI is a participant in the OCEANERA-NET, along with 16 funding Agencies from 9 European countries. The first OCEANERA_NET joint call commenced in late 2014, and a number of Irish partners were involved in successful project proposals. A second joint call was launched in February 2016. SEA TEST SITES Ireland has a unique ladder of development and test site infrastructure, allowing developers to move from laboratory test facilities at the Lir National Ocean Test facility in Cork, to a quarter scale test bed in Galway Bay and to a full test facility at the Atlantic Marine Energy Test Site (AMETS) near Belmullet, Co. Mayo. Significant steps were taken to further develop these facilities in 2015. Galway Bay Ocean Energy Test Site Ireland’s ¼ scale ocean energy test site is located within the Galway Bay Marine and Renewable Energy Test Site and is situated 1.5km offshore in water depths ranging from 20m – 23m. The site has provided test and validation facilities for a number of wave energy devices and components to date. 2015 saw the installation of a subsea observatory at the site, with a four kilometre cable providing a physical link to the shore at Spiddal, Co. Galway. The ocean observatory enables the use of cameras, probes and sensors to permit continuous and remote live underwater monitoring. The cable supplies power to the site and allows unlimited data transfer from the site for researchers testing innovative marine technology including renewable ocean energy devices. The installation of this infrastructure was the result of the combined efforts of the Marine Institute, SEAI, the Commissioners of Irish Lights, Smartbay Ireland and the Marine Renewable Energy Ireland (MaREI) Centre. The project was part-funded under the Science Foundation Ireland (SFI) “Research Infrastructure Call” in 2012. Separately, SEAI announced a Memorandum of Understanding with Apple in November 2015 to promote the development of ocean energy in Ireland. Apple has committed a €1 million fund that will help developers who receive a SEAI grant to test their ocean energy prototypes in the Galway Bay Ocean Energy Test Site. Atlantic Marine Energy Test Site (AMETS) The Atlantic Marine Energy Test Site (AMETS) is being developed by SEAI to facilitate testing of full scale wave energy converters in an open and energetic ocean environment. AMETS will be located off Annagh Head, west of Belmullet in County Mayo and will be connected to the national grid. 
It is currently envisaged that the site will provide two separate test locations at water depths of 50m and 100m to allow for a range of devices to be tested, though the potential to facilitate testing at shallower depths or the testing of other technologies such as floating wind is being investigated. The infrastructure to support testing at AMETS continues to be advanced, and it is expected that an application for planning permission for the onshore aspects of the site, including the electrical substation, will be submitted in early 2016. Crucially, the Foreshore Lease for AMETS was signed by the Minister for the Environment, Community and Local Government in late 2015. This was the culmination of a detailed assessment and approval process and provides the legal basis for operating the test site.
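To put the proposed market support tariff into rough numbers, the sketch below multiplies out the €260/MWh figure for a single hypothetical 1 MW pre-commercial device; the 30% capacity factor is purely an assumption for illustration and is not taken from the OREDP or the consultation.

```python
def annual_support(capacity_mw, capacity_factor, tariff_eur_per_mwh):
    """Indicative annual support revenue for a wave or tidal device."""
    hours_per_year = 8760
    energy_mwh = capacity_mw * capacity_factor * hours_per_year
    return energy_mwh * tariff_eur_per_mwh

# Hypothetical 1 MW pre-commercial device at an assumed 30% capacity factor
revenue = annual_support(1.0, 0.30, 260)
print(f"Indicative annual support: EUR {revenue:,.0f}")   # about EUR 683,000
# Note: the scheme as outlined is capped at 30 MW of ocean energy in total.
```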
https://report2015.ocean-energy-systems.org/country-reports/ireland/
I am an Aegean archaeologist interested in gender, material culture and cultural and public engagement. I have been an Honorary University Fellow in the Classics and Ancient History Department since 2013. While here, I have conducted independent research producing peer-reviewed articles and chapters in academic outlets, presented research at international conferences and contributed to teaching. As a teacher here at Exeter, I have created, delivered and coordinated research led modules for under- and postgraduates, such as ‘Gender in Late Bronze and Iron Age Greece’ (Term 1, 2016) and the MA module, ‘Interpreting material culture’ for the Research Methodologies module (Term 1, 2015 and 2016). I also co-taught the 1st-year seminar, ‘Greek temples’ (Term 2, 2016). Additionally, I have worked as the Finds Supervisor at Ipplepen as part of the Devon Archaeological Field School, Dept of Archaeology (Summer seasons 2015-2017), overseeing all aspects of finds processing for the excavation and supervising students. I have also worked as a Curator's Assistant in the Ethnographic Department at Exeter's Royal Albert Memorial Museum (RAMM) on the Discovering Worlds Project (2015-2016). As the research-lead on this project, I oversaw and undertook research on RAMM's Melanesian modified crania in the Pacific collection. As part of the project, I initiated and oversaw a collaboration between the Museum and the Department of Archaeology. My PhD investigated the materiality of gender in Middle and Late Helladic mortuary behaviour in the Aegean (Institute of Archaeology, UCL, 2013). My doctoral thesis showed how the materiality of gender, status and mortuary ideology changed among different groups across space and time during the Middle and Late Bronze Age (c. 2100-1100 BC) in the Aegean. Research interests My research interest is in the materiality of gender and mortuary behaviour in Bronze and Iron Age Greece. Publications: (Chapter).K. E. Leith. Forthcoming 2017. Threads of lives: the deposition of spindle whorls and shifting gender identities in Middle Helladic and Mycenaean burial practice. In Between Life and Death: Interactions between Burial and Society in the Ancient Mediterranean and Near East (Proceedings of the International Conference held at the University of Liverpool, 11th – 12th May, 2011), BAR International Series. Oxford: Archaeopress. (Article) Leith, K. E. 2016. 'Manly hearted' Mycenaeans' (?): challenging preconceptions of warrior ideology in Mycenae's Grave Circle B. Journal of Greek Archaeology(1). Research collaborations I recently worked as a researcher and Curator's Assistant under the direction of the Curator of Ethnography, Tony Eccles, at the Royal Albert Memorial Museum in Exeter on the Discovering Worlds project. This project investigated the Melanesian modified crania in the Museum's Pacific Collection -- objects that had spent most of the 80-odd years since their donation packed safely away in store and hidden from view. In this role, I instigated and acted as research lead on a collaboration between RAMM and the University of Exeter's Department of Archaeology, which facilitated the bioarchaeological analysis of RAMM's Melanesian skulls. For this project, I undertook research on the ethnographic and anthropological context of the skulls, as well as the donor histories; Dr. Catriona McKenzie conducted the macroscopic analysis; Prof. Alan Outram performed a use-wear and modification analysis, Dr. 
Linda Hurcombe analysed the materials used for decoration; and Professor of Radiology Iain Watt facilitated the CT scanning and x-ray of the skulls. The research was presented at the 'Curating Human Remains' Conference at the University of Bristol (2016) and has produced an article 'The materiality of mana: the research and display of RAMM's Melanesian modified crania' (in preparation). On 7 March at 1pm, the project will be discussed as part of a RAMM Lunchtime Talk, which is open to the public. Fieldwork 2015 - present: Finds Supervisor, Ipplepen Archaeological Project (University of Exeter) 2006 and 2007: Trench Assistant and Finds Processor, Lefkandi Excavation Project, Euboea, Greece. Under the direction of Prof. Irene Lemos (Oxford), Oxford and the British School at Athens 2006 and 2007: Fieldwalking and Finds Processing, Knossos Urban Landscape Patterns, Heraklion, Crete, Greece. Under the direction of Prof. Todd Whitelaw (UCL), Institute of Archaeology at UCL and the British School at Athens Teaching I taught the research-led third-year seminar 'Gender in Mycenaean and Iron Age Greece' (CLA3120) and the MA Research Methodologies module 'Interpreting Material Culture' (CTM007) during Term 1 2016. As the Finds Supervisor for the Ipplepen Archaeological Dig and Field School (2015-present), I supervise students in the field, teaching them about the processing of material culture and microartefacts. For the 2015/2016 academic year, I taught the MA Research Methodologies module 'Interpreting Material Culture' (Term 1) and the Greek Temples Seminar (Term 2) in the Department of Classics and Ancient History here at the University of Exeter. During Lent Term 2013, I was a Supervisor for the third-year undergraduate course - D1, Aegean Prehistory, under the direction of Dr. Yannis Galanakis, Faculty of Classics, University of Cambridge. Modules taught Biography When I was 12, I had a plan: become a ballerina, retire at 30, then become an archaeologist. That's the simple version of what has happened so far... After dancing professionally as a contemporary dancer and founding and running a Los Angeles-based PR consultancy (1996-2003), which created bespoke marketing and audience development strategies for the performing arts, music and charity sectors, my interest in culture and gender compelled me to pursue postgraduate study in London, where I received an MA in Classics from KCL in 2005 and a PhD from UCL in Archaeology in 2013. Since then, I have worked in the Higher Education and Heritage & Culture sectors as a researcher and teacher: undertaking academic research projects, producing peer-reviewed publications, presenting research at international conferences and seminars, creating and delivering learning content as a teacher at the Universities of Cambridge (2013) and Exeter (2015-2017), overseeing multi-organisation collaborations and gaining extensive public engagement experience. I am married to Dr. David Leith (Lecturer, Classics and Ancient History).
http://humanities.exeter.ac.uk/classics/staff/kleith/
The Property Management Module is designed as a property database for organisations with large property management portfolios. This paper provides an overview of the module, describing its features and functionality. Using this structured database, users can manage space allocations within their portfolio for employees and/or customers. Customers' space can be charged via an interface to the Accounts Receivable package, or invoices for rent can be generated and interfaced to the Accounts Payable Module. The database will hold various types of data including features, contacts, tenancies, leases and other documentation such as insurances, rights, obligations and milestones. The module also has the standard flexfield functionality found in other Oracle Modules for the capture of user-defined data. Property Definition The database of properties has an inbuilt hierarchy starting at the top by grouping properties within regions and office parks. A property is then defined as a combination of land detail and building detail. Land can be subdivided into parcels. Land details include area, conditions and features that can be recorded against each land or parcel record. The use of flexfields would allow for the recording of asset numbers, title details, etc. Building details can include address, tenure, user-defined class and status. Additional details include rentable, useable and assignable area, occupancy area, features and contacts. Buildings can be further subdivided into floors, and again into offices, with the same level of detail being recorded against each individual record at each level. Assignments Properties can be assigned to either employees or customers at any of the levels described above. With each of these assignments, cost centres or GL codes can be associated for revenue or costing purposes. The facility to record Project/Task/Organisation data has also been provided. The system provides statistical information on the assignments and total occupancy at the building, floor and office levels. Query screens are provided to enquire on the assignment data at each of the levels. Leases and Documents Lease details and various classifications of documents can be associated with each building, floor or office. The details of these documents can be recorded and may form the basis for either billing customers through Accounts Receivable or the payment of rent through Accounts Payable. Recurring invoices for either Accounts Receivable or Accounts Payable can be constructed within this module and, after approval, interfaced directly to the other modules. Approval can be dated into the future. Detail in these screens is extensive, with tabs for Detail, Contacts, Locations, Insurances, Rights, Obligations, Options, Billings and Notes. There is a facility to record milestones, which will generate notifications to responsible officers of specific occurrences, with lead times set by the user. Using the details from Lease Documents, the system will generate individual entries for export to Accounts Payable and Accounts Receivable. Each entry is created as a draft and must be approved prior to export. Details of these entries are maintained in Property Manager. Agents The system also maintains a list of agents or contacts related to each property. These contacts can be used throughout the module. They are divided into Customers, Suppliers and Employees. Using the standard integration of Oracle Applications, these contacts are maintained in their respective modules.
A contact role is available for allocation throughout the Property Module. Reports Standard reports with the system are divided into Space, Rent, Employee, Lease and other reports. The system has some 30 standard reports including 7 RXi reports. Specific reports based on client needs will require the use of Discoverer or another reporting tool. The module also has facilities to import and export data to CAD models for space allocations and locations. In General The Property Management Module can be interfaced to both the Accounts Payable and Accounts Receivable modules and, if required, linked with the Fixed Asset module using flexfields. The module is not used for maintenance cost recording; however, property numbers could be referenced in Accounts Payable using flexfields for the creation of a property cost reporting mechanism. Our client uses the system as a Property Register, which records all possible information about a property. Security suppliers, cleaning suppliers, landlords, insurances, renewal dates, etc. can all be recorded and found quickly. The major benefit from the system is the billing of rent to customers. Our client is a water and power utility that rents space for communications towers on various infrastructure assets. Tracking all the rent invoices on a monthly basis was previously done using a spreadsheet. The Property Management Module allows for the tracking of what has or has not been billed and, in conjunction with the Accounts Receivable module, what has or has not been paid.
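The hierarchy described above (regions and office parks down to buildings, floors and offices, with assignments carrying cost centres) can be pictured with a small sketch. The Python below is purely illustrative: it is not Oracle Property Manager's data model, schema or API, just a toy structure showing how occupancy might be rolled up from assignments.

```python
# Purely illustrative sketch of the hierarchy described above; this is NOT
# Oracle Property Manager's data model, schema or API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Office:
    name: str
    assignable_area: float               # e.g. square metres
    assigned_to: Optional[str] = None    # employee or customer occupying the space
    cost_centre: Optional[str] = None    # GL/cost centre charged for the assignment


@dataclass
class Floor:
    name: str
    offices: List[Office] = field(default_factory=list)

    def occupancy(self) -> float:
        """Fraction of assignable area on this floor that is currently assigned."""
        total = sum(o.assignable_area for o in self.offices)
        used = sum(o.assignable_area for o in self.offices if o.assigned_to)
        return used / total if total else 0.0


@dataclass
class Building:
    name: str
    floors: List[Floor] = field(default_factory=list)


# Example: assign one office to a customer and report floor occupancy
ground = Floor("Ground", [Office("G.01", 25.0), Office("G.02", 30.0)])
ground.offices[0].assigned_to = "Customer A"
ground.offices[0].cost_centre = "CC-1001"
hq = Building("Head Office", [ground])
print(f"{hq.name} / {ground.name} occupancy: {ground.occupancy():.0%}")   # 45%
```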
https://know-oracle.com/2010/04/23/oracle-property-manager-introduction-overview/
Speaking in a debate on green energy yesterday (Thursday) in the Scottish Parliament, Mary Scanlon, Scottish Conservative MSP for the Highlands and Islands, highlighted the significant amount of investment in Scottish wind farms which goes abroad and the difficulties communities face in being heard during planning applications for wind farms. Commenting after the debate, Mary said: "There is undoubtedly a need to harness and expand renewable energy but the SNP government's proposal for 100% of our energy to be delivered from renewable sources by 2020 is a serious concern in local communities who fear every application, no matter the suitability, will be granted to help meet this target. "The issue of individuals' and communities' opportunities to highlight concerns about applications has been raised with me several times and I took the opportunity to highlight the difficulties people face. "It is surely the worst form of consultation when developers can take as much time as they like to write Environmental Statements (a recent example was 2,100 pages), yet local people only have 28 days to read it, digest the information and respond. During this time they clearly need to view the document but if they can't get to their local authority office they might have to buy a copy costing in the region of £850. "We also have to consider how beneficial the construction of wind farms is for our local communities when at the present time around 70% of the money for wind farms goes abroad for the manufacture of turbines and towers. "This means for an average £50m wind farm, approximately £35m will go abroad. It's also the case that the employment they bring to a local community is limited. During construction there may be several jobs, but once completed a large wind farm can be run by two or three staff with technicians called in for maintenance; they are certainly not the answer to stimulating the jobs economy." This article is the work of the source indicated. Any opinions expressed in it are not necessarily those of National Wind Watch. The copyright of this article resides with the author or publisher indicated. As part of its noncommercial effort to present the environmental, social, scientific, and economic issues of large-scale wind power development to a global audience seeking such information, National Wind Watch endeavors to observe "fair use" as provided for in section 107 of U.S. Copyright Law and similar "fair dealing" provisions of the copyright laws of other nations. Send requests to excerpt, general inquiries, and comments via e-mail.
https://www.wind-watch.org/news/2011/06/04/msp-calls-for-communities-to-have-greater-say-in-wind-farm-applications/
The controversial wind farm proposed for Monasterevin has been refused permission. Kildare County Council refused the application due to Air Corps flight path, ecological and road network concerns. Over 170 submissions were made by local residents, politicians, Kildare Failte, the Irish Peatland Conservation Council and the Irish Aviation Authority. Ummeras Wind Farm Ltd (Statkraft) wants to build five 169m turbines between Rathangan and Monasterevin in the townlands of Ummeras Beg, Coolatogher, Mullaghroe Lower, Ummeras More and Coolsickin/Quinsborough. Residents raised concerns about the possible impact the facility would have on the Ballykelly distillery project, the €5m investment in the Blueway Grand Canal cycle and walkways, and €100,000 earmarked for design proposals of the Umeras Peatland Park. The Monasterevin and Rathangan Wind Awareness group said: "You couldn't pick a worse location for turbines in Ireland if you tried – in the middle of a new tourism hub with over 60 million being invested by the county councils of Kildare, Laois and Offaly on Blueways and Bono on Ballykelly Mills Distilleries and the Just Transition Fund and Kildare Leader on Umeras Peatlands Park." It believes this project won't create any permanent local jobs and will damage tourism, which can bring the streets of Monasterevin and Rathangan back to life with cafes, restaurants, pubs, bike and kayak hire, B&Bs, AirB&Bs, hotels, and many other spin-off opportunities. The group also contends the proposal contravenes the Kildare County Development Plan because turbines are not permitted at the Grand Canal protected scenic route and Umeras Bridge scenic view. Kildare Failte also lodged an objection, saying the proposed development is inappropriate for this new tourism hub, and recommended it be refused. The Irish Peatland Conservation Council raised concerns about the possible impact on wildlife, while the Irish Aviation Authority recommended the developers engage with Clonbullogue airport to ensure there is no adverse impact on their activities. Among the local politicians to raise concerns were Kildare TD Patricia Ryan, Senator Fiona O'Loughlin, Cllr Anne Connolly and Cllr Noel Connolly. In response, a Statkraft company spokesperson previously stated that while the planning application was before Kildare County Council, it could not comment on its specifics. However, it did say Statkraft doesn't have any plans to extend the Ummeras wind farm, if successful in its application. "The company has consulted and is continuing to consult extensively with all those living in the vicinity of the proposed Ummeras wind farm. This intensive programme of work began in March 2020 and, while disrupted due to Covid restrictions, an online virtual consultation platform has recently gone live," he says. "This virtual platform along with a host of accurate project information can be accessed on www.ummeraswindfarm.ie." The company says all 180 households living within 1.7km of the five proposed sites have been either visited or had information posted to them – since calling door-to-door was rendered impossible due to Covid. It says ideas have been sought on the use of a community benefit fund which would be worth in excess of €150,000 a year to the area. Among proposals put forward by locals are improvements to broadband, development of an energy efficiency scheme, support for a men's shed and a local community garden initiative.
Statkraft says it also supported a 'Meals on Wheels' initiative in the immediate area during the early days of the pandemic. It stressed Statkraft is eager to engage with locals, develop meaningful relationships with communities and is an experienced, professional developer of renewable energy with an excellent track record. This article is the work of the source indicated. Any opinions expressed in it are not necessarily those of National Wind Watch. The copyright of this article resides with the author or publisher indicated. As part of its noncommercial effort to present the environmental, social, scientific, and economic issues of large-scale wind power development to a global audience seeking such information, National Wind Watch endeavors to observe "fair use" as provided for in section 107 of U.S. Copyright Law and similar "fair dealing" provisions of the copyright laws of other nations. Send requests to excerpt, general inquiries, and comments via e-mail.
https://www.wind-watch.org/news/2021/03/19/controversial-kildare-wind-farm-refused-permission/
We had a meeting with the developer of Little Raith Wind Farm (Kennedy Renewables) and their PR/Political Lobbying group (Invicta PA) back in October. Unfortunately, most of our questions couldn't be answered at the time, but the developer took note of the issues and concerns raised, notified us they would get back to us regarding our queries, and asked that we also put our concerns in writing. We contacted the developers and the four Community Councils on the 27th January 2012 with our questions, as well as questions and concerns received from members of the local community (published below). Unfortunately the email address listed for Cowdenbeath Community Council on the Fife Council website is wrong, but Lochgelly, Lumphinnans and Auchtertool received our email, as did the developer. As yet, we have still to receive a reply. Questions - Bats are at risk from Industrial Wind Turbines, as the change in air pressure ruptures their lungs 1 2 3 4; at the Gelly Loch there is a thriving bat community. What research was conducted to identify the numbers of bats at the Gelly Loch that will be directly threatened by Little Raith Wind Farm? Please provide a copy of any research. - The Gelly Loch is home to over 1,200 wintering wildfowl and is used on passage by pink-footed geese during the winter months 5, and the FWS calculate that around 37 birds are killed per year per turbine in Europe 6. Numbers may be understated due to developers not accurately reporting or covering up bird kills 7. What research was conducted into the bird life at Gelly Loch and what precautions are the developers undertaking to ensure their safety? - How many bird and bat kills from Industrial Wind Turbines are acceptable within the Little Raith Wind Farm development? - A condition was attached to the development that stated if benzene levels were to increase in the local area, Little Raith Wind Farm had to be closed down and the turbines removed 8. This condition was appealed, and now you only have to monitor the benzene levels 9. If benzene levels are increased due to the wind farm development, what protections will be offered to the local communities, and what action will you take to prevent future increases in benzene levels? - Noise pollution testing was conducted using the dBA weighting; however, research indicates that the dBC weighting should be used to more accurately predict Low Frequency Noise 10. What tests (if any) have been conducted on the likely Low Frequency Noise Pollution in the local communities? - If no tests into Low Frequency Noise Pollution have been conducted, or if tests were conducted using the dBA method, will the developers agree to carry out tests using the dBC method? - If Noise Pollution or Low Frequency Noise Pollution becomes a problem in the local communities, where do people complain to remedy the situation? - What safety mechanisms are in place to prevent noise and low frequency noise pollution? - You claim that Little Raith Wind Farm will generate enough power for 14,500 homes 11. Is this based on the nameplate capacity of all the turbines or based on expected energy production levels? - It has been well documented by a variety of sources that Industrial Wind Farms only produce around 20% of their nameplate capacity 12 13 (an illustrative calculation follows these questions). What percentage of production levels are you expecting for Little Raith Wind Farm? - Claims have been made that Little Raith Wind Farm will help reduce Fife's emission levels by 25% 14.
What factors were considered when calculating the CO2 reduction levels for Little Raith Wind Farm? - Did the calculations include the neodymium extraction process 15, neodymium being a crucial component of the Little Raith industrial turbines? - Fife Council guidelines state a 2km setback zone for developments over 25MW 16, which is in line with the Scottish Executive's recommended separation distance. Why were the local and Scottish guidelines ignored by the developers? - There is more and more research indicating that Industrial Turbines producing excessive low frequency noise pollution are causing harmful effects to some residents living in close proximity to wind farms 17 18, and even a wind energy commissioned report indicated that there are health concerns from Industrial Wind Farms 19. Can the developers provide any assurances or evidence that Little Raith Wind Farm will not impact on any individual's health within the local communities? - Can the developer provide assurances that the local communities will not be affected by Low Frequency Noise pollution? - How much oil is expected to be used for each turbine at Little Raith Wind Farm per year? 20 - What is the longest distance the turbines will cast a shadow from the sun? - The turbines are over 100ft higher than the Mossmorran flare stack. When Mossmorran flares, how will Little Raith Wind Farm reduce the flicker effect caused by the flare light source? - How much electrical input from the National Grid does each turbine require to operate? 21 - Will the power taken from the National Grid to keep the turbines operational be recorded in any annual statements? - Without government subsidies 22 23, are your turbines cost effective? - West Coast Energy commissioned a report regarding benzene concentration levels in the air from a wind farm; SEPA disagreed with the results but didn't clarify its concerns. Will you make this report available online and easily accessible, so other trained persons can review the research results? - It has been noted in other areas that landowners who allow wind turbines on their land have to sign a confidentiality clause 24. Do you have any confidentiality clauses with the landowners? - Will the landowners be receiving any payments for allowing the turbines to be sited on their land? - How much money will the landowners receive for allowing the turbines to be sited on their land? - If this figure cannot be disclosed, can you tell us if the landowners will be receiving more or less money per year than the Community Benefit being offered to the 4 areas? - Various studies have concluded that homes within a certain radius of wind farm developments lose their value 25 26 27. Can you provide any assurances that property values will not be reduced in the local areas due to your development? - If property values do decline in the local area, and if this can be directly attributed to Little Raith Wind Farm, will your company compensate the property owners for their loss? - Do your turbines create television interference? 28 - Do your turbines interfere with cell phone reception? - While your turbines are referred to as a wind farm, isn't it true that a more accurate term for your development is an Industrial Power Plant that utilises wind energy? - The 9 turbines at Little Raith are more than 2km from Auchtertool.
If you are not expecting any negative impact from your turbines in the Lochgelly, Cowdenbeath and Lumphinnans area, then why is Auchtertool being considered to receive any of the Community Benefit being offered? - Can you provide any peer-reviewed scientific proof that wind turbines decrease CO2 levels? - You claim that all 4 Community Councils fully support Little Raith Wind Farm, yet we have been in contact with Lochgelly Community Council, and they cannot verify that they fully support the Little Raith Wind Farm development and the increase in turbine height. What evidence do you have that Lochgelly Community Council fully supports your development, including the increase in turbine height? - Will you arrange a series of public meetings to be held in each area (Lochgelly, Cowdenbeath, Lumphinnans, and Auchtertool) to allow others to raise their concerns directly with you, or to learn more about Little Raith Wind Farm? - If you will not arrange any local public meetings, can you give the reasons why? - The Little Raith ruins, which are a designated archaeological site (Scottish Natural Heritage), have already suffered damage, even though you assured the community groups that these ruins would be sealed off and secured 29. Will you be repairing the damage so the ruins are restored to the state they were in before development work began? We would like to state that we are not against Renewable Technologies; however, with Wind Energy there are some potential negative impacts that may outweigh the positive benefits, and we feel that there should be an honest and open approach looking at both sides of the debate, addressing the concerns regarding wind energy. We have published these questions online for openness and transparency and, if possible, we will publish any answers we receive online. If anyone else has any questions or concerns they wish to raise with the developers, please ask in the comments below.
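As a rough illustration of the difference between nameplate capacity and expected output raised in the questions above, the sketch below assumes nine turbines of 2.5 MW each (an assumed rating for illustration, not a figure from the developer) and compares full-time output with output at the 20% capacity factor the letter cites.

```python
def expected_annual_output(nameplate_mw, capacity_factor):
    """Expected annual energy output in MWh for a given nameplate capacity
    and assumed capacity factor (8,760 hours in a year)."""
    return nameplate_mw * capacity_factor * 8760

# Illustrative only: nine turbines at an assumed 2.5 MW each
nameplate_mw = 9 * 2.5                                        # 22.5 MW nameplate
at_nameplate = expected_annual_output(nameplate_mw, 1.00)     # never achieved in practice
at_20_percent = expected_annual_output(nameplate_mw, 0.20)    # the figure cited above

print(f"Output if running flat out all year: {at_nameplate:,.0f} MWh")
print(f"Output at a 20% capacity factor:     {at_20_percent:,.0f} MWh")
```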
https://lochgelly.org.uk/2012/02/questions-tconcerning-little-raith-wind-farm/
2 edition of Essays on geography and economic development found in the catalog. Essays on geography and economic development Norton Sydney Ginsburg Published 1960 by University of Chicago in Chicago . Written in English Edition Notes Includes bibliographies. |Statement||[by] Brian J. L. Berry [and others]| |Series||University of Chicago. Dept. of Geography. Research paper, no. 62, Research paper (University of Chicago. Dept. of Geography) ;, no. 62.| |Contributions||Berry, Brian Joe Lobley, 1934-| |Classifications| |LC Classifications||H31 .C514 no. 62| |The Physical Object| |Pagination||xx, 173 p.| |Number of Pages||173| |ID Numbers| |Open Library||OL5791639M| |LC Control Number||60002105| |OCLC/WorldCa||166948| That a developing economy needs management even more than resources is now becoming abundantly clear to all students of growth. There was perhaps a facile assumption in the earlier years that the rate of growth in a developing country depended in almost direct proportion to two factors: the resources available within the country, the land, water, minerals, savings and other . Economic Vision The benefits and impacts transport corridors bring to a national economy are greater than those of mere transport infrastructure. Transport infrastructure deals with basic connections among regions or cities and achieves the start of trade and the development of such other social side effects as. Military geography: History and development. Patterns between healthcare and environment in China. Economic inequality of African countries. Political Geography: Israel-Arab problem. The issue of aging infrastructure. If the World were a village: Population aspect. India as a culture hearth. Cities vulnerable to the sea-level rising. Sassen is a collection of essays dealing with topics such as the “global city,” gender, globalization of labor, information technology, and new forms of inequality. The International Forum on Globalization (IFG) provides information on economic as well as noneconomic aspects of globalization. Economic geographers have always argued that space is key to understanding the economy, that the processes of economic growth and development do not occur uniformly across geographic space, but rather differ in degree and form as between different nations, regions, cities and localities, with major implications for the geographies of wealth and welfare. Welcome to AP Human Geography Ms. Anderson Phone: room E-mail: [email protected] Course Description: The new college-level social studies course provides students with the opportunity to identify and analyze contemporary concerns and problems from local, national, and global perspectives in Human . On the efficacy of opinion in matters of religion. Local government and planning for a democratic South Africa. General orders, no. 1 Annual Report Card State of Ohio & 88 Counties Prosperity versus planning: how government stifles economic growth Seafood handling, preservation, marketing Postgraduate awards 1996 country curates observations on the advertisement Evidence discovery in internet pornography cases Thirthraj Pushkar The Jinglebob Man voice of Cyprus Transcripts of parish registers Camp-coach holidays. Last and lost poems of Delmore Schwartz kingdom of the Bulls. Agreeably to notice, Mr. Daggett asked and obtained leave to bring in the following resolution ... 
OCLC Number: … Description: xx, … pages, illustrations, maps; 23 cm. Contents: Geography and economic growth / Richard Hartshorne -- Geographic theory and underdeveloped areas / Edward L. Ullman -- The cultural factor in "underdevelopment": the case of Malaya / J.E. Spencer -- On classifying economies / Philip L. Wagner -- Energy consumption and economic development. The view that geography is at the center of the story in shaping the rhythms of economic development dates back to Montesquieu and has recently been revived by Jared Diamond in his book "Guns, Germs, and Steel: The Fates of Human Societies." Basics of Economic Geography Essays. Free example of a Geography and Development essay: environmental and geographical factors significant to the development or expansion of the United States. The main geographical and environmental factor that has led to the rapid development of the United States is the availability of superior resources that are considered of high quality. Economic Development: economists' change of approach. Yesterday's economists were focused on industrialization and urbanization and were of the view that urban development is the sole driving force of an economy. They proposed subsidies and tax concessions for companies setting up business in a particular area or region. About the Book: Population cannot exist without the environment in which people live and utilize the fruits of development. Recent developments across the world, such as the increasing stress on rapid industrialization, the globalization of the market economy and the over-exploitation of resources, expose populations to environmental degradation, hazardous materials and situations, and disasters. Free essays available online are good, but they will not follow the guidelines of your particular writing assignment. If you need a custom term paper on Geography: The Physical and Economic Geography of Canada, you can hire a professional writer here to write you a high-quality, authentic paper; free essays can be traced by Turnitin (a plagiarism-detection program), whereas our custom-written essays … Essay on Economic Geography: Geography has become an extremely varied and versatile subject and its area of study has grown a great deal since its inception. The concepts of place names, natural environment or the influence of the natural environment on man's activity have become old and unacceptable notions in the light of the … In this essay we will discuss how mining has contributed to the economic development of many countries. Mining and the processing of minerals inevitably have effects on the economies of countries with large or valuable mineral resources. Few minerals have had as profound an effect as coal, but they do influence the economy … Provides a fresh perspective on the ongoing debate on the core themes of development economics. This book, in honour of Robert E. Evenson, brings together diverse yet interrelated areas of innovation such as agricultural development, technology and industry while assessing their combined roles in developing an economy. In this essay we will discuss the economic development of a country. After reading this essay you will learn about: 1. Economic Growth and Economic Development; 2. Determinants of Economic Development; 3. Obstacles or Constraints; 4. Pre-Requisites or Need; 5. Structural Changes. Contents: Essay on the Meaning of Economic Development …
Bengston and Van Royen, in their book Fundamentals of Economic Geography, have stated that: Economic geography investigates the diversity in basic resources of the different parts of the world. It tries to evaluate the effects that differences of physical environment have upon the utilisation of these resources. Harold A. Innis helped to found the field of Canadian economic history. He is best known for the "staples thesis", which dominated the discourse of Canadian economic history for decades. This volume collects Innis' published and unpublished essays on economic history, from … to …, thereby charting the development of the arguments and ideas found in his books The … Economic development is a fairly new idea that arose during the early twentieth century. Many theorists attempted to define economic development and to differentiate it from the concept of economic growth. According to economists Gerald Meier and Dudley Seers, these two concepts are different from each other, and they stressed that economic development cannot be equated with economic growth. Development Planning in Kenya: Essays on the Planning Process and Policy Issues. Tom Pinfold, G. Norcliffe. Dept. of Geography, Atkinson College, York University. Business & Economics. Read free GIS & geography e-books covering topics such as geomedicine, education, geospatial matters and technology, as well as industry-related e-books on business, agriculture, government and … Geography and Economic Development. John Luke Gallup, Jeffrey D. Sachs, Andrew D. Mellinger. NBER Working Paper No. …, issued in December …. NBER Program(s): International Trade and Investment. This paper addresses the complex relationship between geography and macroeconomic growth. Get help on "How does geography affect a country's development?" on Graduateway, a huge assortment of free essays and assignments from the best writers. The natural resources of a state can contribute significantly to its economic development, but if the state is ill-located or not fully developed enough, it may be difficult to use these. According to an article written by John Luke Gallup, Jeffrey Sachs, and Andrew Mellinger called "Geography and Economic Development," countries in the tropics are almost all poor, while almost all rich countries are in higher latitudes (Gallup et al.). Tropical regions have more diseases and limits on agricultural productivity. Assistant Professor of City and Regional Planning Zhenhua Chen has published Development Studies in Regional Science: Essays in Honor of Kingsley E. Haynes, along with co-editors William M. Bowen and Dale …. Published by Springer Singapore, the book is part of the New Frontiers in Regional Science: Asian Perspectives series. Introduction to Economic Geography: download or read online in PDF, EPUB, Tuebl and Mobi format. Economic Geography Is the Study of the Location, Distribution and Spatial Organization of Economic Activities Across the World. Mid-Term Examination Essay Questions 1.
Economic geography is the study of the location, distribution and spatial organization of economic activities across the Earth. Essay text: "Economic geography is concerned with the spatial organization and distribution of economic activity, the use of the world's resources, and the distribution and expansion of the world economy" [Stutz and de Souza 41].
https://jetekibilaqyteb.prosportsfandom.com/essays-on-geography-and-economic-development-book-19847id.php
Taste of Russia. When people mention Russia, the first thing that comes to mind is usually that it is the biggest country in the world. But that is not what I am going to talk about today; I am going to talk about food in Russia. The size of the country might make you think that Russia must have a lot of different types or styles of cuisine. Yet the truth is that, due to the location of the country, most places are freezing throughout the year, and the limited variety of food produced locally also restricts the number of dishes they can have. However, Russians have managed to use the limited range of food to create their own authentic cuisine. Generally speaking, Russians are mostly meat lovers; chicken, pork, beef and fish get priority, and somehow they eat less lamb and seafood. Normally, a Russian meal consists of three courses plus pastry: salad comes first, soup comes second and the hot dish comes last. Even if you are not familiar with Russian food, there are some famous dishes you will have heard of. Starting with salads: Russian cucumber and radish salad, and Olivier salad, which was named after the French chef, M. Olivier, who created it. One salad I want to highlight is Selyodka Pod Shuboy, because although it requires quite a lot of ingredients, it is an ideal dish for sharing and the cooking method is not difficult. (recipe) Speaking of soup, even if you have never tasted it, I bet you have definitely heard of it: Borscht. The signature red colour of the soup has made it the most famous of all the Russian soups. The colour comes from one ingredient in the soup called beetroot. Borscht is cooked with beef and beetroot. Once it is done, Russians serve it with sour cream, dill sprigs and rye sourdough. Besides Borscht, Rassolnik is another famous beef-based soup that can represent traditional Russian cuisine. Main courses are always worth highlighting, and they often involve the most difficult cooking methods, as with the famous Russian mains. Klotski are potato dumplings in chicken broth (recipe). Golubtsy are cabbage leaves wrapped around ground beef mixed with boiled rice or buckwheat. (recipe) The final course of a Russian meal is pastry: black bread or piroshki, stuffed bread that can be filled with different ingredients, like roasted onions or cheese. If you have never had black bread before, you may find it as hard as stone, even though it is always the Russians' favourite. (recipe) I know you will ask, do Russians drink vodka? Of course they do. That is why you can find at least ten different brands of vodka in the supermarket. But they also make cocktails using vodka as a base.
https://wnolondon.net/2015/03/18/taste-of-russia/
This module focuses on a moment of crisis in the lives and history of towns and townspeople, when, caught up in the turmoil of war, their conquest and submission have become a political and military objective of armed forces. Resistance rested upon material conditions, such as the strength of walls and military equipment; upon human resources, such as the size and skills of the garrison and the urban community; and, arguably most important of all, upon the spirit or mindset of the people. To what extent were townspeople prepared to put up a resistance against the besiegers? What part did such factors as ideology and cultural and political background play in the decision of the urban communities and garrisons to keep on or stop fighting? How united was the urban community? Siege situations put individual convictions and determination to the test. Resistance also depended on the strength and disposition of the besiegers, and the predictability of the outcome. How wild or contained were the laws of (siege) warfare? Were there well-established and shared conventions? We will examine a number of case studies from the medieval and early modern periods to answer these questions. How people responded to the experiences of being besieged in the period 1250-1650. Similarities and differences in sieges across 400 years and in different types of war. The experience of being besieged in different geographical contexts (England, France, the Low Countries). Adopt a comparative approach to examining the experiences of sieges. Using political, social and cultural history alongside military history to examine sieges. Analyse a range of primary sources in translation. Explore which factors influenced 'resistance' in different historical and geographic contexts. Consider whether 'rules' applied to sieges in the medieval and early modern periods and assess whether these were changing over time. The study of sieges goes far beyond the strictly military framework, overlapping the fields of political, cultural and social historians. It provides an original angle from which to study contemporary mentalities, the concepts of treason, allegiance and sovereignty, and the nascent sentiment of patriotism. The chronological framework of this module gives a priceless opportunity to grasp change and continuity over four centuries, breaking the traditional boundary between medieval and modern. It is centred upon England and two of its neighbours, France and the Low Countries, and involves different contexts of war (e.g. dynastic, religious and civil). The structure of the module is mainly chronological and is articulated around case studies. Lectures will set the broader political context of a siege, which is then examined in more detail in seminars. These are based on the analysis of a wide variety of primary sources, such as chronicles, eye-witness accounts, memoirs, poems, official correspondence and dispatches (all in translation). Duke, A., Reformation and revolt in the Low Countries (London, 1990). Donagan, B., War in England 1642-1649 (Oxford, 2008). Parker, G., The army of Flanders and the Spanish Road, 1567-1659: the logistics of Spanish victory and defeat in the Low Countries' wars (Cambridge, 1972). Potter, D., Renaissance France at War: Armies, Culture and Society, c.1480-1560 (Woodbridge, 2008). Afflerbach, H., and Strachan, H., eds., How Fighting Ends: A History of Surrender (Oxford, 2012). Corfis, I. A., and Wolfe, M., eds., The Medieval City under Siege (Woodbridge, 1995).
https://www.southampton.ac.uk/courses/modules/hist2225.page
The National Seismic Hazard Assessment of Australia identifies the southeast corner of Australia as a seismic hotspot – with a magnitude 5.0 or greater earthquake occurring roughly every seven years on average in Victoria. The 1989 magnitude 5.6 Newcastle earthquake in New South Wales exemplifies the expected damage from such an earthquake near an urban centre. The most recent reminder of this activity in Victoria was the magnitude 5.9 earthquake in remote Woods Point on 22 September 2021, making it the largest recorded earthquake in Victoria since European settlement. It also occurred on a previously unmapped fault line. Woods Point reminds us of the importance of detecting faults and mapping them to determine whether they are active and are potential sites for earthquakes. This is why we have developed and expanded the University of Melbourne’s seismic monitoring in Victoria into probably the best performing regional continuously operating seismic network in Australia. The earthquakes in southeast Victoria are likely occurring on pre-existing earthquake faults, a subset of which is identified in Australia’s Neotectonic Features Database based on indirect geological evidence. Neotectonic Features are earthquake faults that have been active some time in the last eight million years. The origin of some of these faults can be traced back to about 160 million years ago when Australia started splitting from Antarctica during the breakup of the Gondwana supercontinent. Since then, these faults appear to have been reactivated by several plate tectonic events at different times. Though it remains incomplete – as most recently demonstrated by the occurrence of the Woods Point earthquake – the Neotectonic Features Database reveals a maze of faults beneath Victoria. This includes significant faults close to urban centres like the Selwyn Fault that cuts into the Mornington Peninsula and the Muckleford Fault located 20 kilometres east of Ballarat. Considering only their dimensions, these two faults are theoretically capable of hosting magnitude seven earthquakes, and are a stark reminder of the importance of better understanding the fault lines and seismic activity throughout the state. The key to improved monitoring is the ability to precisely detect and locate small earthquakes beneath the threshold that humans can feel, because these occur much more frequently than larger events, and indicate which faults are active. The University of Melbourne began monitoring earthquakes in Victoria in 2012 following the magnitude 4.9 Thorpdale earthquake that we have investigated in detail. Since then, we have been developing a high-performance seismic monitoring network covering the Gippsland region to detect and locate earthquakes in unprecedented resolution. This network uses seismometers – sensors that can record even the slightest ground vibrations that result from energy passing through layers of the earth from an earthquake (seismic waves). Embedded in our network are different types of seismometers – sensors that sit on or near the surface in boreholes 10 metres to 1000 metres deep and those that sit on the ocean bottom called Ocean Bottom Seismometers. These instruments are located on land, on island sites like the picturesque Deal Island and beneath Bass Strait, with most sensors transmitting live data to our servers at the Parkville campus. When designing an advanced seismic network like ours, there are several considerations. 
For example, having seismometers as close as possible to an earthquake helps determine its depth accurately. In addition, having 360˚ seismometer coverage around that event helps accurately determine its epicentre – the point at Earth's surface directly above where the earthquake occurs in the subsurface. We also prefer to install seismometers on hard bedrock sites, as these transmit seismic waves more efficiently and with little loss of energy compared to seismometers located in "softer" soil sites that weaken seismic signals and sometimes generate undesirable levels of interfering noise. Having mobile phone coverage is also a consideration, as the ability to transmit live data enables near-real-time analysis of earthquakes or other events that generate seismic waves. For example, our instruments recorded ground vibrations from the recent demolition of chimneys at the Hazelwood Power plant with an equivalent earthquake magnitude of 1.6. In the last four years, with support from our funders, we have developed new capabilities, including deploying high-frequency, shallow marine Ocean Bottom Seismometers, and established operational benchmarks to optimise seismic monitoring both on land and in shallow marine environments. As a result of this work, earthquake detections in eastern Victoria have increased to more than 400 a year from about 150 a year before 2017, because we now have more stations closer to smaller events, recording signals that previously went undetected. We are also detecting some events at magnitudes as small as -0.5 – tiny centimetre-scale earthquake ruptures in the crust – as earthquake magnitude is measured on a scale extending below zero. We have also reduced the threshold at which all earthquakes in the network are detected from magnitude 1.2 before 2017 to 0.5 today. We can also locate earthquakes more precisely, to within less than one kilometre, compared to earlier location uncertainties of up to 10 kilometres. We believe these statistics put our seismic network ahead of any other regional-scale network in Australia. We recently launched an open-access, cloud-based web application on the AuScope Virtual Research Environment store to inform decisions about seismic network design, developed through a collaboration between the University of Melbourne and CSIRO. In addition, our preliminary work on earthquake physics suggests that earthquakes of magnitude less than about 2.5 in Gippsland radiate about 10 per cent more seismic energy than typical intraplate earthquakes, which naturally translates into slightly greater seismic hazard. It is not known, however, whether larger earthquakes that may cause significant damage will radiate energy in a similar manner to these smaller events. Interestingly, the Woods Point earthquake of magnitude 5.9 appears to have caused less damage than expected, although this may be partially explained by its depth of about 12 kilometres. This potential for small earthquakes to radiate slightly more seismic energy highlights the need to further understand earthquake behaviour at all magnitude ranges in Gippsland so that we can ensure the safety of fast-growing urban centres and critical infrastructure.
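As a rough illustration that is not part of the article itself, the textbook Gutenberg-Richter energy relation shows why a magnitude scale extending below zero still corresponds to real, if tiny, amounts of radiated energy, and why each whole unit of magnitude matters so much. The sketch below, in Python with purely illustrative values, assumes that standard relation rather than the authors' own energy estimates.

```python
def radiated_energy_joules(magnitude: float) -> float:
    """Approximate radiated seismic energy in joules using the standard
    Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8.
    This is a generic textbook relation, not the article's own analysis."""
    return 10 ** (1.5 * magnitude + 4.8)

# A magnitude -0.5 micro-event (the smallest detections mentioned above)
# still radiates a small but non-zero amount of energy, while the
# magnitude 5.9 Woods Point event radiates billions of times more.
for m in (-0.5, 0.5, 1.2, 5.9):
    print(f"M {m:+.1f}: ~{radiated_energy_joules(m):.2g} J")

# Each whole unit of magnitude corresponds to roughly a 31.6-fold
# (10 ** 1.5) increase in radiated energy.
print(f"Energy ratio per magnitude unit: {10 ** 1.5:.1f}x")
```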
For example, the new insights on seismicity we have obtained from continuously accumulating volumes of high-quality seismic data are now being used to inform future planning for the CarbonNet Project, a proposed world-class shallow marine carbon dioxide sequestration site being investigated for commercial-scale operations. The new technology and research capabilities we are developing to monitor seismicity will lead to a better understanding of earthquake processes, providing benefits to society and industry alike, and helping to ensure safer living spaces for Victorians. Financial assistance to carry out the research described in this article is provided through Australian National Low Emissions Coal Research and Development (ANLEC R&D), which is supported by Low Emission Technology Australia (LETA) and the Australian Government through the Department of Industry, Science, Energy and Resources. Funding for developing seismic monitoring infrastructure is provided by the Australian Government through the Education Infrastructure Fund and administered and coordinated by CO2CRC. AuScope provides partial funding to support operational activities.
https://pursuit.unimelb.edu.au/articles/seismic-ears-to-the-ground
The 2017 Spelling Bee will be held from May 30 to June 1; the winners of the past nine Bees, including the last three pairs of joint winners, were Indian Americans. In order to avoid crowning joint champions, the Scripps National Spelling Bee is changing the rules for the finals this year. The last three Bees produced joint champions, which was unprecedented in the competition's 89-year history. The 2017 Spelling Bee national finals will be held at the Gaylord National Resort and Convention Center in National Harbor, Maryland, from May 30 to June 1. Jairam Hathwar and Nihar Janga are the 2016 Spelling Bee co-champions (May 26, 2016). Presented by Kindle, the Bee will be nationally televised by ESPN and its sister channels for the 24th consecutive year. The Preliminaries will be held on May 31, and the Finals on June 1. This year's contest will have 290 students, between the ages of 5 and 15, representing all 50 U.S. states, the District of Columbia, and territories, as well as six foreign countries. Vanya Shivashankar, Gokul Venkatachalam win 2015 Scripps National Spelling Bee (May 28, 2015). Last year's Bee co-champions were Indian Americans Jairam Hathwar and Nihar Janga. It was the ninth successive time Indian American spellers had won the prestigious championship. The E.W. Scripps Company, the organizer of the popular contest, announced on Tuesday that a Tiebreaker Test will be given to all spellers remaining in the competition at 6 p.m. on Thursday, June 1. It will be a written test, consisting of 12 words and a dozen multiple-choice vocabulary questions. Two Indian Americans jointly win Scripps National Spelling Bee championship (May 29, 2014). If the competition doesn't produce a single champion after 25 consecutive rounds involving three or fewer spellers, officials will reveal the Tiebreaker Test scores of the remaining spellers. The winner will be the speller with the highest score. If there is a tie for the highest Tiebreaker Test score, the spellers with the tying highest scores will be co-champions. Another change this year will be in the Preliminaries Test, where spellers will handwrite their spellings. The answers will be hand-graded. In previous years, the Bee used computerized forms. Indian American Mahankali new Spelling Bee champ (May 31, 2013). The official source for words this year will be the Merriam-Webster Unabridged dictionary. Merriam-Webster's Third New International Dictionary, the previous source, is now out of print. "The very first bee started with nine students, and now the Scripps National Spelling Bee program reaches more than 11 million," Paige Kimble, executive director of the program, said in a press release. "During our history, students have expanded their spelling abilities and increased their vocabulary to push our program to be even more challenging." Kimble was the Spelling Bee champion in 1981.
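To make the new tiebreaker procedure concrete, here is a minimal sketch, in Python, of the decision rule as described above; the speller names, scores and function are purely illustrative and are not Scripps' actual system.

```python
def decide_champions(tiebreaker_scores):
    """Once 25 consecutive rounds with three or fewer spellers have failed to
    produce a single champion, the written Tiebreaker Test scores are revealed:
    the highest score wins, and spellers tied on the highest score become
    co-champions (per the rule described above)."""
    top_score = max(tiebreaker_scores.values())
    return [name for name, score in tiebreaker_scores.items() if score == top_score]

# Illustrative example: two spellers tie on the top score, so both win.
print(decide_champions({"Speller A": 23, "Speller B": 23, "Speller C": 21}))
# -> ['Speller A', 'Speller B']
```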
https://www.americanbazaaronline.com/2017/04/04/spelling-bee-changes-rules-424155/
PARTS OF A CIRCLE (A) Identifying a circle as a set of points equidistant from a fixed point. A circle is a locus with all the points on the plane at a constant distance from a fixed point (known as the centre). (a) Centre - the fixed point in the middle of the circle, at a constant distance from all points on the circle. (b) Circumference - the length of the border of the circle. (c) Radius - the length of a straight line from the centre to any point on the circumference. (d) Diameter - the length of a straight line joining any two points on the circumference and passing through the centre. The length of the diameter is twice the radius. (e) Chord - a straight line joining any two points on the circumference. The diameter is the longest chord in a circle. (f) Arc - a part of the circumference of a circle with end-points on the circle. (j) Quadrant - one quarter of a circle formed by an arc and two perpendicular radii. (C) Identifying parts of a circle. A circle or parts of a circle can be drawn using a straight edge, a pair of compasses or a protractor. Draw a circle: Draw a circle with centre O and radius 1 cm. Place the pointed end of the compasses at O and draw the circle. Draw a diameter: Draw a diameter of length 3 cm passing through a point R in a circle with centre O. (a) Draw a circle with centre O and a radius of 1.5 cm. (b) Mark a point R in the circle. (c) Using a ruler, join O to R and extend both ends to reach the circumference of the circle. Draw a chord: Construct a circle with radius 1.5 cm. Then draw a chord with a length of 2 cm which passes through P on the circumference. (a) Draw a circle with a radius of 1.5 cm. (b) Mark a point P on the circumference. (c) Open the compasses to a length of 2 cm. (d) Place the pointed end of the compasses at P and mark an arc intersecting the circumference. (e) Using a ruler, join the two points. Draw a sector: Draw the sector of a circle given that the angle at the centre is 80° and its radius is 1.5 cm. (a) Draw a circle of radius 1.5 cm with centre O. (b) Draw a radius and name it OP. (c) Using a protractor, draw an angle POQ = 80°. (d) Using a ruler, join O to Q to obtain the sector. POQ is a sector of the circle with an angle of 80° at the centre and a radius of 1.5 cm. (D) Determining the centre and radius of a circle by construction. The perpendicular bisectors of two non-parallel chords of a circle intersect at the centre of the circle. Determine the centre and radius of the circle given. (a) Draw two non-parallel chords PQ and RS in the circle. (b) Construct the perpendicular bisectors of both chords. (c) The intersection point of the perpendicular bisectors of the chords is the centre, O, of the circle. (d) Measure the length of OP, OQ, OR or OS to get the radius of the circle. Calculate the circumference of a circle with (a) a diameter of 7 cm, (b) a radius of 14 cm. If the circumference of a circle is 44 cm, find its diameter. If a circle has a circumference of 66 cm, find its radius. The above figure shows a piece of paper, ABCD, in the shape of a square. The shaded part consists of four quadrants with radius 4 cm. Calculate the perimeter of the shaded part. Perimeter of ABCD = 4 × 8 = 32 cm; perimeter of the shaded part = 32 cm + 25.136 cm = 57.136 cm. Therefore, the perimeter of the shaded part is 57.136 cm. ARC OF A CIRCLE (A) Deriving the formula for the length of an arc. 1. An arc of a circle is any part of the curve that makes up the circle. 2. The length of an arc is proportional to the angle formed by the arc at the centre of the circle.
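The proportionality stated in point 2 is usually written out as the standard arc-length formula (the formula itself is implied rather than printed in the text above):

```latex
\frac{\text{arc length}}{\text{circumference}} = \frac{\theta}{360^{\circ}}
\qquad\Longrightarrow\qquad
\text{arc length} = \frac{\theta}{360^{\circ}} \times 2\pi r
```

Here θ is the angle the arc subtends at the centre and r is the radius; the sector-area formula later in the lesson follows the same pattern, with πr² in place of 2πr.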
Find the length of the arc which subtends an angle of 60° at the centre of a circle of radius 21 cm. In the diagram, O is the centre of the circle. Find the value of x. The diagram shows a circle with centre O. Find the radius of the circle. The radius of the circle is 9 cm. The diameter of a circular pie is 21 cm. It is divided into several equal slices, with the arc of each slice being 8.25 cm. What is the angle at the centre of each slice of the pie? The angle at the centre of each slice of the pie is 45°. AREA OF A CIRCLE (A) Finding the Area of a Circle. The area of a circle is the area of the region bounded by the circumference. The area of a circle is 154 cm². Find its radius and diameter. (C) Finding the Area of a Circle Given the Circumference. Find the area of the circle with a circumference of 88 cm. Therefore, the area of the circle is 616 cm². The above figure shows two circles with centre O and with radii of 9 cm and 5 cm respectively. Find the area of the shaded part. AREA OF A SECTOR OF A CIRCLE (A) Deriving the formula for the area of a sector. 1. The area of a sector is the area enclosed between an arc and the two radii at either end of the arc. 2. The area of a sector is proportional to the angle at the centre of the circle. Find the area of the shaded sector above, where O is the centre of the circle. (C) Finding the angle at the centre given the radius and area of a sector. In the diagram, the area of the major sector POQ is 702.24 cm². Find the radius of the circle. The radius of the circle is 16.8 cm. Richard drew a semicircle with centre O on a piece of rectangular paper PQRS. He only used the region formed by the sector with an angle of 126°. Calculate the remaining area of the paper. The above figure shows two circles with centre O. The straight line AEOGC is perpendicular to the straight line BFOHD, and OE = AF = 6 cm. Find the area of the shaded part.
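The numerical answers quoted in these exercises are consistent with taking π = 22/7, so a short Python sketch can reproduce several of them; the helper names below are my own, and the printed values simply mirror the worked answers given above.

```python
from fractions import Fraction

PI = Fraction(22, 7)  # the approximation the exercises above appear to use

def circumference(r):          # C = 2 * pi * r
    return 2 * PI * r

def arc_length(r, angle_deg):  # arc = (angle/360) * 2 * pi * r
    return Fraction(angle_deg, 360) * circumference(r)

def circle_area(r):            # A = pi * r^2
    return PI * r * r

def sector_area(r, angle_deg): # sector = (angle/360) * pi * r^2
    return Fraction(angle_deg, 360) * circle_area(r)

print(arc_length(21, 60))   # 22 cm: the 60° arc in a circle of radius 21 cm
print(circumference(14))    # 88 cm: so a circumference of 88 cm means r = 14 cm
print(circle_area(14))      # 616 cm²: area of the circle whose circumference is 88 cm
print(circle_area(7))       # 154 cm²: so an area of 154 cm² means r = 7 cm, d = 14 cm
print(sector_area(21, 45))  # 693/4, i.e. 173.25 cm², for a 45° sector of radius 21 cm
```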
https://kupdf.net/download/circles-form-2_59758a68dc0d60ff68043372_pdf
In theory, MPAs can benefit the entire ecosystem by restoring community structure (size, abundance and number of species), improving habitat quality and enhancing ecosystem resilience. Therefore, by studying the relationship between functional groups and MPAs, we expect to find results that help us describe how protection affects the structure of the community (fish and benthos); which species are most influenced by protection and how important they are in the functioning of the ecosystem; how functional redundancy is affected by protection; how the loss of key species can affect resilience; and how efficient local management measures are for coral reef conservation. We expect to provide basic but previously unpublished knowledge on the relationship between the community of benthic organisms and the fish of Brazilian coral reefs, and to assess how protection status can be a determinant of the proper functioning of community interactions. By revealing the most important links in coral reef ecosystem functioning, we could propose better-targeted actions to preserve one (or multiple) species or habitats. In addition, the effect of management measures on coral reef health will be evaluated, and the reported results could empower society to demand solutions and better coastal conservation. For further information contact: E-mail: [email protected]. Town/Region: Abrolhos Archipelago. Country: Brazil. Continent: Central and Latin America. Categories: Biodiversity, Corals, Marine. Date: 31 May 2018. Black grouper (Mycteroperca bonaci) and brain coral (Mussismilia braziliensis), an endemic coral from Brazilian reefs. Yellowtail snapper (Ocyurus chrysurus) on coral reefs from the Abrolhos Reefs.
https://www.rufford.org/projects/ramon_hernandez_andreu
Accessible Instructional Materials (AIM) The Individuals with Disabilities Education Act (IDEA) requires school districts to provide accessible versions of instructional materials to students who are blind or otherwise unable to use printed materials. Students with disabilities should receive materials in accessible formats at the same time as their peers receive their textbooks. Instructional materials Instructional materials include textbooks and related core materials such as workbooks. Accessible formats Accessible formats include Braille, large print, audio and digital text. Accessible instructional materials afford the flexibility to meet the needs of a broad range of students, even those without disabilities. Fully accessible format means that: - All text is digital and can be read with text-to-speech, modified with regard to font size, and navigated by unit, chapter, section and page number (or other appropriate segments). - Images include alternative text and long descriptions when appropriate (alternative text is a replacement for an image that serves the same purpose as the image itself. It is read by a screen reader in place of the image). - Math equations are provided as images with alternative text or in the content file using MathML. - Content reading order, levels and headings are determined by publisher tagging. - Text can be converted to Braille. School districts should note that just because a document is digital or online, it is not inherently accessible. File types to consider, from most to least flexible are: - Digital Accessible Information System (DAISY)/ National Instructional Materials Accessibility Standard (NIMAS) with cascading style sheet; - HyperText Markup Language (HTML); - Portable Document Format (PDF), (unlocked, embedded fonts, single page); and - Rich Text Format (RTF)/Word document. NIMAS The acronym NIMAS stands for the National Instructional Materials Accessibility Standard. It is a technical specification, endorsed by the U.S. Department of Education that publishers must use in preparing files. NIMAS files are then sent to the National Instructional Materials Access Center (NIMAC), as requested by a school district. Please note that a NIMAS file is not student ready; it requires conversion to the desired specialized format. For more information, visit National Instructional Materials Accessibility Standard (NIMAS). NIMAC NIMAC is the National Instructional Materials Access Center, and is the repository where all the NIMAS files are stored. It is funded by the Office of Special Education Programs (OSEP) and was created through amendments adopted to IDEA. The purpose of NIMAC is to make it easier for districts to obtain materials for students with disabilities, and to do so in a more timely manner. Once a NIMAS file is downloaded from NIMAC by an authorized user, it must be transformed into the required accessible format for the student. NIMAC houses files for printed textbooks and related core instructional materials published primarily for use in elementary and secondary school instruction. School districts should note that there is no obligation on the part of a publisher to create NIMAS files or upload them to NIMAC unless specific language is included in the contract/purchase agreement with publishers. To search the NIMAC go to http://www.nimac.us/ Accessing NIMAC Only authorized users (AUs) of NIMAC can download NIMAS files. 
New York’s AUs are Sophie McDermott at the New York State Education Department (NYSED), Office of Special Education, Lisa DeSantis at the Resource Center for the Visually Impaired, Bookshare, Learning Ally and the New York City Department of Education. Students eligible to use materials from NIMAC NIMAC relies on an exemption to copyright law, and as such materials are only available to elementary and secondary students who are blind, visually impaired, have a physical disability, or have a reading disability resulting from an organic dysfunction. In addition, these students must have an individualized education program (IEP). Students with a 504 plan and NIMAC Students who have a 504 plan are not allowed to use materials from NIMAC. Only students with a qualifying disability and an IEP can use these materials. Students who are not eligible to use materials from NIMAC School districts are responsible for providing accessible instructional materials to students with disabilities who need them, regardless of whether the students are eligible for materials from NIMAC. Schools can purchase accessible materials directly from the publisher, make their own or use materials in the public domain. School districts should note that all students can access materials purchased directly from publishers or through other commercial options. Obtaining accessible instructional materials in New York State There are four basic steps in regards to AIM. First, a school district must determine if there is a need for AIM. Second, the district must decide on the format necessary to meet the individual student’s needs. It is possible that an individual student may need different types of formats based on the environment in which he will be using the material. Third, the district must determine the appropriate route for acquiring the specialized format(s). Fourth, the school district must determine what, if any, additional assistive technologies are needed and develop a plan to implement these technologies. NYSED has developed two flowcharts that demonstrate the acquisition process. There is one flowchart for obtaining Braille and large print, and another flowchart for obtaining audio and digital text. Each flowchart has links to resources embedded within the document; it is recommended that districts use these materials together to provide a full understanding of the process. Flowchart for Braille and Large Print Flowchart for Audio and Digital Files Building the resources housed in NIMAC In order to build the resources available in NIMAC, districts should include contract language when ordering textbooks that ensures publishers will be asked to create a NIMAS file of any textbook and related core materials and submit those files to NIMAC. As NIMAC grows, students will receive their instructional materials in a more timely manner. Resources embedded within the flowcharts offer school districts guidance on potential contract language. Section 200.2(b)(10)(i) of the Regulations of the Commissioner of Education indicates that school districts must ensure that preference in the purchase of instructional materials is given to those publishers who agree to provide such instructional materials in alternative formats. This important step of consistently ordering textbooks and related core materials in NIMAS format will help to inform publishers that there is a market for accessible materials. 
By demonstrating demand, school districts will ultimately assist not only those individuals who cannot access materials from NIMAC, but also those individuals who may prefer accessible files aligned with such initiatives as Response to Intervention (RtI), Differentiated Instruction, and Universal Design for Learning (UDL). It is the goal that school districts will eventually be able to purchase these files directly from the publisher. School district resources - Consistent with the direction of IDEA, NYSED has developed technical assistance materials and a network of trainers who can provide training and guidance to school districts regarding AIM, NIMAS/NIMAC and the process for obtaining AIM. The technical assistance materials include the flowcharts and related resource material provided on this page. The network of trainers are the Regional Special Education Training Specialists within each of the 10 Regional Special Education Technical Assistance Support Centers (RSE-TASC). Please locate the Regional Special Education Training Specialist in your area and request training and/or assistance for your school district. - The National Center on Accessible Educational Materials: A resource for state- and district-level educators, parents, publishers, conversion houses, accessible media producers, and others interested in learning more about and implementing AIM and NIMAS. - The National Center on Accessible Educational Materials has created the AIM NAVIGATOR. It is an interactive online tool that facilitates the process of decision-making about accessible instructional materials for an individual student. The AIM Navigator guides teams through a step-by-step process and provides just-in-time support with Frequently Asked Questions (FAQs), resources, and links to other helpful tools at each of four major decision-points: determining the need for accessible instructional materials; selecting format(s) that address student needs; acquiring needed formats; and, selecting supports for use (technology, training, instructional strategies, support services, and other accommodations and modifications). - The National Center on Accessible Instructional Materials has also created the AIM Explorer. This is a free simulation that combines grade-leveled digital text with access features common to most text readers and other supported reading software. It is designed to assist teams to understand reader preferences. 
District resources from the flowcharts are as follows: Reference During the Identification Period Large Print Questions and Answers Other resources Contacts: - Sophie McDermott, 518-486-7462 or [email protected] - Regional Special Education Technical Assistance Support Centers (RSE-TASC) Federal Guidance - National Center on Accessible Educational Materials - NIMAS/NIMAC Questions and Answers - Training Resources - Policy Guidance - Dear Colleague Letter (6/22/12) encouraging SEAs and LEAs to ask publishers to use the MathML3 Structure Guidelines - Dear Colleague Letter (5/26/11)and accompanying Frequently Asked Questions - Letter to College or University President (6/29/10) and Accompanying Q & A - Letter to [redacted information] (5/6/08) - Letter to New Mexico State Director of Special Education (1/30/08) - Letter to American Printing House for the Blind (5/7/07) - Letter to Recordings for the Blind and Dyslexic (3/16/07) New York State Regulations and Guidance Relevant Part 200 Regulations of the Commissioner of Education See: Section 200.2(b)(10) regarding alternative formats, NIMAS, and preference to publishers.
https://www.p12.nysed.gov/specialed/aim/
Though a comprehensive guidance and career development program is available to all students, we recognize the need for some students to have other growth opportunities. While some may find it easy to adjust to change, others struggle with self-identity, family issues, social/emotional and academic difficulties. The A.P. Brewer High School Guidance and Counseling program provides support to students in need as well as in crisis and offers services to meet those needs. Students see the counselors individually as needed. To ensure that students and families shall receive the most professional services possible, A.P. Brewer High School counselors abide by the code of ethics in their delivery of services and participate in ongoing professional development activities designed to increase and enhance both their skills and credibility in the performance of their duties within the school community. The A.P. Brewer High School Guidance and Counseling program is committed to addressing the needs of all students by helping them to acquire the competencies and skills necessary to compete in the competitive, ever-changing world of work. Providing a proactive, comprehensive program assists all students in their efforts to become responsible, productive citizens. Students, teachers, and parents will find a wide variety of valuable resources and information contained within this website that will help to prepare for life after high school.
https://www.morgank12.org/Domain/1257
In the past couple of years or so, positive affirmations have made a name for themselves in the spiritual community. Perhaps even more fascinating is that people who have nothing to do with spirituality also believe in their power. Have you ever tried affirmations for yourself? Or are you a total beginner who is confused about whether they really work? Whatever the case may be, this article is going to guide you through everything related to positive affirmations. Hopefully, your intellectual curiosity will be satisfied and you will become motivated to start practising them yourself. What Are Positive Affirmations? Positive affirmations are statements that are designed to empower your mind with new ideas and thought patterns. In simpler words, they are positive sentences aimed at rewiring or reprogramming your subconscious mind with new beliefs. Here are a few examples of positive affirmations. If you are a total beginner to affirmations, they can be a good place to start. - I am grateful for my life and all my joys. - I am capable of achieving anything I want. - No goal is impossible for me to achieve. - I create my own destiny through the power of my thoughts. - I am always becoming a better version of myself. - Things are always working out for the best for me. - I am loved, supported and admired. - I am worthy of all the good things I desire. All of us have our own unique sets of beliefs and perspectives about the world. These beliefs are (mostly) beneficial, as they help us make sense of the world around us. However, sometimes we acquire limiting beliefs too. A belief as subtle as "I am not enough" or "I am not worthy" is capable of affecting your self-esteem. This keeps you from chasing your dreams and ultimately forces you to live a life well below your actual potential. This is where positive affirmations come into play. Through positive affirmations, we deliberately introduce new, powerful beliefs into the subconscious. This makes the more destructive thoughts fall away on their own, making room for positivity and possibility. The Spiritual Origins of Positive Affirmations. Using affirmations as a tool to change one's mental state can be traced back to Buddhism and Hinduism. The Sanskrit word for affirmation is mantra. Sages and monks sitting in the Himalayan mountains have been using mantras for centuries. These mantras aid in meditation by letting the mind focus solely on words instead of unrelated chains of thought. Secondly, when a certain mantra is repeated over and over again, it becomes a permanent part of the memory. This ultimately impacts other thoughts, perspectives and beliefs held in your mind. Mantras are now used by spiritual teachers and followers all around the world, whether they belong to the eastern school of thought or to western philosophy. Current-day spiritual enthusiasts believe that positive affirmations are a great tool for manifesting (attracting) the things, people and situations you want in your life. Is that possible? We will come to that soon, but, remarkably, the answer is yes. Positive Affirmations — What Does Science Say? Are affirmations just wishful thinking, or is there some real, convincing evidence that affirmations work? Here is what the science says. A study conducted by Prof.
Emily Falk at the University of Pennsylvania tells us that when someone deliberately practices positive affirmations, they are able to immediately shift their perspective and view "otherwise threatening information as more self-relevant and valuable." Another study, which included MRI evidence from the participants' brains, suggests that when an individual practices self-affirmation, activity increases in neural pathways in the prefrontal cortex. These are the brain systems associated with reward and self-related processing. These are just two of the hundreds of scientific studies that have been done on the power of positive affirmations. The science is clear — positive affirmations work wonders! All you have to do is give them a try and see the results for yourself. The Interesting Benefits of Affirmations - Positive affirmations can be an incredible tool for getting rid of negative beliefs and thought patterns. These would otherwise keep you from getting the best in life. These beliefs might be related to your self-worth, your confidence or any other problem hindering your progress. - Affirmations aid you in viewing things and your life situations from a different perspective. Sometimes we get so caught up in the problem at hand that it's hard to see the whole picture. Affirmations can help you with building that balanced perspective in life. - These positive statements also instill a sense of self-confidence in the person practicing them. When positive words are repeated again and again, the brain starts believing them as true. Very soon, that new belief starts reflecting in your self-esteem and confidence. - Finally, affirmations can be used as mantras in meditation. They will help in getting rid of the monkey chatter and calming down your nervous system so that you can more easily slip into a meditative state. Positive Affirmations and Manifesting. Even though affirmations have so many powerful, scientifically proven benefits, they are still most widely used by manifestation (law of attraction) enthusiasts. These people believe that they can attract anything they want in their life by first reprogramming their minds for that desire. Whether it's doing better as a student or simply feeling better and stronger as a woman, people have used positive affirmations with a lot of success. Other areas where positive affirmations can be great are in developing better relationships, whether with yourself or with others. Also, quite topical these days, and one I have personally worked with: positive affirmations that help you in a work-from-home setting. But how is it possible to attract things into your life by chanting a few mantras? Glad you asked. The secret behind the raging success of affirmations in the manifestation community is — repetition. When we repeat something over and over again, the brain builds a neural pathway corresponding to that thought, statement or experience. Once those neural pathways become stronger, they start impacting how you view the world. For instance, if you believe that you are a magnet for opportunities, you are naturally going to see more opportunities. Because, through repetition, you have convinced your brain that such information is important to you; therefore, your Reticular Activating System (RAS) is going to bring you data that conforms with that thought.
Manifesting otherwise seems like a woo-woo concept, but when you combine it with positive affirmations, you combine the power of spirituality with the practicality of science, and that's when the magic starts unfolding in your life. How to Use Positive Affirmations? To make the best out of positive affirmations and fully harness them to your advantage, you can follow these steps: - First, identify your purpose for using positive affirmations. Do you want to be more confident? Get a better job? Or are you working on some other aspect of life? Developing clarity is very important to make affirmations work. - Next, create a couple of positive affirmations addressing that specific issue or problem. Make sure all your affirmations are in the present tense. - Then start repeating your set for 5 minutes a day. Saying them out loud is a faster and better method of repetition, as it engages all your senses simultaneously. - Be consistent in your repetitions. Make sure to carve out 5 minutes every day (preferably at the same time) to get your affirmations done. - Lastly, be patient with affirmations. It can take a few weeks for your mind to accept new thoughts, so make sure you stay committed to your journey. Can You Say Affirmations in Your Head? Yes, you can repeat affirmations in your head, and they can be repeated aloud, too. Do whatever feels most comfortable to you, and try experimenting with both methods in the beginning until you figure out what works best for you. How Long Should You Repeat Affirmations? For beginners, a minimum of 5 minutes is a good length of time to repeat your affirmations. After a few days, you can start doing these 5-minute sessions twice daily for faster results. What Is the Best Time for Affirmations? The best time to repeat affirmations is when your brain waves are slow, so the words you repeat have a greater chance of being absorbed into your subconscious mind. The moment you wake up in the morning and the time at night when you are just about to go to sleep are therefore the two best windows in which to repeat your affirmations. How Long Do Affirmations Take to Work? A great habit-building hack is the 21-day rule. Studies have shown that 21 days are an excellent start to familiarizing your brain with a new task. So affirmations should also be done for a minimum of 21 days to see results. It may take longer for some people, and for others it may take only a few days. 7 Affirmations That Set You on a Better Path. Here are 7 affirmations that anyone can use to improve the quality of their life and overall well-being: - I am more than enough. - I am powerful. - My life is an exciting adventure. - I am a happy soul. - I achieve all my goals and dreams, every time. - The universe loves, supports and guides me at every step. - I experience joy and abundance in all areas of my life.
https://goddessgift.com/mind-body/positive-affirmations/
Trends By Amy Roach Partridge No tags available While the avian flu virus is clearly wreaking havoc with the world's bird population, and is being hyped as a potential deadly threat to human health and safety, it could also do a real number on the global supply chain, as a recent simulation by MIT's Center for Transportation and Logistics illustrated. In an "experimental theater" setting, a panel of employees from Intel, Arnold Communications, and EMC Corporation acted out a live simulation that showed Vaxxon—a fictional, publicly traded wireless phone manufacturer—responding to news that a worker at its contract manufacturing plant in China has died from the avian flu. The news comes just as the company is finishing production for the launch of its hot new SlimPhone 360. Because Chinese health officials cannot determine if the virus has spread, the plant is quarantined. Production and distribution of the new phone fall off, threatening the timing of the launch and the company's subsequent profits. And workers at Vaxxon's U.S. port of entry and third-party kitting facility refuse to handle shipments for fear of contamination. At the same time, a public relations nightmare blossoms as the media grabs hold of the story and Vaxxon becomes the public face of a potential pandemic. Luckily, Vaxxon had advanced contingency/emergency response plans in place. As the news breaks, panel members acted as Vaxxon's cross-functional emergency response team. HR immediately notifies employees and their families about the crisis; government affairs employees work with U.S. and Chinese officials on the quarantine status and shipping product out of China; and the communications team sets up a media command center. The scenario was not without challenging snafus however, including one rogue Vaxxon engineer who left China for Hong Kong the day of the outbreak, then returned to Vaxxon's U.S. headquarters, possibly carrying the infectious flu with him. In addition, negative press associated with Vaxxon and consumer fears over tainted products lead to in-fighting among the emergency response team over whether to delay the launch and change the name of the SlimPhone product. Add to that an overzealous CEO who frequently disrupts the emergency response panel's operations, and ultimately decides to cancel launch plans for the phone—a major bummer for an already bummed-out staff. Where did the exercise rank on the believability scale? The audience—supply chain professionals from a variety of high-profile companies including Boston Scientific, Cisco Systems, ExxonMobil, and Gillette—reacted positively. In particular, attendees applauded the panel's multidisciplinary nature. "These situations impact all supply chain partners, so companies have to involve human resources, IT, transportation, and government affairs, as well as logistics, in emergency response planning or they are missing the boat," says William Archer, global security director, Limited Brands. He points to his company's experience after Hurricane Katrina as an example. The retailer, which lost 38 stores in the Gulf Coast area, relied heavily on its cross-functional emergency response team to get operations up and running and track down and support some 1,900 employees during the aftermath. "Katrina was a mega-test of our contingency plans, which worked very well," says Archer. Tabletop testing exercises such as this simulation are crucial to succeeding during supply chain disruptions. "If you don't test, there is no point in planning," Archer says. 
Not all companies have the time or resources to devote to such detailed emergency planning, however—a key point in the panel discussion that took place after the simulation. While the crisis in this case was an avian flu outbreak, proper preparedness for supply chain disruptions of any kind is crucial. Here are some suggestions that emerged from the simulation and discussion panel. Tie emergency preparedness into other corporate goals. "Building flexibility into the supply chain gives companies the ability to respond to market demand fluctuations, regardless of whether or not they are caused by a catastrophe," says Yossi Sheffi, discussion panel member and director of MIT's Center for Transportation and Logistics. Empower emergency response teams to make important decisions. "Often, senior executives feel the need to take over in emergency situations, even if they haven't been properly trained," says Sheffi. "This can hinder response efforts." Weigh long-term vs. short-term costs when deciding whether to use a single-source or multiple-source supplier strategy. Vaxxon's Chinese manufacturing plant was the sole facility manufacturing its new phone, so the outbreak severely damaged its ability to produce and ship merchandise. But often, companies choose a sole supplier to achieve cost efficiencies and economies of scale. "Using a sole supplier can be cheaper in the short run, but can end up being costly if things go wrong," says Vaxxon "employee" Jim Holko, Intel's security business programs manager. Ultimately, the thought-provoking exercise raised more questions than answers, which is not surprising. Even a well-coordinated disaster simulation may not bear much resemblance to real life, when contingency plans struggle to hold up under the stress and strain of an impending disaster. It's doubtful, however, that anyone left the MIT simulation questioning the importance of preparing the supply chain for any number of unexpected disruptions.

JDA Software and Manugistics Merge

Continuing the M&A frenzy that has hit the logistics sector, Phoenix-based software provider JDA Software is acquiring Manugistics, a global provider of supply chain and revenue management solutions based in Rockville, Md. The $211-million merger, which is expected to close in the second or third quarter of 2006, promises to bring advanced optimization solutions to retailers currently using JDA products, according to analysts. It also extends JDA's reach further into the supply chain and sweetens its appeal to consumer goods manufacturers and wholesalers that relied on Manugistics' price optimization and transportation management capabilities. In addition, the deal has the potential for large-scale financial gains—based on last year's financials, the combined company would have earned more than $390 million. What does the merger mean for the logistics industry? It depends on where you fall within the supply chain, according to Lora Cecere, research director for Boston-based AMR Research. "This is good news for retailers and wholesale distribution companies using Manugistics. JDA historically has great customer service and understands these verticals," she says. "Manufacturers, however, should push JDA for answers on how it plans to support their industries." In connection with this transaction, JDA secured a $50-million investment from Thoma Cressey, an enterprise software investor with some $2 billion in equity under management.
Demand for 'On-Demand' on the Rise

Though they started as niche offerings, on-demand or software-as-a-service (SaaS) applications are gaining prominence in the supply chain. In fact, a majority of logistics professionals responding to a recent Aberdeen Group study prefer on-demand applications—where software is provided to users through a network such as the Internet—to traditional supply chain management technology vendors. The study, The On-Demand Tipping Point in the Supply Chain, finds more than 50 percent of respondents using or considering using on-demand or SaaS applications, particularly for externally facing processes with suppliers, customers, and transportation carriers. Only 5 percent of respondents prefer traditional SCM vendors, finds the study. Why is on-demand in such high demand now? Because the software is hosted by a third party and boasts a "pay-as-you-go" billing structure, companies using SaaS applications can often minimize in-house IT requirements, and reduce costs associated with implementation. Current on-demand SCM users also report quick return on investment, and easy maintenance and upgrade possibilities, according to the study. Other key takeaways from the Aberdeen report include: User adoption of on-demand SCM is spreading from supply chain leaders to laggards. CEOs and COOs at top supply chain performers are twice as likely to be supporters of the on-demand model.
Philosophical rationalism encompasses several strands of thought, all of which usually share the conviction that reality is actually rational in nature and that making the proper deductions is essential to achieving knowledge. Such deductive logic and the use of mathematical processes provide the chief methodological tools. Thus, rationalism has often been held in contrast to empiricism. Earlier forms of rationalism are found in Greek philosophy, most notably in Plato, who held that the proper use of reasoning and mathematics was preferable to the methodology of natural science. The latter is not only in error on many occasions, but empiricism can only observe facts in this changing world. By deductive reason, Plato believed that one could extract the innate knowledge which is present at birth, derived from the realm of forms. However, rationalism is more often associated with Enlightenment philosophers such as Descartes, Spinoza, and Leibniz. It is this form of continental rationalism that is the chief concern of this article.

Innate ideas are those that are the very attributes of the human mind, inborn by God. As such these "pure" ideas are known a priori by all humans, and are thus believed by all. So crucial were they for rationalists that it was usually held that these ideas were the prerequisite for learning additional facts. Descartes believed that, without innate ideas, no other data could be known. The empiricists attacked the rationalists at this point, arguing that the content of the so-called innate ideas was actually learned through one's experience, though perhaps largely unreflected upon by the person. Thus we learn vast amounts of knowledge through our family, education, and society which comes very early in life and cannot be counted as innate. One rationalistic response to this empirical contention was to point out that there were many concepts widely used in science and mathematics that could not be discovered by experience alone. The rationalists, therefore, concluded that empiricism could not stand alone, but required large amounts of truth to be accepted by the proper use of reason. Perhaps the best example of this conclusion is found in the philosophy of Descartes. Beginning with the reality of doubt, he determined to accept nothing of which he could not be certain. However, at least one reality could be deduced from this doubt: he was doubting and must therefore exist. In the words of his famous dictum, "I think, therefore I am." From the realization that he doubted, Descartes concluded that he was a dependent, finite being. He then proceeded to the existence of God via forms of the ontological and cosmological arguments. In Meditations III-IV of his Meditations on First Philosophy Descartes argued that his idea of God as infinite and independent is a clear and distinct argument for God's existence. In fact, Descartes concluded that the human mind was not capable of knowing anything more certainly than God's existence. A finite being was not capable of explaining the presence of the idea of an infinite God apart from his necessary existence. Next Descartes concluded that since God was perfect, he could not deceive finite beings. Additionally, Descartes's own faculties for judging the world around him were given him by God and hence would not be misleading.
The result was that whatever he could deduce by clear and distinct thinking (such as that found in mathematics) concerning the world and others must therefore be true. Thus the necessary existence of God both makes knowledge possible and guarantees truth concerning those facts that can be clearly delineated. Beginning with the reality of doubt, Descartes proceeded to his own existence, to God, and to the physical world. Spinoza also taught that the universe operated according to rational principles, that the proper use of reason revealed these truths, and that God was the ultimate guarantee of knowledge. However, he rejected Cartesian dualism in favor of monism (referred to by some as pantheism), in that there was only one substance, termed God or nature. Worship was expressed rationally, in accordance with the nature of reality. Of the many attributes of substance, thought and extension were the most crucial. Spinoza utilized geometrical methodology to deduce epistemological truths which could be held as factual. By limiting much of knowledge to self-evident truths revealed by mathematics, he thereby constructed one of the best examples of rationalistic system-building in the history of philosophy. Leibniz set forth his concept of reality in his major work Monadology. In contrast to the materialistic concept of atoms, monads are unique metaphysical units of force that are not affected by external criteria. Although each monad develops individually, they are interrelated through a logical "preestablished harmony," involving a hierarchy of monads culminating in God, the Monad of monads. For Leibniz a number of arguments revealed the existence of God, who was established as the being responsible for the ordering of the monads into a rational universe which was "the best of all possible worlds." God also was the basis for knowledge, and this accounts for the epistemological relationship between thought and reality. Leibniz thus returned to a concept of a transcendent God much closer to the position held by Descartes and in contrast to Spinoza, although neither he nor Spinoza began with the subjective self, as did Descartes. Thus rationalistic epistemology was characterized both by a deductive process of argumentation, with special attention being given to mathematical methodology, and by the anchoring of all knowledge in the nature of God. Spinoza's system, modeled on Euclidean geometry, claimed to demonstrate God or nature as the one substance of reality. Some scholars of the Cartesian persuasion moved to the position of occasionalism, whereby mental and physical events correspond to each other (as the perceived noise of a tree falling corresponds with the actual occurrence), as they are both ordained by God. Leibniz utilized a rigorous application of calculus to deductively derive the infinite collection of monads which culminate in God. This rationalistic methodology, and the stress on mathematics in particular, was an important influence on the rise of modern science during this period. Galileo held some essentially related ideas, especially in his concept of nature as being mathematically organized and perceived as such through reason. A number of trends in English deism reflect the influence of, and similarities to, continental rationalism as well as British empiricism.
Besides the acceptance of innate knowledge available to all men and the deducing of propositions from such general knowledge, deists such as Matthew Tindal, Anthony Collins, and Thomas Woolston attempted to dismiss miracles and fulfilled prophecy as evidences for special revelation. In fact deism as a whole was largely characterized as an attempt to find a natural religion apart from special revelation. Many of these trends had marked effects on contemporary higher criticism. Several criticisms, however, have been raised against rationalism and its deistic offshoots. First, Locke, Hume, and the empiricists never tired of attacking the concept of innate ideas. They asserted that young children gave little, if any, indication of any crucial amount of innate knowledge. Rather, the empiricists were quick to point to sense experience as the chief school-teacher, even in infancy. Second, empiricists also asserted that reason could not be the only (or even the primary) means of achieving knowledge when so much is gathered by the senses. While it is true that much knowledge may not be reducible to sense experience, this also does not indicate that reason is the chief means of knowing. Third, it has frequently been pointed out that reason alone leads to too many contradictions, metaphysical and otherwise. For example, Descartes's dualism, Spinoza's monism, and Leibniz's monadology have all been declared as being absolutely knowable, in the name of rationalism. If one or more of these options is incorrect, what about the remainder of the system(s)? Fourth, rebuttals to rationalistic and deistic higher criticism appeared quickly from the pens of such able scholars as John Locke, Thomas Sherlock, Joseph Butler, and William Paley. Special revelation and miracles were especially defended against attack. Butler's Analogy of Religion in particular was so devastating that many have concluded that it is not only one of the strongest apologetics for the Christian faith, but that it was the chief reason for the demise of deism. G R Habermas (Elwell Evangelical Dictionary) Bibliography R. Descartes, Discourse on Method; P. Gay, Deism: An Anthology; G. Leibniz, Monadology; B. Spinoza, Ethics and Tractatus Theologico-Politicus; C.L. Becker, The Heavenly City of the Eighteenth-Century Philosophers; J. Bronowski and B. Mazlish, The Western Intellectual Tradition: From Leonardo to Hegel; F. Copleston, A History of Philosophy, IV; W.T. Jones, A History of Western Philosophy, III; B. Williams, Encyclopedia of Philosophy, VII.

(Latin, ratio -- reason, the faculty of the mind which forms the ground of calculation, i.e. discursive reason. See APOLOGETICS; ATHEISM; BIBLE; DEISM; EMPIRICISM; ETHICS; BIBLICAL EXEGESIS; FAITH; MATERIALISM; MIRACLE; REVELATION). The term is used: (1) in an exact sense, to designate a particular moment in the development of Protestant thought in Germany; (2) in a broader, and more usual, sense to cover the view (in relation to which many schools may be classed as rationalistic) that the human reason, or understanding, is the sole source and final test of all truth. It has further: (3) occasionally been applied to the method of treating revealed truth theologically, by casting it into a reasoned form, and employing philosophical Categories in its elaboration. These three uses of the term will be discussed in the present article. (1) The German school of theological Rationalism formed a part of the more general movement of the eighteenth-century "Enlightenment".
It may be said to owe its immediate origin to the philosophical system of Christian Wolff (1679-1754), which was a modification, with Aristotelean features, of that of Leibniz, especially characterized by its spiritualism, determinism, and dogmatism. This philosophy and its method exerted a profound influence upon contemporaneous German religious thought, providing it with a rationalistic point of view in theology and exegesis. German philosophy in the eighteenth century was, as a whole, tributary to Leibniz, whose "Théodicée" was written principally against the Rationalism of Bayle: it was marked by an infiltration of English Deism and French Materialism, to which the Rationalism at present considered had great affinity, and towards which it progressively developed: and it was vulgarized by its union with popular literature. Wolff himself was expelled from his chair at the University of Halle on account of the Rationalistic nature of his teaching, principally owing to the action of Lange (1670-1744; cf. "Causa Dei et religionis naturalis adversus atheismum", and "Modesta Disputatio", Halle, 1723). Retiring to Marburg, he taught there until 1740, when he was recalled to Halle by Frederick II. Wolff's attempt to demonstrate natural religion rationally was in no sense an attack upon revelation. As a "supranaturalist" he admitted truths above reason, and he attempted to support by reason the supernatural truths contained in Holy Scripture. But his attempt, while it incensed the pietistic school and was readily welcomed by the more liberal and moderate among the orthodox Lutherans, in reality turned out to be strongly in favour of the Naturalism that he wished to condemn. Natural religion, he asserted, is demonstrable; revealed religion is to be found in the Bible alone. But in his method of proof of the authority of Scripture recourse was had to reason, and thus the human mind became, logically, the ultimate arbiter in the case of both. Supranaturalism in theology, which it was Wolff's intention to uphold, proved incompatible with such a philosophical position, and Rationalism took its place. This, however, is to be distinguished from pure Naturalism, to which it led, but with which it never became theoretically identified. Revelation was not denied by the Rationalists; though, as a matter of fact, if not of theory, it was quietly suppressed by the claim, with its ever-increasing application, that reason is the competent judge of all truth. Naturalists, on the other hand, denied the fact of revelation. As with Deism and Materialism, the German Rationalism invaded the department of Biblical exegesis. Here a destructive criticism, very similar to that of the Deists, was levelled against the miracles recorded in, and the authenticity of the Holy Scripture. Nevertheless, the distinction between Rationalism and Naturalism still obtained. The great Biblical critic Semler (1725-91), who is one of the principal representatives of the school, was a strong opponent of the latter; in company with Teller (1734-1804) and others he endeavoured to show that the records of the Bible have no more than a local and temporary character, thus attempting to safeguard the deeper revelation, while sacrificing to the critics its superficial vehicle. He makes the distinction between theology and religion (by which he signifies ethics). The distinction made between natural and revealed religion necessitated a closer definition of the latter.
For Supernaturalists and Rationalists alike religion was held to be "a way of knowing and worshipping the Deity", but consisting chiefly, for the Rationalists, in the observance of God's law. This identification of religion with morals, which at the time was utilitarian in character (see UTILITARIANISM), led to further developments in the conceptions of the nature of religion, the meaning of revelation, and the value of the Bible as a collection of inspired writings. The earlier orthodox Protestant view of religion as a body of truths published and taught by God to man in revelation was in process of disintegration. In Semler's distinction between religion (ethics) on the one hand and theology on the other, with Herder's similar separation of religion from theological opinions and religious usages, the cause of the Christian religion, as they conceived it, seemed to be put beyond the reach of the shock of criticism, which, by destroying the foundations upon which it claimed to rest, had gone so far to discredit the older form of Lutheranism. Kant's (1724-1804) criticism of the reason, however, formed a turning-point in the development of Rationalism. For a full understanding of his attitude, the reader must be acquainted with the nature of his pietistic upbringing and later scientific and philosophical formation in the Leibniz-Wolff school of thought (see KANT, PHILOSOPHY OF). As far as concerns the point that occupies us at present, Kant was a Rationalist. For him religion was coextensive with natural, though not utilitarian, morals. When he met with the criticisms of Hume and undertook his famous "Kritik", his preoccupation was to safeguard his religious opinions, his rigorous morality, from the danger of criticism. This he did, not by means of the old Rationalism, but by throwing discredit upon metaphysics. The accepted proofs of the existence of God, immortality, and liberty were thus, in his opinion, overthrown, and the well-known set of postulates of the "categoric imperative" put forward in their place. This, obviously, was the end of Rationalism in its earlier form, in which the fundamental truths of religion were set out as demonstrable by reason. But, despite the shifting of the burden of religion from the pure to the practical reason, Kant himself never seems to have reached the view -- to which all his work pointed -- that religion is not mere ethics, "conceiving moral laws as divine commands", no matter how far removed from Utilitarianism -- not an affair of the mind, but of the heart and will; and that revelation does not reach man by way of an exterior promulgation, but consists in a personal adaptation towards God. This conception was reached gradually with the advance of the theory that man possesses a religious sense, or faculty, distinct from the rational (Fries, 1773-1843; Jacobi, 1743-1819; Herder, 1744-1803 -- all opposed to the Intellectualism of Kant), and ultimately found expression with Schleiermacher (1768-1834), for whom religion is to be found neither in knowledge nor in action, but in a peculiar attitude of mind which consists in the consciousness of absolute dependence upon God. Here the older distinction between natural and revealed religion disappears. All that can be called religion -- the consciousness of dependence -- is at the same time revelational, and all religion is of the same character.
There is no special revelation in the older Protestant (the Catholic) sense, but merely this attitude of dependence brought into being in the individual by the teaching of various great personalities who, from time to time, have manifested an extraordinary sense of the religious. Schleiermacher was a contemporary of Fichte, Schelling, and Hegel, whose philosophical speculations had influence, with his own, in ultimately subverting Rationalism as here dealt with. The movement may be said to have ended with him -- in the opinion of Teller "the greatest theologian that the Protestant Church has had since the period of the Reformation". The majority of modern Protestant theologians accept his views, not, however, to the exclusion of knowledge as a basis of religion. Parallel with the development of the philosophical and theological views as to the nature of religion and the worth of revelation, which provided it with its critical principles, took place an exegetical evolution. The first phase consisted in replacing the orthodox Protestant doctrine (i.e. that the Sacred Scriptures are the Word of God) by a distinction between the Word of God contained in the Bible and the Bible itself (Töllner, Herder), though the Rationalists still held that the purer source of revelation lies rather in the written than in the traditional word. This distinction led inevitably to the destruction of the rigid view of inspiration, and prepared the ground for the second phase. The principle of accommodation was now employed to explain the difficulties raised by the Scripture records of miraculous events and demoniacal manifestations (Senf, Vogel), and arbitrary methods of exegesis were also used to the same end (Paulus, Eichhorn). In the third phase Rationalists had reached the point of allowing the possibility of mistakes having been made by Christ and the Apostles, at any rate with regard to non-essential parts of religion. All the devices of exegesis were employed vainly; and, in the end, Rationalists found themselves forced to admit that the authors of the New Testament must have written from a point of view different from that which a modern theologian would adopt (Henke, Wegscheider). This principle, which is sufficiently elastic to admit of usage by nearly every variety of opinion, was admitted by several of the Supernaturalists (Reinhard, Storr), and is very generally accepted by modern Protestant divines, in the rejection of verbal inspiration. Herder is very clear on the distinction -- the truly inspired must be discerned from that which is not; and de Wette lays down as the canon of interpretation "the religious perception of the divine operation, or of the Holy Spirit, in the sacred writers as regards their belief and inspiration, but not respecting their faculty of forming ideas. . ." In an extreme form it may be seen employed in such works as Strauss's "Leben Jesu", where the hypothesis of the mythical nature of miracles is developed to a greater extent than by Schleiermacher or de Wette. (2) Rationalism, in the broader, popular meaning of the term, is used to designate any mode of thought in which human reason holds the place of supreme criterion of truth; in this sense, it is especially applied to such modes of thought as contrasted with faith. Thus Atheism, Materialism, Naturalism, Pantheism, Scepticism, etc., fall under the head of rationalistic systems. As such, the rationalistic tendency has always existed in philosophy, and has generally shown itself powerful in all the critical schools.
As has been noted in the preceding paragraph, German Rationalism had strong affinities with English Deism and French Materialism, two historic forms in which the tendency has manifested itself. But with the vulgarization of the ideas contained in the various systems that composed these movements, Rationalism has degenerated. It has become connected in the popular mind with the shallow and misleading philosophy frequently put forward in the name of science, so that a double confusion has arisen, in which questionable philosophical speculations are taken for scientific facts, and science is falsely supposed to be in opposition to religion. This Rationalism is now rather a spirit, or attitude, ready to seize upon any arguments, from any source and of any or no value, to urge against the doctrines and practices of faith. Beside this crude and popular form it has taken, for which the publication of cheap reprints and a vigorous propaganda are mainly responsible, there runs the deeper and more thoughtful current of critical-philosophical Rationalism, which either rejects religion and revelation altogether or treats them in much the same manner as did the Germans. Its various manifestations have little in common in method or content, save the general appeal to reason as supreme. No better description of the position can be given than the statements of the objects of the Rationalist Press Association. Among these are: "To stimulate the habits of reflection and inquiry and the free exercise of individual intellect . . . and generally to assert the supremacy of reason as the natural and necessary means to all such knowledge and wisdom as man can achieve". A perusal of the publications of the same will show in what sense this representative body interprets the above statement. It may be said, finally, that Rationalism is the direct and logical outcome of the principles of Protestantism; and that the intermediary form, in which assent is given to revealed truth as possessing the imprimatur of reason, is only a phase in the evolution of ideas towards general disbelief. Official condemnations of the various forms of Rationalism, absolute and mitigated, are to be found in the Syllabus of Pius IX. (3) The term Rationalism is perhaps not usually applied to the theological method of the Catholic Church. All forms of theological statement, however, and pre-eminently the dialectical form of Catholic theology, are rationalistic in the truest sense. Indeed, the claim of such Rationalism as is dealt with above is directly met by the counter claim of the Church: that it is at best but a mutilated and unreasonable Rationalism, not worthy of the name, while that of the Church is rationally complete, and integrated, moreover, with super-rational truth. In this sense Catholic theology presupposes the certain truths of natural reason as the preambula fidei, philosophy (the ancilla theologiæ) is employed in the defence of revealed truth (see APOLOGETICS), and the content of Divine revelation is treated and systematized in the categories of natural thought. This systematization is carried out both in dogmatic and moral theology. It is a process contemporaneous with the first attempt at a scientific statement of religious truth, comes to perfection of method in the works of such writers as St. Thomas Aquinas and St. Alphonsus, and is consistently employed and developed in the Schools. Publication information Written by Francis Aveling. Transcribed by Douglas J. Potter.
Dedicated to the Sacred Heart of Jesus Christ The Catholic Encyclopedia, Volume XII. Published 1911. New York: Robert Appleton Company. Nihil Obstat, June 1, 1911. Remy Lafort, S.T.D., Censor. Imprimatur. +John Cardinal Farley, Archbishop of New York Bibliography HAGENBACH, Kirchengesch. des 18. Jahrhunderts in Vorlesungen über Wesen u. Gesch. der Reformation in Deutschland etc., V-VI (Leipzig, 1834-43); IDEM (tr. BUCH), Compendium of the History of Doctrines (Edinburgh, 1846); HASE, Kirchengesch. (Leipzig, 1886); HENKE, Rationalismus u. Traditionalismus im 19. Jahrh. (Halle, 1864); HURST, History of Rationalism (New York, 1882); LERMINIER, De l'influence de la philosophie du XVIIIe siècle (Paris, 1833); SAINTES, Hist. critique du rationalisme en Allemagne (Paris, 1841); SCHLEIERMACHER, Der christl. Glaube nach den Grundsätzen der evangelischen Kirche (Berlin, 1821-22); SEMLER, Von freier Untersuchung des Kanons (Halle, 1771-75); IDEM, Institutio ad doctrinam christianam liberaliter discendam (Halle, 1774); IDEM, Versuch einer freier theologischen Lehrart (Halle, 1777); STÄUDLIN, Gesch. des Rationalismus u. Supranaturalismus (Göttingen, 1826); THOLUCK, Vorgesch. des Rationalismus (Halle, 1853-62); BENN, History of Rationalism in the Nineteenth Century (London, 1906).
http://mb-soft.com/believe/txn/rational.htm
As part of its "Life?" project, the Volkswagen Foundation is funding a research project that is placing membraneless organelles within cells in the spotlight as a way of understanding the fundamental processes that are essential to life. Professor Edward Lemke will be receiving roughly EUR 1 million in financing over the next five years to support his work in this field. Together with his research team, Lemke recently demonstrated that it is possible to design a membraneless organelle that can assume completely new functions within a cell. The biophysical chemist is Professor of Synthetic Biophysics at Johannes Gutenberg University Mainz (JGU) and Adjunct Director at the Institute of Molecular Biology (IMB) in Mainz.

Organelles in a living cell are capable of producing artificial proteins

The evolution of complex life forms was given a major boost when cells began to develop internal organelles. Organelles are compartmentalized areas within cells that perform specific tasks. Among these, for example, are the mitochondria that generate energy, the cell nucleus that stores the genetic material, and - in the case of plants - the chloroplasts that are responsible for photosynthesis. Some organelles are enclosed by membranes, such as the cell nucleus with its nuclear membrane, while others are membraneless. "It would be difficult to engineer an artificial organelle with a membrane because then you would also have to create a system for the efficient transport of molecules through this membrane," said Professor Edward Lemke. However, he and his team have managed to construct entirely novel membraneless organelles within a living cell. This cell then possesses multiple genetic codes. These new organelles are able to incorporate synthetic amino acids in proteins, resulting in proteins with innovative engineered functions that can be employed in a range of applications in the fields of biotechnology, material science, and biomedicine. It would, by way of example, be conceivable to integrate fluorescent components that would make it possible to actually view the interior of the cell in question using imaging techniques or even to generate antibody drugs for targeted cancer therapy. "Our breakthrough is based on rejecting the idea that it is necessary for an organelle in a cell to have a membrane or similar form of enclosing structure to have the potential to assume certain functions," Lemke pointed out. "By way of this simple but very compelling concept, we now have discovered a remarkable way of reproducing all other major cellular processes." In line with the researchers' expectations, it turned out they were indeed able to create an individually adaptable, cellular system that operated in parallel with the other functions of the cell. Their objective now is to cultivate a new kind of cell within a living cell - organelle by organelle and function by function. "Thanks to this fresh approach, we should be able, little by little, to observe and investigate the origin of eukaryotic life and the aging of eukaryotes." The research project entitled "De novo organism design from membraneless orthogonal central dogma organelles" is being financed by the Volkswagen Foundation through the final funding round of its "Life? - A Fresh Scientific Approach to the Basic Principles of Life" initiative. The purpose of this funding initiative is to promote research investigating the principles of life at the interface between the natural and the life sciences.
Edward Lemke is Professor of Synthetic Biophysics at Johannes Gutenberg University Mainz as well as Adjunct Director at the Institute of Molecular Biology (IMB). He also coordinates the DFG Priority Program on "Molecular Mechanisms of Functional Phase Separation". He was awarded an ERC Advanced Grant worth EUR 2.5 million in 2020 in support of his research.
Provided are polyesters selected from i) aliphatic polyesters wherein the dicarboxylic component and the diolic component are both aliphatic, the aliphatic dicarboxylic component comprising at least 20% by moles of 2-methylglutaric acid and up to 80% by moles of at least one second linear aliphatic saturated diacid; and ii) aliphatic-aromatic polyesters having a dicarboxylic component comprising repeating units deriving from at least one polyfunctional aromatic acid and at least one aliphatic diacid, and a diol component comprising repeating units deriving from at least one aliphatic diol, wherein said aliphatic diacid comprises a mixture consisting of at least 30% by moles of 2-methylglutaric acid and up to 70% by moles, with respect to the total moles of the aliphatic dicarboxylic component, of at least one second linear aliphatic saturated diacid; and processes for production thereof. The polyesters have great toughness and high elongation at failure values.
BACKGROUND

DETAILED DESCRIPTION

Operations analytics are routinely performed on operations data. Operations analytics may include management of complex systems, infrastructure and devices. Complex and distributed data systems are monitored at regular intervals to maximize their performance, and detected anomalies are utilized to quickly resolve problems. In operations related to information technology, data analytics are used to understand log messages, and to search for patterns and trends in telemetry signals that may have semantic operational meanings. Operational analytics relates to analysis of operations data, related to, for example, events, logs, and so forth. Various performance metrics may be generated by the operational analytics, and operations management may be performed based on such performance metrics. Operations analytics is vastly important and spans management of complex systems, infrastructure and devices. It is also interesting because relevant analytics are generally limited to anomaly detection and pattern detection. The anomalies are generally related to operations insight, and patterns are indicative of underlying semantic processes that may serve as potential sources of significant semantic anomalies. Generally, analytics is used in IT operations ("ITO") for understanding unstructured log messages and for detecting patterns and trends in telemetry signals that may have semantic operational meanings. Many ITO analytic platforms focus on data collection and transformation, and on analytic execution. However, operational analytics are generally query-based. For example, a domain expert, such as a system engineer, may query input data to extract and analyze data related to an aspect of system operations. In many situations, relevant data may be normalized and readily available to be uploaded onto a flexible and powerful analytic execution engine. However, questions or problems may need to be translated into appropriate analytic formulations in order to generate the desired responses. In a big data scenario, the sheer volume of data often negatively impacts processing of such query-based analytics. One of the biggest problems in big data analysis is that of formulating the right query. Although it may be important to extract features and execute data analytics, this may not be sufficient to address the issues related to big data. Once data is available in an appropriate format, it becomes important to know what analyses may be most productive in providing operational insights. When datasets are small and experts are readily available, platforms connecting analytic tools to automatically collected data are generally very effective. However, as the data grows larger and experts become scarce, operational data mining becomes difficult; there may be just too much data and the relationships are too complex to formulate queries that may provide much needed insights. Accordingly, there may be an overwhelming need for tools that help formulate analytic queries. Therefore, in the context of operational data, it may be important to provide an interface that may be utilized in operational investigations to easily formulate and solve operational issues. As disclosed in various examples herein, such an interface may be based on concatenations of pattern and anomaly detectors. In particular, interesting analytics may be highlighted, and relevant analytics may be suggested, independent of a query.
An interactive ecosystem may be disclosed where new combinations of anomalies and patterns may compete for selection by a domain expert. Generally, it may be difficult to define a set of anomaly and pattern detectors that may encompass all the detection that may be necessary for operational analytics. Additionally every significant set of detectors may initially have an overwhelming set of anomalies and patterns for the domain expert to investigate, validate, and/or disqualify. As disclosed herein, such issues may be addressed by using a limited, but generic, set of anomaly detectors and pattern recognition schemes, which may combine automatically so that input data related to a series of events and telemetry measurements may be enriched whenever an anomaly or pattern may be detected. Such feedback enables deep semantic explorations that may eventually encompass a large set of complex analytics. Furthermore, such feedback-based interaction constitutes a competitive ecosystem for prioritized analytics, where analytics compete for the attention of the domain expert, highlighting the analyses that are most likely to be relevant to the domain expert. Moreover, changes in operational performance are driven by changes in the underlying input data and by continuous interactions with domain experts. New data may manifest new anomalies and patterns, whereas new interactions with domain experts may introduce new tagged patterns and system anomalies. As described in various examples herein, interactive detection of system anomalies is disclosed. One example is a system including a data processor, an anomaly processor, and an interaction processor. Input data related to a series of events and telemetry measurements is received by the data processor. The anomaly processor detects presence of a system anomaly in the input data, the system anomaly indicative of a rare situation that is distant from a norm of a distribution based on the series of events and telemetry measurements. The interaction processor is communicatively linked to the anomaly processor and to an interactive graphical user interface. The interaction processor displays, via the interactive graphical user interface, an output data stream based on the presence of the system anomaly, receives, from the interactive graphical user interface, feedback data associated with the output data stream, and provides the feedback data to the anomaly processor for operations analytics based on the feedback data. Generally, the term “system anomaly” as used herein may correspond to a time-slot where multiple events/signals show collectively anomalous behavior through their combined anomaly measures. Alternatively the term “rare situation” may be used to emphasize a co-location in time. Both these terms may indicate some collective anomaly situation/behavior. Generally, the system anomaly of interest may appear on the graphical user interface and analysis may proceed without the user needing to enter a query. The analysis may begin by selection of the system anomaly, where the major sources of the system anomaly are prioritized—so that the highest contribution appears more prominently, and similar system anomalies are identified, thereby allowing for fast analysis that usually does not require any further search or data queries. The interface also enables filtering of input data using keywords. This may be useful for instances where the problem the user may be set to investigate does not seem to appear on the initial interface. 
It allows for the interaction described herein from a filtered subset of data. The interface also highlights keywords related to system anomalies as potential filter words as another means of highlighting system anomalies for the benefit of a user, such as, for example, a domain expert reviewing the system anomalies for operations analytics. Generally, the feedback data need not be based on the same type of received system anomalies, i.e., at each iteration, a certain anomaly type (rarity, flood, etc.) may be added and/or removed from the set of the events. As described herein, a weighting may be utilized (e.g., weight 0 for removal of a certain anomaly type). The techniques described herein enable automatic detection of system anomalies without a query. However, such automatic detection techniques may be combined with known system anomalies, and/or query-based detection of system anomalies to form a hybrid system.

In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific examples in which the disclosure may be practiced. It is to be understood that other examples may be utilized, and structural or logical changes may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims. It is to be understood that features of the various examples described herein may be combined, in part or whole, with each other, unless specifically noted otherwise.

FIG. 1 is a functional block diagram illustrating an example of a system 100 for interactive detection of system anomalies. System 100 is shown to include a data processor 102, an anomaly processor 104, and an interaction processor 106.

The term "system" may be used to refer to a single computing device or multiple computing devices that communicate with each other (e.g. via a network) and operate together to provide a unified service. In some examples, the components of system 100 may communicate with one another over a network. As described herein, the network may be any wired or wireless network, and may include any number of hubs, routers, switches, cell towers, and so forth. Such a network may be, for example, part of a cellular network, part of the internet, part of an intranet, and/or any other type of network.

The components of system 100 may be computing resources, each including a suitable combination of a physical computing device, a virtual computing device, a network, software, a cloud infrastructure, a hybrid cloud infrastructure that includes a first cloud infrastructure and a second cloud infrastructure that is different from the first cloud infrastructure, and so forth. The components of system 100 may be a combination of hardware and programming for performing a designated function. In some instances, each component may include a processor and a memory, while programming code is stored on that memory and executable by a processor to perform a designated function. The computing device may be, for example, a web-based server, a local area network server, a cloud-based server, a notebook computer, a desktop computer, an all-in-one system, a tablet computing device, a mobile phone, an electronic book reader, or any other electronic device suitable for provisioning a computing resource to perform an interactive detection of system anomalies.
The computing device may include a processor and a computer-readable storage medium.

The system 100 receives input data related to a series of events and telemetry measurements. The system 100 detects presence of a system anomaly in the input data, the system anomaly indicative of a rare situation that is distant from a norm of a distribution based on the series of events and telemetry measurements. In some examples, the system 100 detects presence of an event pattern in the input data. The system 100 displays, via an interactive graphical user interface, an output data stream based on the presence of the system anomaly. The system 100 receives, from the interactive graphical user interface, feedback data associated with the output data stream, and provides the feedback data to the anomaly processor for operations analytics based on the feedback data.

In some examples, the data processor 102 receives input data related to a series of events and telemetry measurements. The series of events may be customer transactions, Web navigation logs (e.g. click stream), security logs, and/or DNA sequences. In some examples, each event may be associated with an event identifier identifying a given event in the series of events, and an event time identifier identifying a time when the given event occurred. In some examples, the series of events may be defined based on temporal constraints. For example, the series of events may be a collection of log messages for a specified period of time. In some examples, the series of events may be defined based on spatial constraints. For example, the series of events may be a collection of log messages for a specified geographic location. Combinations of spatial and temporal constraints may be used as well. Also, for example, the series of events may be based on additional system identifiers, such as, for example, usage or any other identifier of a system. Generally, such system identifiers may not be uniform. For example, system anomalies may appear over differing time intervals, and/or different usage values. As described herein, system anomalies from such non-uniform system identifiers may be appropriately modified and/or scaled to be uniform, additive, and so forth, to determine, for example, an anomaly intensity, an anomaly score, an anomaly fingerprint, and a fingerprint matching function.

The input data may be normalized in several ways. For example, a log analysis and/or a signal analysis may be performed on the input data. In some examples, data processor 102 may receive normalized input data. In some examples, data processor 102 may perform operations to normalize the input data. In some examples, the input data may be a stream of log messages. Log messages may be analyzed for latent structure and transformed into a concise set of structured log message types and parameters. In some examples, each source of log messages may be pre-tagged. The input data may be a corresponding stream of event types according to matching regular expressions. Log messages that do not match may define new regular expressions. In some examples, telemetry signals may also be analyzed for periodicities and relevant features. Generally, the event type is a type of log message or a type of performance metric.
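To make the log-normalization step above concrete, the following Python sketch classifies raw log lines into event types by matching them against a small catalog of regular-expression templates; lines that match no template are returned as unclassified so that a new template can be defined for them. The event-type names, patterns, and sample messages are illustrative assumptions, not taken from the disclosure.

```python
import re

# Hypothetical event-type catalog: each event type is defined by a regular-expression
# template; the names, patterns, and messages below are illustrative only.
EVENT_TYPES = {
    "login_failed": re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)"),
    "disk_full": re.compile(r"No space left on device: (?P<path>\S+)"),
}

def classify(log_line):
    """Map a raw log message to (event_type, parameters); unmatched lines are
    returned as 'unclassified' so that a new template can be defined for them."""
    for event_type, pattern in EVENT_TYPES.items():
        match = pattern.search(log_line)
        if match:
            return event_type, match.groupdict()
    return "unclassified", {}

print(classify("Failed password for admin from 10.0.0.7"))   # ('login_failed', {...})
print(classify("No space left on device: /var/log"))          # ('disk_full', {...})
print(classify("Service heartbeat OK"))                       # ('unclassified', {})
```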
The input data may be fed into analysis processors, such as, for example, an anomaly processor 104. In some examples, system 100 may include a pattern processor (not illustrated in FIG. 1). The anomaly processor 104 detects presence of a system anomaly A in the input data. In some examples, such detection may be automatic. In some examples, such detection may be query-based. In some examples, query-less detection of system anomalies may be combined with query-based techniques.

As described herein, a system anomaly A is an outlier in a statistical distribution of data elements of the input data. The term outlier, as used herein, may refer to a rare event, and/or an event that is distant from the norm of a distribution (e.g., an extreme, unexpected, and/or remarkable event). For example, the outlier may be identified as a data element that deviates from an expectation of a probability distribution by a threshold value. The distribution may be a probability distribution, such as, for example, uniform, quasi-uniform, normal, long-tailed, or heavy-tailed. Generally, the anomaly processor 104 may identify what may be "normal" (or non-extreme, expected, and/or unremarkable) in the distribution of clusters of events in the series of events, and may be able to select outliers that may be representative of rare situations that are distinctly different from the norm. Such situations are likely to be "interesting" system anomalies A. In some examples, system anomalies may be identified based on an expectation of a probability distribution. For example, a mean of a normal distribution may be the expectation, and a threshold deviation from this mean may be utilized to determine an outlier for this distribution. In some examples, a system anomaly may be based on the domain. For example, the distribution may be based on the domain, and an expectation or mean of the distribution may be indicative of an expected event. A deviation from this mean may be indicative of a system anomaly. Also, for example, a system anomaly in log messages related to security may be different from a system anomaly in log messages related to healthcare data. In some examples, a domain expert may provide feedback data that may enable automatic identification of system anomalies. For example, repeated selection of an event by a domain expert may be indicative of a system anomaly.

A domain may be an environment associated with the input data, and domain relevance may be semantic and/or contextual knowledge relevant to aspects of the domain. For example, the input data may be data related to customer transactions, and the domain may be a physical store where the customer transactions take place, and domain relevance may be items purchased at the physical store and the customer shopping behavior. As another example, the input data may be representative of Web navigation logs (e.g. click stream), and the domain may be the domain name servers that are visited via the navigation logs, and domain relevance may be analysis of Internet traffic. Also, for example, the input data may be related to operational or security logs, and the domain may be a secure office space for which the security logs are being maintained and/or managed, and domain relevance may be tracking security logs based on preferences such as location, time, frequency, error logs, warnings, and so forth. Generally, a domain expert may be an individual in possession of domain knowledge. For example, the domain may be a retail store, and the domain expert may be the store manager. Also, for example, the domain may be a hospital, and the domain expert may be a member of the hospital management staff. As another example, the domain may be a casino, and the domain expert may be the casino manager. Also, for example, the domain may be a secure office space, and the domain expert may be a member of the security staff.
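As a minimal illustration of the outlier notion described above (a data element that deviates from the expectation of a distribution by a threshold value), the sketch below flags per-time-slot event counts that lie more than a chosen number of standard deviations from the mean. The threshold, the counts, and the function name are hypothetical, not part of the disclosure.

```python
import statistics

def flag_outliers(values, threshold=2.0):
    """Flag elements that deviate from the expectation (mean) of the distribution
    by more than `threshold` standard deviations, i.e. are 'distant from the norm'."""
    mean = statistics.mean(values)
    spread = statistics.pstdev(values) or 1.0   # guard against zero spread
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * spread]

# Hypothetical per-time-slot event counts; slot 5 holds an extreme, unexpected value.
counts_per_slot = [12, 9, 11, 10, 13, 97, 12, 8, 11, 10]
print(flag_outliers(counts_per_slot))   # -> [5]
```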
In some examples, the anomaly processor 104 may operate on a series of classified structured log messages {e_j}. Each log message or event e_j may be associated with at least a time t_j = t(e_j) and an event type T_j = T(e_j). In some examples, the event type may be a signal, and each event may be associated with, in addition to time and event type, numerical values v_j = v(e_j), where the numerical values associated with events of an event type T_m, v(e_j | T(e_j) = T_m), may be attributed a signal type T_{n,m}. In some examples, the anomaly processor 104 may additionally operate on telemetry signals arriving in structured tabular form as a stream of discrete signal measurement events {e_l}, where each signal measurement may be associated with a time t_l = t(e_l), a signal type T_l = T(e_l), and a single numerical value v_l = v(e_l).

In some examples, system 100 may include an evaluator (not shown in the figures) to determine various quantitative measurements related to the input data. Generally, the evaluator may determine measurements at different levels. For example, a first level measurement for anomaly intensity amounts may be determined for each event-type. Also, for example, a second level measurement may be a collective measurement based on anomaly types (e.g., Flood of Events, Rare Events, etc.). For example, the evaluator may determine an anomaly intensity, an anomaly intensity score, an anomaly fingerprint, and an anomaly fingerprint matching score for anomaly types. As another example, a third level measurement may be an aggregated measurement of an anomaly score for a system anomaly in a given time slot. As described herein, a determination at each level may be based on a determination at a preceding level. For example, the anomaly intensity, the anomaly intensity score, the anomaly fingerprint, and the anomaly fingerprint matching score may be based on the anomaly intensity amounts. Likewise, the anomaly score may be based on the anomaly intensity, the anomaly intensity score, the anomaly fingerprint, the anomaly fingerprint matching score and the anomaly intensity amounts. As described herein, each measurement at each level may correspond to different distributions, different scales, different time-slots, and so forth. Accordingly, to meaningfully combine these measurements, they may need to be scaled and/or transformed to measurements that are comparable and additive, facilitating their respective combination, aggregation, comparison and/or matching. These and other aspects of detection of a system anomaly are described herein.

In some examples, the evaluator may determine anomaly intensity amounts Q_k(T_j, t_l) defined on discrete time-slots t_l: t ∈ [t_0 + iΔ, t_0 + (i+1)Δ], for each event type T_j and signal type T_l. In some examples, an anomaly intensity amount for events may be the event-count n(T_j, t_l) = ∥T_j(t_l)∥. In some examples, an anomaly intensity amount for events may be a function of the event count, such as the event-indicator I(T_j, t_l) = 1 if n(T_j, t_l) > 0, else 0, or the event-count log-scale 1 + log_2 n(T_j, t_l) if n(T_j, t_l) > 0, else 0. In some examples, an anomaly intensity amount for signals may be the maximal signal value per signal type per time slot, M(T_l, t_l) = max(v_l(t ∈ t_l)). In some examples, an anomaly intensity for signals may be the range of signal values per signal type per time slot, R(T_l, t_l) = max(v_l(t ∈ t_l)) − min(v_l(t ∈ t_l)).
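The per-slot anomaly intensity amounts just listed (event count, event indicator, event-count log-scale, and the maximum and range of signal values) could be computed along the lines of the following sketch. The slot width, event names, and sample data are assumptions for illustration only.

```python
import math
from collections import Counter, defaultdict

def event_amounts(events, slot_width):
    """Per-slot, per-event-type amounts for events: count n, indicator I,
    and log-scale 1 + log2(n)."""
    counts = Counter((int(t // slot_width), etype) for t, etype in events)
    return {
        key: {"count": n, "indicator": 1, "log_scale": 1 + math.log2(n)}
        for key, n in counts.items()   # absent (slot, type) pairs implicitly have amount 0
    }

def signal_amounts(measurements, slot_width):
    """Per-slot, per-signal-type amounts for signals: maximal value and value range."""
    values = defaultdict(list)
    for t, stype, v in measurements:
        values[(int(t // slot_width), stype)].append(v)
    return {key: {"max": max(vs), "range": max(vs) - min(vs)} for key, vs in values.items()}

# Hypothetical log events (time, event type) and telemetry measurements (time, signal type, value)
events = [(3, "login_failed"), (7, "login_failed"), (12, "disk_full")]
signals = [(1, "cpu", 0.4), (8, "cpu", 0.9), (12, "cpu", 0.5)]
print(event_amounts(events, slot_width=10))
print(signal_amounts(signals, slot_width=10))
```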
In some examples, the evaluator determines, for a time interval, the anomaly intensity for each anomaly type. As described herein, the anomaly intensities for each of the different anomaly types may be determined before they are transformed into anomaly scores via a "distinctive residual rarity" transformation. In some examples, the evaluator determines, for each time interval for an anomaly type, incomparable anomaly intensity amounts, wherein each incomparable anomaly intensity amount may be transformed with respect to the distribution of associated incomparable anomaly intensity amounts in reference time intervals, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity amounts. Accordingly, incomparable anomaly intensities associated with different event types may be transformed into comparable, additive and distinctive anomaly intensities to determine an anomaly score. For example, the anomaly processor 104 may comprise K components, each component k associated with a specific anomaly type for a specific group of events and/or signals G_k, and applying a transformation of one or more anomaly intensities into anomaly intensity amounts c_k(T_j, t_l). Each such transformation may be designed such that anomaly intensity amounts corresponding to different event types within the reference group G_k may share a common scale, so they may be combined into anomaly intensities representative of event-type or signal-type groups, and they may also be compared to determine the event types that are the main contributors to the anomaly intensity, for example, to aid in root-cause identification. As used herein, an anomaly intensity amount measures a contribution of a certain event type to an anomaly intensity.

In some examples, the anomaly processor 104 may receive a time-based stream of events or signals, and the evaluator may determine an anomaly intensity and an anomaly score for each given interval of time. In some examples, for a given time slot, the anomaly processor 104 may identify events that contribute to a majority of the anomaly intensity, and such identified events may be used as a fingerprint to identify similar system anomalies. In some examples, the evaluator may determine three anomaly-related quantities from the anomaly intensity amounts per time slot. In some examples, such determinations may be performed by each component k of the anomaly processor 104. The three anomaly-related quantities may be:

1) Anomaly intensity: an anomaly intensity may be determined based on x_k(t_l) = Φ_k(Σ_{T_j ∈ G_k} c_k(T_j, t_l)^α_k), where each anomaly intensity amount may be raised to a power α_k associated with component k. In some cases α_k = 1 (simple addition). In some cases α_k = 2 (sum-of-squares) to emphasize the contribution of larger components relative to smaller components. In some examples, Φ_k may be an optional non-linear monotonic mapping selected to equalize differences between anomaly intensity amounts. In some examples, Φ_k may be chosen as the α-root function, i.e. Φ_k(γ) = γ^(1/α_k).

2) Anomaly score: an anomaly score to assess an extremity of an anomaly intensity x_k(t_l) relative to the probability distribution of corresponding values in a reference group of time-slots t ∈ G_t(t_l): α_k(t_l) = A(x_k(t_l), f(x_k(t ∈ G_t(t_l)))). In some examples, a time-slot reference group may include a large contiguous time-span that may include relevant historical data relative to the particular time-slot (e.g. G_t(t_l) may include all time-slots up to a specified number of days before t_l). In some examples, a time-slot reference group may correspond to a periodic set of time-slots sharing the same time-of-day and day-of-week. Use of such periodic reference groups may be a realization of base-lining for seasonality of work-loads and work-patterns of the system under analysis. In some examples, a time-slot reference group may be based on a geographical area. In some examples, a time-slot reference group may be based on the domain. The anomaly scoring function A may be designed to be additive with unified scales for anomaly scores of different components, so that by adding up the component anomaly scores, α(t_l) = Σ_k α_k(t_l), the resulting total anomaly score may be meaningful. For example, anomaly intensities corresponding to different anomaly types may be transformed into normalized and comparable anomaly scores per anomaly type with respect to the time-slot reference group. As described herein, the transformation may be based on a distribution of related anomaly intensities, and may be based on base-lining (e.g., periodic time-slot reference groups) and history spans (e.g., system anomalies with respect to last day, last week, etc.). In some examples, as described herein, the evaluator determines, for the time interval, anomaly intensities and the anomaly score, and each anomaly intensity may be transformed, with respect to the distribution of anomaly intensities of the same anomaly type in reference time-slots, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity scores, that may in turn be combined to determine the anomaly score.
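A rough sketch of the first two quantities follows: the anomaly intensity as the α-root of the sum of powered amounts, and an anomaly score obtained by comparing that intensity against intensities from a reference group of time slots. Since the text does not fully specify the scoring function A, the tail-probability scoring used here is only one plausible stand-in, and all names and numbers are hypothetical.

```python
import math

def anomaly_intensity(amounts, alpha=2.0):
    """x = Phi(sum of c^alpha) over the event types of one component, with Phi taken
    here as the alpha-root, one of the options mentioned in the text."""
    return sum(c ** alpha for c in amounts.values()) ** (1.0 / alpha)

def anomaly_score(intensity, reference_intensities):
    """Extremity of an intensity relative to the distribution of intensities seen in a
    reference group of time slots, scored as a -log2 smoothed empirical tail probability.
    (The scoring function A is not fully specified in the text; this is one plausible choice.)"""
    at_least_as_large = sum(1 for r in reference_intensities if r >= intensity)
    tail_probability = (at_least_as_large + 1) / (len(reference_intensities) + 1)
    return -math.log2(tail_probability)

# Hypothetical per-event-type amounts for the current slot, plus historical intensities
current_amounts = {"login_failed": 4.0, "disk_full": 1.0, "timeout": 0.0}
reference = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]
x = anomaly_intensity(current_amounts)
print(round(x, 2), round(anomaly_score(x, reference), 2))   # ~4.12, ~3.17
```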
G(t_i) may include all time-slots up to a specified number of days before t_i). In some examples, a time-slot reference group may correspond to a periodic set of time-slots sharing the same time-of-day and day-of-week. Use of such periodic reference groups may be a realization of baselining for seasonality of workloads and work-patterns of the system under analysis. In some examples, a time-slot reference group may be based on a geographical area. In some examples, a time-slot reference group may be based on the domain. The anomaly scoring function A may be designed to be additive with unified scales for anomaly scores of different components, so that by adding up the component anomaly scores, A(t_i) = Σ_k A_k(t_i), the resulting total anomaly score may be meaningful. For example, anomaly intensities corresponding to different anomaly types may be transformed into normalized and comparable anomaly scores per anomaly type with respect to the time-slot reference group. As described herein, the transformation may be based on a distribution of related anomaly intensities, and may be based on baselining (e.g., periodic time-slot reference groups) and history spans (e.g., system anomalies with respect to last day, last week, etc.). In some examples, as described herein, the evaluator determines, for the time interval, anomaly intensities and the anomaly score, and each anomaly intensity may be transformed, with respect to the distribution of anomaly intensities of the same anomaly type in reference time-slots, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity scores, that may in turn be combined to determine the anomaly score.

3) Anomaly fingerprint per time-slot, specifying the identity and relative contributions of event types to the anomaly intensity at time-slot t_i, may be determined as:

F_{t_i} = {T_j ∈ J_i, ρ_j(t_i)}   (Eqn. 2)

where the relative contribution may be defined as ρ_j(t_i) = c_k(T_j, t_i) / Σ_{T_j ∈ G_k} c_k(T_j, t_i), and the top contributing event types J_i may be selected such that the sum of their relative contributions, taking larger contributions first, is the minimum that exceeds a threshold close to 100%, for example 95%. Generally, each event type in each time-slot may be associated with an anomaly intensity amount for each anomaly type. The anomaly fingerprint may then be based on the anomaly intensity amounts for different event types. In some examples, this may be achieved via a vector of anomaly intensity amounts in time-slots different from the selected time-slot. As described herein, anomaly intensities may be determined by combining comparable anomaly intensity amounts for each event type, and such comparable anomaly intensity amounts for different event types may be combined to determine an anomaly intensity and an anomaly fingerprint.

4) Fingerprint matching functions, determined for a sub-set of time slots t* that may be considered to be interesting (either by user selection or by their relatively higher anomaly scores). For each such time slot t*, the fingerprint matching function may be determined as:

A_{k|t*}(t_i) = Φ_k(Σ_{T_j ∈ G_k} c_k^α_k(T_j, t_i) · ρ_j(t*))   (Eqn. 3)

such that for each time-slot t_i the fingerprint matching score may be high only if the anomaly intensity amounts corresponding to top contributing event types in the fingerprint are high.
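As a rough illustration of Eqns. 1-3, the following sketch, not part of the source, combines per-event-type amounts into an intensity, extracts a fingerprint of top contributors, and scores another time slot against that fingerprint. The function names and the 95% threshold default are assumptions.

```python
# Illustrative sketch (not from the source) of Eqns. 1-3: anomaly intensity, anomaly
# fingerprint, and fingerprint matching score for one component k.
def anomaly_intensity(c, alpha=1.0, phi=lambda g: g):
    """Eqn. 1: x_k = Phi_k(sum_j c_k(T_j)^alpha). c maps event type -> amount."""
    return phi(sum(v ** alpha for v in c.values()))

def fingerprint(c, threshold=0.95):
    """Eqn. 2: top contributing event types with their relative contributions rho_j."""
    total = sum(c.values()) or 1.0
    rho = {T: v / total for T, v in c.items()}
    fp, acc = {}, 0.0
    for T, r in sorted(rho.items(), key=lambda kv: -kv[1]):
        fp[T] = r
        acc += r
        if acc >= threshold:      # minimal set whose contributions exceed the threshold
            break
    return fp

def matching_score(c, fp, alpha=1.0, phi=lambda g: g):
    """Eqn. 3: high only when the fingerprint's top contributors are high in c."""
    return phi(sum((c.get(T, 0.0) ** alpha) * rho for T, rho in fp.items()))
```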
In some examples, as described herein, the anomaly type may include a Flood of Events, wherein the anomaly intensity amount is an event count; a Variety of Events, wherein the anomaly intensity amount is an event occurrence indicator; a Flood of Rare Events, wherein the anomaly intensity amount is a product of an event count extremity factor and an event-type rarity factor; and a Flood of Extreme Signals, wherein the anomaly intensity amount is a maximal signal value per time interval transformed based on a distinctive residual rarity extremity score.

In some examples, the anomaly type may be a Partial Pattern. The Partial Pattern anomaly type may be characterized by multiple events appearing repeatedly in the same time slot. For example, a set of 30 events may be identified in the selected time slot, where each event corresponds to a service shutdown message and/or alert. Generally, the Partial Pattern anomaly type may be detected based on interactions with a domain expert via the interactive graphical user interface 108.

In some examples, the anomaly processor 104 may include a component evaluating the Flood of Events ("FoE") anomaly type, where the anomaly intensity amount may be the occurrence-count of event type T_j in time slot t_i, c_FoE(T_j, t_i) = n(T_j, t_i), and the power-law may be α = 1 (regular sum), so that the anomaly intensity may be x_FoE(t_i) = n(t_i), the total event count in the time slot. The anomaly-fingerprint components are the relative frequencies of the different events, ρ_j(t_i) = n(T_j, t_i)/n(t_i), in each time-slot.

In some examples, the anomaly processor 104 may include a component evaluating the Variety of Events ("VoE") anomaly type, where the anomaly intensity amount may be the event-indicator c_VoE(T_j, t_i) = I(T_j, t_i), equal to 1 for each event type T_j that appeared at least once in time slot t_i, so that the sum of anomaly intensity amounts may be just the number of distinct event types that occurred in time slot t_i, N(T ∈ t_i). The anomaly intensity may be x_VoE(t_i) = N(T ∈ t_i), and the anomaly-fingerprint components are equal to 1/N(T ∈ t_i) for all event types that appeared in time slot t_i, and 0 otherwise.

In some examples, the anomaly processor 104 may include a component evaluating a Flood of Rare Events ("RE") anomaly type. The RE anomaly intensity amount for each event type T_j that appears in a certain time slot t_i may be designed to be large if T_j is rare relative to other events in the time-slot reference group t ∈ G(t_i). In some examples, an event-type rarity factor may be computed as the negative log of the occurrence-probability of event type T_j in the reference group of time-slots G(t_i) and in the reference group of event types G(T_j): r(T_j, t_i) = −log(P(T_j, t_i)), where P(T_j, t_i) = ∥T_j∥_{t∈G(t_i)} / ∥G(T_j)∥_{t∈G(t_i)}. Furthermore, the RE anomaly intensity amount may be designed to be large if the count of an event T_j at time-slot t_i is high relative to the counts of that event type in other time-slots in the reference group t ∈ G(t_i), so that, e.g., a rare event that occurs several times in a single time-slot may contribute a larger RE anomaly intensity amount than the same event occurring once in a time slot. In some examples, an event-count extremity factor may be computed as the occurrence-probability of event type T_j in time-slot t_i relative to the reference group of time-slots G(t_i): h(T_j, t_i) = n(T_j, t_i) / Σ_{t∈G(t_i)} n(T_j, t).
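For concreteness, a small sketch, not from the source, of how the FoE and VoE amounts might be specialized from per-slot event counts; the event-type names in the example are hypothetical.

```python
# Illustrative sketch (not from the source): Flood of Events (FoE) and Variety of
# Events (VoE) amounts for a single time slot, from a dict of per-type counts.
def foe_amounts(counts_in_slot):
    """c_FoE(T_j, t_i) = n(T_j, t_i); with alpha = 1 the intensity is the total count."""
    return dict(counts_in_slot)

def voe_amounts(counts_in_slot):
    """c_VoE(T_j, t_i) = 1 for each event type seen in the slot; the intensity is then
    the number of distinct event types in the slot."""
    return {T: 1 for T, n in counts_in_slot.items() if n > 0}

# Example with hypothetical counts for one time slot:
counts = {'login_failed': 40, 'disk_full': 1, 'restart': 3}
x_foe = sum(foe_amounts(counts).values())   # 44 events in total (alpha = 1, Phi = identity)
x_voe = sum(voe_amounts(counts).values())   # 3 distinct event types
```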
Event-count extremity factors tend to be relatively high for rare events in the time-slots they appear in, and relatively low for frequent events in any time slot, which keeps the RE anomaly intensity amounts corresponding to frequent events low in all time-slots. In some cases, the RE anomaly intensity amount may be expressed as a product of the event-type rarity factor and the event-count extremity factor, c_RE(T_j, t_i) = r(T_j, t_i) · h(T_j, t_i). Since the RE anomaly intensity amount may be already given in the log-domain, and the event-count extremity factor may be typically much smaller than 1, the sum of RE anomaly intensity amounts over all events may tend to be small. In some cases, an exponential non-linear mapping Φ_RE(γ) = 2^γ may be applied to the sum, to emphasize relatively large scores and to transform the anomaly intensities to ranges similar to those of the other anomaly components.

In some examples, G(.) or c_RE(., .) may be normalized compared to a baseline of a system, such as, for example, a value based on historical data and/or feedback data. Accordingly, G(.) = G(.) − hist(G), where hist(G) denotes the value based on historical data and/or feedback data. For example, the feedback data may be indicative of cropping of system anomalies below a threshold to zero. Also, for example, feedback data may be indicative of bucketing all system anomalies above another threshold to amplified values.

In some examples, the anomaly processor 104 may include components to evaluate signal-related anomaly types. Unlike event-counts, which have a common scale for all types of events, different signal types may have incomparable scales, so their anomaly intensities, like the range or maximum within each time-slot, may not be used directly as anomaly intensity amounts, as there may be no meaning in adding quantities not defined on the same scale. Instead, a generic transformation may be applied to transform an anomaly intensity into a value-extremity score, such that value-extremity scores corresponding to signals with significantly different types of distribution and scale may be comparable and additive, so they may be used as anomaly intensity amounts to compute a meaningful anomaly intensity and an anomaly fingerprint. Furthermore, such additive value-extremity scores may be applied to the anomaly intensity to generate anomaly scores that are comparable and additive across anomaly types. A value-extremity score may be expected to be high only for extreme values (outliers), which may be residually rare (a very small percentage of the values are equal to or above an extreme value) and well separated from the non-extreme majority of values (the inliers). One value-extremity score, in the case of normally distributed values, may be the "Z-score", obtained by subtracting the distribution mean from the value and dividing the result by the distribution standard deviation σ. However, each of the anomaly intensities may follow a different type of distribution, including quasi-uniform, normal, long-tailed, or heavy-tailed. The Z-score may not work as well for non-normal distributions.

FIG. 2A illustrates an example 200a of hypothetical anomaly intensities distributed uniformly 202a, normally 204a, and with a long-tailed Weibull distribution 206a. Dots 208a whose Z-score may be above 3 (at least 3σ above the mean) are highlighted. This may correctly capture extreme values in the uniform and normal distributions.
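A possible reading of the RE component is sketched below, not from the source: the rarity factor r and the extremity factor h are estimated from per-slot counts, multiplied into c_RE, summed, and passed through the exponential mapping. How the reference group is formed, the pseudo-count used when an event type has no reference occurrences, and the inclusion of the current slot in the extremity denominator are assumptions.

```python
# Illustrative sketch (not from the source): Flood of Rare Events (RE) amounts as the
# product of an event-type rarity factor r and an event-count extremity factor h,
# with the exponential mapping 2**gamma applied to their sum.
import math

def re_intensity(counts_by_slot, slot):
    """counts_by_slot: {slot_id: {event_type: count}}; slot: the slot being scored.
    The remaining slots act as the reference group G(t_i) (an assumption)."""
    ref = [s for s in counts_by_slot if s != slot]
    total_ref = sum(sum(counts_by_slot[s].values()) for s in ref) or 1
    amounts = {}
    for T, n in counts_by_slot[slot].items():
        n_ref = sum(counts_by_slot[s].get(T, 0) for s in ref)
        p = (n_ref or 0.5) / total_ref   # occurrence probability in the reference group (0.5 pseudo-count assumed)
        r = -math.log2(p)                # event-type rarity factor
        h = n / (n + n_ref)              # event-count extremity factor (current slot included, an assumption)
        amounts[T] = r * h               # c_RE(T_j, t_i)
    return 2 ** sum(amounts.values()), amounts   # Phi_RE(gamma) = 2**gamma
```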
For example, three points with a high extreme-value score are illustrated for the case of the normal distribution, and no points with a high extreme-value score are found in the case of the uniform distribution. However, a relatively large number of values get a high Z-score in the case of the heavy-tailed distribution, which defies the requirement for residual rarity of extreme values.

Another value-extremity score, which may work well for long-tailed and heavy-tailed distributions, may be the residual rarity of a value, measured as the negative log of the probability of other values being equal or higher. This probability may be associated with the complementary cumulative distribution function (CCDF) known in the statistical literature:

R(Q(t_i)) = −log2(P(Q(t ∈ G(t_i)) ≥ Q(t_i))) = −log2(F̄_{t∈G(t_i)}(Q(t_i)))   (Eqn. 4)

The CCDF, like any function measuring probabilities, has an important property: when applied to joint value distributions (originating from multiple signals), the distribution function may be expressed as a product of the individual value distributions, provided the signals are statistically independent. Accordingly, the log of the joint probability for independent signals may be expressed by the sum of the logs of the individual signal distributions. In other words, the residual-rarity score of a multiple-signal set corresponds to the sum of individual residual-rarity scores for independent signals. Accordingly, CCDF-based value-extremity scores (referred to herein as residual rarity extremity scores) are comparable and additive, as required.

In some examples, the residual rarity extremity scores may be equivalent to 'top-p%' detection and may have no regard to value-separation criteria. Accordingly, they may attribute high scores to top values even if they are not well separated from lower values, as in uniform distributions. To avoid false detections of outliers for uniform distributions, an outlier-detection threshold may be designed to match the detection rate of Z-scores for normal distributions. However, such a technique may still leave several false outlier detections in uniform distributions, and too few true-outlier detections for long-tail distributions.

FIG. 2B illustrates an example of resultant unsatisfactory outliers obtained after a residual rarity transformation of the anomaly intensities illustrated in FIG. 2A. For example, resultant unsatisfactory outliers obtained via the residual rarity extremity score applied to the anomaly intensities illustrated in FIG. 2A are illustrated. Hypothetical anomaly intensities are distributed uniformly 202b, normally 204b, and with a long-tailed Weibull distribution 206b. As illustrated, although the outliers are consistent with the outliers for the normal distribution 204b, the outliers for the long-tailed Weibull distribution 206b are minimal, and there are two false outliers detected for the uniform distribution 202b in FIG. 2B.

To obtain a value-extremity score (an outlier criterion) that works well for a wide range of value distributions, that may be comparable and additive, and that may address both the residual rarity and the separation criteria required of outliers, the residual rarity extremity scores may be modified to determine a "scaled CCDF", referred to herein as a distinctive residual rarity extremity score.
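A minimal sketch, not from the source, of the residual rarity score of Eqn. 4, using the empirical CCDF over a reference sample; the 1/n floor that avoids log(0) is an assumption.

```python
# Illustrative sketch (not from the source) of Eqn. 4: the negative log2 of the
# empirical complementary CDF (fraction of reference values >= the value being scored).
import math

def residual_rarity(value, reference):
    """R(Q(t_i)) = -log2 P(Q(t in G(t_i)) >= Q(t_i)), floored at 1/n to avoid log(0)."""
    n = len(reference)
    ge = sum(1 for q in reference if q >= value)
    p = max(ge, 1) / n
    return -math.log2(p)

# Example: a value in the bulk of the reference scores low; an extreme one scores high.
ref = [1, 2, 2, 3, 3, 3, 4, 5, 6, 40]
print(residual_rarity(3, ref))    # ~0.5: common value
print(residual_rarity(40, ref))   # ~3.3: extreme value
```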
Assuming that for operations data all anomaly intensities are non-negative (as is the case for event-counts and telemetry signal values), and that separation criteria should be relative to the value-scale of each signal, the distinctive residual rarity extremity score may be defined by a minimal ratio S between outlier and inlier values, where S may be larger than 1. The extremity score with separation factor S may be:

E_S(Q(t_i)) = −log2 P(Q(t ∈ G(t_i)) ≥ Q(t_i)/S) = −log2 F̄_{t∈G(t_i)}(Q(t_i)/S)   (Eqn. 5)

In some examples, a single value of the separation factor S may be used in computing value-extremity scores for all anomaly intensities, since a separation criterion by ratio may be scale-independent and may apply similarly to signals or intensities at all scales.

FIG. 2C illustrates an example 200c of resultant outliers based on a modified distinctive residual rarity anomaly transform. For example, resultant outliers based on the distinctive residual rarity extremity score with separation factor S = 1.2 (i.e., at least 20% separation) are illustrated. Hypothetical anomaly intensities may be distributed uniformly 202c, normally 204c, and with a long-tailed Weibull distribution 206c. As expected, there are no outliers in the uniform distribution 202c, and, for the same three outliers in the normal distribution 204c, a larger but limited set of outliers 208c may be realized for the Weibull samples 206c.

In some examples, anomaly intensities may be transformed into anomaly scores that are comparable, additive, and distinctive. The term "distinctive" as used herein refers to a requirement of a threshold separation between high values and lower values for the high values to be considered extreme.
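The same empirical construction extends to the distinctive residual rarity extremity score of Eqn. 5 by evaluating the CCDF at Q/S rather than at Q. A minimal sketch, not from the source, follows; the example data are hypothetical.

```python
# Illustrative sketch (not from the source) of Eqn. 5: a value only scores high if it
# also exceeds the bulk of the reference values by at least the separation ratio S.
import math

def extremity_score(value, reference, S=1.2):
    """E_S(Q(t_i)) = -log2 P(Q(t in G(t_i)) >= Q(t_i)/S), with S > 1."""
    n = len(reference)
    ge = sum(1 for q in reference if q >= value / S)
    p = max(ge, 1) / n
    return -math.log2(p)

# The top of a uniform ramp is not well separated from the rest, so its score stays moderate;
# a genuinely separated outlier scores much higher.
print(extremity_score(100, list(range(1, 101)), S=1.2))            # ~2.6: top of a uniform ramp
print(extremity_score(1000, list(range(1, 100)) + [1000], S=1.2))  # ~6.6: well-separated outlier
```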
In some examples, the evaluator determines, for the time interval, anomaly intensities and the anomaly score, where incomparable anomaly intensities are transformed, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive signals to determine the anomaly score. For example, the evaluator may include a component evaluating the Extreme Signal ("ES") anomaly type, where the anomaly intensity amount for signal type T_l in time slot t_i may be a distinctive residual rarity extremity score:

c_ES(T_l, t_i) = E_S(M(T_l, t_i)) = −log2 P(M(T_l, t ∈ G(t_i)) ≥ M(T_l, t_i)/S)   (Eqn. 6)

corresponding to the maximal signal value per signal type T_l per time slot, M(T_l, t_i) = max(v_l(t ∈ t_i)). In some cases the separation factor S may be 1.2. In some examples the anomaly intensity may be computed according to Eqn. 1, with power law α = 2 (sum-of-squares) and a mapping function Φ(γ) = √γ, such that signals with high value-extremity scores may be further emphasized relative to signals with lower value-extremity scores.
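A small sketch, not from the source, of how the ES component might be assembled: each signal's per-slot maximum is scored against that signal's own history per Eqn. 6, and the scores are combined per Eqn. 1 with α = 2 and Φ = √. The dictionary layout is an assumption.

```python
# Illustrative sketch (not from the source): Extreme Signal (ES) amounts per Eqn. 6,
# combined per Eqn. 1 with alpha=2 (sum-of-squares) and Phi(gamma)=sqrt(gamma).
import math

def es_intensity(max_now, max_history, S=1.2):
    """max_now: {signal_type: M(T_l, t_i)}; max_history: {signal_type: [M in reference slots]}."""
    amounts = {}
    for sig, m in max_now.items():
        ref = max_history.get(sig, [])
        ge = sum(1 for q in ref if q >= m / S)
        p = max(ge, 1) / max(len(ref), 1)
        amounts[sig] = -math.log2(p)                      # c_ES(T_l, t_i)
    x = math.sqrt(sum(a ** 2 for a in amounts.values()))  # Eqn. 1 with alpha=2, Phi=sqrt
    return x, amounts
```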
In some cases, the anomaly processor 104 may attribute an anomaly score to each anomaly component k in each time-slot t_i by assessing the extremity of the anomaly intensity x_k(t_i) relative to the probability distribution of corresponding values in the reference group of time-slots t ∈ G(t_i), using the distinctive residual rarity extremity score:

A_k(t_i) = E_S(x_k(t_i)) = −log2 P(x_k(t ∈ G(t_i)) ≥ x_k(t_i)/S)   (Eqn. 7)

In some cases the separation factor used for all anomaly components may be S = 2. With this extremity measure, anomaly scores for different anomaly components associated with different anomaly intensities have a common scale and may be compared and combined by addition, while at the same time maintaining the separation criterion required for them to be considered extreme in the first place. Accordingly, anomaly scores of different anomaly components may be added into a total system anomaly score as follows:

A(t_i) = Σ_{k=1..K} ω_k · A_k(t_i)   (Eqn. 8)

where the weights ω_k may be adjusted to reflect the current relative importance of anomaly component k, determined heuristically based on domain expert interaction data received via an interaction processor 106.

Whereas event anomalies are generally related to insight into operational data, event patterns indicate underlying semantic processes that may serve as potential sources of significant semantic anomalies. As disclosed herein, an interaction processor 106 may be provided that allows operational analysis to be formulated as concatenations of pattern and anomaly detectors.

In some examples, system 100 may include a pattern processor to detect the presence of an event pattern in the input data. Although the pattern processor may be described herein as a separate component, in some examples, the functions of the pattern processor may be performed by the anomaly processor 104. Generally, the pattern processor identifies non-coincidental situations, usually events occurring simultaneously. Patterns may be characterized by their unlikely random reappearance. For example, a single co-occurrence may be somewhat likely, but 90 co-occurrences may be much less likely to occur randomly.
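Putting Eqns. 7 and 8 together, a minimal sketch, not from the source: each component's intensity is scored against its own history with the S = 2 separation factor, and the component scores are combined by a weighted sum. The component names and the default weight of 1.0 are assumptions.

```python
# Illustrative sketch (not from the source) of Eqns. 7-8: per-component anomaly scores
# via the distinctive residual rarity extremity score, combined by a weighted sum.
import math

def component_score(x_now, x_history, S=2.0):
    """Eqn. 7: A_k(t_i) = -log2 P(x_k(t in G(t_i)) >= x_k(t_i)/S)."""
    ge = sum(1 for x in x_history if x >= x_now / S)
    p = max(ge, 1) / max(len(x_history), 1)
    return -math.log2(p)

def total_score(intensities_now, histories, weights):
    """Eqn. 8: A(t_i) = sum_k w_k * A_k(t_i); dicts keyed by component name, e.g. 'FoE'."""
    return sum(weights.get(k, 1.0) * component_score(x, histories[k])
               for k, x in intensities_now.items())
```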
In some examples, interaction processor 106 may be communicatively linked to the anomaly processor 104 and to an interactive graphical user interface 108. The interaction processor 106 displays, via the interactive graphical user interface 108, an output data stream based on the presence of the system anomaly. In some examples, interaction processor 106 may generate an output data stream based on the presence of the system anomaly and the event pattern. In some examples, the interaction processor 106 receives feedback data associated with the output data stream from the interactive graphical user interface 108, and provides the feedback data to the anomaly processor 104 and/or the pattern processor for operations analytics based on the feedback data. As described herein, feedback data may include feedback related to domain relevance, received via the interactive graphical user interface 108 and processed by the interaction processor 106. The feedback data may be indicative of selection or non-selection of a portion of the interactive graphical user interface 108. As used herein, selection may include copying a portion of text and/or images displayed by the interactive graphical user interface 108, selection or non-selection of a selectable menu, hovering over or clicking on a displayed text and/or image, or touching a touch-sensitive portion of the interactive graphical user interface 108.

The interaction processor 106 processes the feedback data and supports interaction between the interactive graphical user interface 108 and a domain expert. Operations analytics, as used herein, may include any analytics associated with system performance. For example, operations analytics may include analysis of interesting patterns and incorporation of domain knowledge in the form of constraints into the detection of system anomalies. For example, the domain may be a retail store, and the domain knowledge may include knowledge about traffic patterns in the store, customer purchases, product placement, products sold, available inventory, clientele, store hours, and so forth. In some examples, the interaction processor 106 provides, via the interactive graphical user interface 108, an interactive visual representation of the system anomalies and event patterns. For example, to enable the domain expert to better understand and discover patterns, interaction processor 106 may provide a context-augmented interface for visually guided exploration.

In some examples, operations analytics may include tagging of system anomalies and event patterns. In some examples, operations analytics may include identifying anomaly types and initiating system responses based on the identified anomaly types. In some examples, operations analytics may include adding and/or removing an anomaly type from the output data stream. In some examples, operations analytics may include an actionable response such as generating a system alert. For example, the anomaly processor 104 may identify an issue and trigger a system alert to act on the issue promptly. In some examples, such an alert may be based on a fingerprint of a past system anomaly that was identified, tagged, and associated with a preferred mitigation or remediation action. For example, a past anomaly may be associated with a service shutdown based on a Partial Pattern anomaly type, and the anomaly processor 104 may trigger a system alert for a service shutdown.
Also, for example, the Partial Pattern anomaly type may be detected based on interactions with a domain expert via the interactive graphical user interface 108, and a forced shutdown message may be generated by the anomaly processor 104.

In some examples, the interaction processor 106 may display a detected system anomaly, and may identify selection of the system anomaly by a domain expert. In some examples, the anomaly processor 104 may identify an anomaly type associated with the system anomaly, and the interaction processor 106 may display the anomaly type via the interactive graphical user interface 108. In some examples, the interaction processor 106 may identify an interaction based on the system anomaly. For example, the domain expert may add or delete the system anomaly. Also, for example, the domain expert may select a word on a displayed word cloud to further investigate additional system anomalies similar to the selected system anomaly. In some examples, the anomaly processor 104 may determine an anomaly fingerprint for the selected pattern, determine a fingerprint matching function associated with the selected system anomaly, and detect additional system anomalies based on the fingerprint matching function.

As illustrated in FIG. 1, the interactive graphical user interface 108 may be communicatively linked to the anomaly processor 104 via the interaction processor 106. Accordingly, the interactive graphical user interface 108 supports the anomaly processor 104 and/or a pattern processor. In some examples, the interactive graphical user interface 108 displays the output data stream, including a first selectable option associated with the system anomaly, and a second selectable option associated with the event pattern. Accordingly, the interactive graphical user interface 108 displays system anomalies and event patterns, and provides suitable interfaces, such as the first selectable option associated with the system anomaly and the second selectable option associated with the event pattern, for the domain expert to identify and tag significant system anomalies. The interactive graphical user interface 108 receives such feedback data associated with the first and second selectable options and provides the feedback data to the interaction processor 106. In some examples, the interactive graphical user interface 108 provides the feedback data back into the anomaly processor 104 and/or the pattern processor via the interaction processor 106.

In some examples, the interactive graphical user interface 108 further provides, in response to a selection of the first selectable option, a pop-up card with information related to the system anomaly. Generally, the feedback data need not be based on the same type of received system anomalies; i.e., at each iteration, a certain anomaly type (RE, FoE, etc.) may be added to and/or removed from the set of the events. As described herein, a weighting may be utilized (e.g., weight 0 for removal of a certain anomaly type).

FIG. 3 is an example display of an output data stream including a word cloud. A word cloud is a visual representation of a plurality of words, highlighting words based on their relevance in a given context. For example, a word cloud may comprise words that appear in log messages associated with the selected system anomaly. Words in the word cloud may be associated with term scores that may be determined based on, for example, the relevance and/or position of a word in the log messages.
In some examples, the word cloud may be interactive, and a system anomaly may be identified based on an interaction with the word cloud. For example, a term in the word cloud may be selected, and the interaction processor 106 may identify system anomalies that are associated with log messages that include the selected term.

In some examples, the example display of the output stream may be a snapshot of an application launcher interface 300 provided via the interactive graphical user interface 108. The output data illustrated relates to input data of log messages received during an example time period from May 5 to July 31, represented by the x-axis of the graphical representation. System anomalies are illustrated, along with a word cloud 306 and event patterns 308. In some examples, a composite anomaly score may be displayed, where the composite anomaly score may be determined as a sum of several different anomaly scores. The first selectable option associated with the system anomaly may be, for example, a clickable node, such as node 302. Every highlighted node on the graph, such as, for example, node 302, may be clickable. Selection of the first selectable option, such as a node, may launch an analysis of the associated system anomaly. For example, clicking node 302 may launch an analysis of the system anomaly that occurred at or about July 1. As described herein, a selection may include a click, or may include hovering over node 302 in a touch-sensitive interactive display.

In some examples, the feedback data may include an indication of a selection of a system anomaly, and the graphical user interface 108 further provides, based on the feedback data, a pop-up card with information related to the selected system anomaly. For example, referring again to FIG. 3, in response to a selection of the first selectable option, a pop-up card with information related to the system anomaly may be displayed. For example, pop-up 304 may be displayed, with information related to the system anomaly. Pop-up 304 may include, for example, a date and time associated with the system anomaly, and a type of anomaly score for the system anomaly. For example, as indicated in FIG. 3, the selected system anomaly occurred on "2013-07-01" at "13:00:00". Also, for example, the anomaly type may be indicated as "Variety of Events".

In some examples, the anomaly processor 104 further generates a word cloud to be displayed via the interactive graphical user interface 108, the word cloud highlighting words that appear in log messages associated with the selected system anomaly. Highlighting may be achieved via a distinctive font, font size, color, and so forth. In some examples, term scores may be determined for key terms, the term scores based on a modified inverse domain frequency. In some examples, the modified inverse domain frequency may be based on an information gain or a Kullback-Leibler divergence.

For example, referring again to FIG. 3, word cloud 306 highlights words that appear in anomalous messages more than in the rest of the messages. In some examples, the relevance of a word may be illustrated by its relative font size in the word cloud 306. For example, "queuedtoc", "version", and "culture" are displayed in relatively larger font compared to the font for the other words. Accordingly, it may be readily perceived that the words "queuedtoc", "version", and "culture" appear in the messages related to the system anomaly more than in other messages. Event patterns 308 are displayed.
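One simple way to produce such term scores is sketched below, not from the source: each term is weighted by a Kullback-Leibler-style contribution p·log2(p/q) of its frequency in the anomaly-related messages against a smoothed background frequency. This particular weighting and the smoothing are assumptions, not the patent's exact formula.

```python
# Illustrative sketch (not from the source): scoring word-cloud terms by how much more
# often they occur in messages tied to the selected anomaly than in the full log.
import math
from collections import Counter

def term_scores(anomaly_msgs, all_msgs):
    fg = Counter(w for m in anomaly_msgs for w in m.lower().split())
    bg = Counter(w for m in all_msgs for w in m.lower().split())
    fg_total, bg_total = sum(fg.values()), sum(bg.values())
    scores = {}
    for w, c in fg.items():
        p = c / fg_total                               # frequency in anomalous messages
        q = (bg.get(w, 0) + 1) / (bg_total + len(bg))  # smoothed background frequency
        scores[w] = p * math.log2(p / q)               # high when over-represented in the anomaly
    return scores

cloud = term_scores(["queuedtoc version culture error"],
                    ["queuedtoc version culture error", "service started ok", "heartbeat ok"])
print(sorted(cloud.items(), key=lambda kv: -kv[1])[:3])
```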
In some examples, event patterns 308 may represent groups of events (or event groups) that appear almost exclusively together in the input data.

In some examples, the anomaly processor 104 may detect system anomalies based on at least one of the feedback data and a previously processed event pattern. In some examples, the evaluator may determine, for a time interval, the anomaly fingerprint based on a set of relative contributions of event types to the anomaly intensity, where a fingerprint matching score for the anomaly fingerprint may be computed in a second time interval to determine the presence or absence of similar system anomalies in the second time interval, and where the fingerprint matching score may be computed based on a correlation between the anomaly fingerprint and the anomaly intensity amounts in the second time interval.

For example, the anomaly processor 104 may identify an issue and trigger an alert to act on it promptly based on the fingerprint of a past system anomaly that was identified, tagged, and associated with a preferred mitigation or remediation action. The identification may be done by detecting other events that match the tagged fingerprint sufficiently well. Tagged system anomalies may increase the importance of their respective anomaly score, and deleted system anomalies may reduce the respective anomaly score.

Referring to FIG. 1, in some examples, the interactive graphical user interface 108 further provides, in response to a selection of the first selectable option, an analysis interface to analyze the system anomaly. In some examples, the analysis interface may be an interactive anomaly analysis interface. In some examples, the analysis interface may be generated by the anomaly processor 104. For example, in response to a click on a system anomaly, the interaction processor 106 may prompt the interactive graphical user interface 108 to open an analysis interface. In some examples, interaction processor 106 may receive feedback data indicative of a domain expert's interaction with the interactive graphical user interface 108 and provide the feedback data to the anomaly processor 104 to generate and/or modify an analysis interface. For example, the domain expert may examine the system anomaly and perhaps tag it, indicating its underlying cause. Tagging the system anomaly may catalogue an anomaly fingerprint as a known event.

FIG. 4 is an example of an analysis interface 402 for system anomalies. A snapshot of the analysis interface 402 triggered by selection of the system anomaly under the cursor in FIG. 3 is illustrated. A threshold anomaly score may be utilized to filter system anomalies of interest. For example, a threshold of 75% may be utilized. In some examples, an actionable menu 408 may be provided to receive input from a domain expert. The actionable menu 408 may provide data entry fields and/or drop-down menus, to "Save this anomaly", "choose a severity", "Delete this Pattern", "Do Nothing", and so forth. For example, an entry of "75%" may be entered as a threshold value. System anomaly list 406 may be a set of events constituting the fingerprint, as in Eqn. 2, corresponding to the system anomaly associated with node 302 in FIG. 3. Based on selection of node 302, the anomaly processor 104 may generate an anomaly fingerprint 406. In some examples, the feedback data from a domain expert may indicate that the fingerprint 406 represents an instance of "Rogue Report".
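A minimal sketch, not from the source, of such fingerprint-based re-detection: later time slots are scored with the tagged fingerprint (per Eqn. 3, with α = 1 and Φ as identity) and flagged when their score approaches that of the originally tagged slot. The 0.8 ratio threshold is an assumption.

```python
# Illustrative sketch (not from the source): scanning time slots with a tagged
# fingerprint and flagging slots whose matching score is close to the tagged slot's.
def scan_for_matches(fingerprint, amounts_by_slot, tagged_slot, ratio=0.8):
    """fingerprint: {event_type: rho}; amounts_by_slot: {slot: {event_type: c}}."""
    def score(c):
        return sum(c.get(T, 0.0) * rho for T, rho in fingerprint.items())
    baseline = score(amounts_by_slot[tagged_slot])
    return [s for s, c in amounts_by_slot.items()
            if s != tagged_slot and baseline > 0 and score(c) >= ratio * baseline]
```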
That is, the underlying cause of the fingerprint 406, and by association node 302, may be that the transmission of a very complex report may be holding up data traffic and blocking efficient rendering of system resources. The top portion of FIG. 4 illustrates the fingerprint matching score 404 from Eqn. 3 as a function of time. As illustrated, the system anomaly matches the fingerprint perfectly, as expected. There may be other times where the fingerprint match may be high, but not sufficiently high for automatic recognition. Tagging the system anomaly may indicate that the anomaly fingerprint may be stored, and any future event that matches the anomaly fingerprint sufficiently well may be associated with the same tag and identified as a system anomaly.

In some examples, the anomaly processor 104 generates an interactive analysis interface to be provided via the interactive graphical user interface 108, and the anomaly processor 104 modifies the output data stream based on interactions with the analysis interface. In some examples, the interaction processor 106 detects, based on the interactions with the interactive graphical user interface 108, a Partial Pattern anomaly type. In some examples, the interaction processor 106 detects, based on the interactions with the analysis interface, a Partial Pattern anomaly type. In some examples, the interaction processor 106 displays, in the modified output data stream, a service shutdown message with the detected Partial Pattern anomaly type.

Referring to FIG. 1, in some examples, the interactive graphical user interface 108 further provides, in response to a selection of the second selectable option, an analysis interface to analyze the event pattern. For example, in response to an entry in the actionable menu 408, the interaction processor 106 may prompt the interactive graphical user interface 108 to provide an analysis interface. In some examples, interaction processor 106 may receive feedback data indicative of a domain expert's interaction with the interactive graphical user interface 108. For example, the domain expert may examine the event pattern, and perhaps tag it, indicating its underlying cause. Tagging the event pattern may catalogue it as a known event pattern.

FIG. 5 is an example of an analysis interface 500 for event patterns. In some examples, the anomaly processor 104 may generate the analysis interface to analyze system anomalies. In some examples, the pattern processor may generate the analysis interface to analyze event patterns. In some examples, a snapshot of the analysis interface may be displayed in response to a selection of the second selectable option, such as, for example, clicking a first pattern 310 illustrated in FIG. 3. Event patterns are shown in the output data stream. Actionable menu 504 is shown, including second selectable options. For example, a menu button may be provided to "Apply" changes, name and "Save this pattern", select the severity of a pattern, a clickable option to "Enable pattern anomaly", "Delete this pattern", and "Do nothing". For example, the selected pattern represents an anomaly type, Partial Pattern, characterized by multiple events (e.g., a set of 30) appearing repeatedly in the same time slot. The selected pattern may be detected because this coincidence may not be likely to be random. The bottom of the pattern investigation interface lists the pattern events 502. The list of pattern events 502 may indicate, for example, that the 30 events likely correspond to a 'service shutdown' event.
In some examples, tagging the pattern as "Service Shutdown" may automatically trigger an anomaly type, "Partial Pattern".

FIG. 6 is an example display of the example output data stream of FIG. 3 after anomaly and pattern interactions. For example, the output stream illustrated in FIG. 3 may be modified based on the interactions described with reference to FIGS. 4 and 5. For example, interaction processor 106 may receive feedback data that the system anomaly associated with tag 302 and the event pattern 310 (in FIG. 3) have been tagged. Such feedback data may be provided to the anomaly processor 104 and the pattern processor. Based on the feedback data, two new partial pattern system anomalies may be detected, each corresponding to two instances where the "Service Shutdown" event patterns appeared partially. For example, system anomaly 602 may be identified, and pop-up 604 may be displayed, with information related to the system anomaly 602. For example, as illustrated in FIG. 6, the selected system anomaly 602 occurred on "2013-06-07" at "01:15:00". Also, for example, the anomaly type may be indicated as a "partial pattern". Other changes in FIG. 6 are a result of a preference given to the "Variety of Events" system anomaly due to tagging the "Rogue Report" system anomaly, and of deleting a flood of events system anomaly (not illustrated herein). Such interactions may re-evaluate anomaly scores for the system anomalies. For example, an original anomaly score 606 may be marked behind the modified anomaly score 608. Also, for example, word cloud 610 indicates "queuedtoc", "culture", and "neutral" as relevant words. This is different from the word cloud 306 in FIG. 3. Patterns 612 are also displayed based on the interactions described with reference to FIGS. 4 and 5.

In some examples, the anomaly processor 104 may detect future system anomalies based on a previously detected event pattern. For example, by identifying and defining event patterns, the anomaly processor 104 may identify system anomalies when the previously detected event patterns are broken or modified. In some examples, the pattern processor may detect future event patterns based on a previously detected system anomaly. For example, system anomalies associated with a low priority may aggregate to event patterns and may be flagged as high priority event patterns. In some examples, a system anomaly associated with a low priority may be identified based on an absence of a selection of a first selectable option associated with the system anomaly.

As described herein, the interaction processor 106 processes interactions of a domain expert with the interactive graphical user interface 108 based on an explicit tagging of a system anomaly or an event pattern, and also based on a passing interest indicated by a selection of a particular system anomaly or event pattern. Such feedback data may enrich the input data, enable detection of more refined system anomalies and event patterns, and reprioritize the displayed information on the interactive graphical user interface 108. The analytic tools, including the pattern processor and anomaly processor 104, may feed data to each other, and utilize each other to continuously enrich the information provided by the interaction processor 106.

FIG. 7 is a block diagram illustrating an example of a processing system 700 for implementing the system 100 for interactive detection of system anomalies.
Processing system 700 may include a processor 702, a memory 704, input devices 712, output devices 714, and interactive graphical user interfaces 716 communicatively linked to the input devices 712 and the output devices 714. Processor 702, memory 704, input devices 712, output devices 714, and interactive graphical user interfaces 716 are coupled to each other through a communication link (e.g., a bus).

Processor 702 may include a Central Processing Unit (CPU) or another suitable processor. In some examples, memory 704 stores machine readable instructions executed by processor 702 for operating processing system 700. Memory 704 may include any suitable combination of volatile and/or non-volatile memory, such as combinations of Random Access Memory (RAM), Read-Only Memory (ROM), flash memory, and/or other suitable memory.

Memory 704 also stores instructions to be executed by processor 702, including instructions for a data processor 706, an anomaly processor 708, and an interaction processor 710. In some examples, data processor 706, anomaly processor 708, and interaction processor 710 include data processor 102, anomaly processor 104, and interaction processor 106, respectively, as previously described and illustrated with reference to FIG. 1.

Processor 702 executes instructions of data processor 706 to receive input data 718 related to a series of events and telemetry measurements. The input data 718 may be data related to a series of events and telemetry measurements. In some examples, the input data 718 may be a stream of log messages. In some examples, raw input data 718 may comprise log messages, and may be received via the processing system 700, and a data processor 706 may process the input data 718 to generate structured log data.

Processor 702 executes instructions of anomaly processor 708 to detect the presence of a system anomaly in the input data 718, the system anomaly indicative of a rare situation that is distant from a norm of a distribution based on the series of events and telemetry measurements. In some examples, processor 702 executes instructions of a pattern processor to detect the presence of an event pattern in the input data. In some examples, processor 702 executes instructions of an anomaly processor 708 to generate an output data stream based on the presence of the system anomaly and/or the event pattern.

In some examples, each event in the series of events may be associated with a time, and processor 702 executes instructions of an evaluator (not shown in the figure) to determine, for a time interval, at least one of an anomaly intensity, an anomaly score, an anomaly fingerprint, and a fingerprint matching function. In some examples, processor 702 executes instructions of the anomaly processor 708 to detect a presence of a system anomaly based on the anomaly fingerprint and the fingerprint matching function.

In some examples, processor 702 executes instructions of an evaluator to determine, for the time interval, anomaly intensities and the anomaly score, where each anomaly intensity may be transformed, with respect to a distribution of anomaly intensities of the same anomaly type in reference time-slots, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity scores that may be combined to determine the anomaly score.
In some examples, each event in the series of events is associated with an event type, a time, and zero or more measurement values, and processor 702 executes instructions of an evaluator to determine, for each event type, an anomaly intensity amount for an anomaly type from events in the time interval, where for each anomaly type, the anomaly intensity amounts for different event types may be combined to determine an anomaly intensity and an anomaly fingerprint. In some examples, each event may be a signal, and the anomaly intensity in the time interval may be one of a maximal signal value per signal type, a range of signal values per signal type, and a value extremity score.

In some examples, processor 702 executes instructions of a pattern processor (not shown in the figure) to detect future event patterns based on at least one of the feedback data and detected system anomalies.

In some examples, processor 702 executes instructions of an evaluator to determine, for each time interval for an anomaly type, incomparable anomaly intensity amounts, where each incomparable anomaly intensity amount may be transformed, with respect to the distribution of associated incomparable anomaly intensity amounts in reference time intervals, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity amounts.

In some examples, processor 702 executes instructions of the anomaly processor 708 to generate an interactive analysis interface for system anomalies to be provided via the interactive graphical user interfaces 716. In some examples, processor 702 executes instructions of the anomaly processor 708 to modify the output data stream based on interactions with the analysis interface. In some examples, processor 702 executes instructions of a pattern processor to generate an interactive analysis interface for event patterns to be provided via the interactive graphical user interfaces 716.

In some examples, processor 702 executes instructions of an interaction processor 710 to display, via interactive graphical user interfaces 716, an output data stream based on the presence of the system anomaly and/or the event pattern. In some examples, processor 702 executes instructions of an interaction processor 710 to receive, via interactive graphical user interfaces 716, feedback data associated with the output data stream. In some examples, processor 702 executes instructions of an interaction processor 710 to provide the feedback data to the anomaly processor for operations analytics based on the feedback data.

In some examples, processor 702 executes instructions of an interaction processor 710 to identify selection of an anomaly fingerprint, and processor 702 executes instructions of an evaluator to compute a fingerprint matching score for the anomaly fingerprint in a second time interval, to determine the presence or absence of similar system anomalies in the second time interval, the fingerprint matching score computed based on a correlation between the anomaly fingerprint and anomaly intensity amounts in the second time interval.

In some examples, processor 702 executes instructions of the anomaly processor 708 to detect, based on the interactions with the analysis interface, a system anomaly associated with a Partial Pattern anomaly type, and executes instructions of an interaction processor 710 to display, in the modified output data stream, a service shutdown message with the detected system anomaly.
In some examples, processor 702 executes instructions of an interaction processor 710 to display the output data stream, including a first selectable option associated with the system anomaly and/or a second selectable option associated with the event pattern, receive feedback data associated with the first and/or second selectable options, and provide the feedback data to the anomaly processor 708. In some examples, processor 702 executes instructions of an interaction processor 710 to further provide, in response to a selection of the first selectable option, a pop-up card with information related to the system anomaly. In some examples, processor 702 executes instructions of an interaction processor 710 to further provide, in response to a selection of the first selectable option, an analysis interface to analyze the system anomaly. In some examples, processor 702 executes instructions of an interaction processor 710 to further provide, in response to a selection of the second selectable option, an analysis interface to analyze the event pattern. In some examples, processor 702 executes instructions of an interaction processor 710 to display a word cloud, the word cloud highlighting words that appear in log messages associated with the system anomaly more than in the rest of the log messages.

Input devices 712 include a keyboard, mouse, data ports, and/or other suitable devices for inputting information into processing system 700. In some examples, input devices 712 are used by the interaction processor 710 to interact with the user. Output devices 714 include a monitor, speakers, data ports, and/or other suitable devices for outputting information from processing system 700. In some examples, output devices 714 are used to provide interactive graphical user interfaces 716.

FIG. 8 is a block diagram illustrating an example of a computer readable medium for interactive detection of system anomalies. Processing system 800 may include a processor 802, a computer readable medium 812, a data processor 804, an anomaly processor 806, an interaction processor 808, and an interactive graphical user interface 810. Processor 802, computer readable medium 812, data processor 804, anomaly processor 806, interaction processor 808, and interactive graphical user interface 810 are coupled to each other through a communication link (e.g., a bus).

Processor 802 executes instructions included in the computer readable medium 812. Computer readable medium 812 may include receive instructions 814 of a data processor 804 to receive input data related to a series of events and telemetry measurements. Computer readable medium 812 may include detect instructions 816 of an anomaly processor 806 to detect system anomalies in the input data. In some examples, computer readable medium 812 may include detect instructions 816 of a pattern processor to detect event patterns.

Computer readable medium 812 may include generate instructions 818 of an interaction processor 808 to generate an output data stream based on detected system anomalies. Computer readable medium 812 may include display instructions 820 of an interaction processor 808 to display the output data stream via an interactive graphical user interface 810. In some examples, computer readable medium 812 may include feedback data receipt instructions of an interaction processor 808 to receive feedback data associated with the output data stream.
In some examples, computer readable medium 812 may include aggregate instructions of an anomaly processor 806 to aggregate heterogeneous system anomalies detected from heterogeneous input data, where the input data may include event streams, performance metrics, log messages, and event patterns.

In some examples, computer readable medium 812 may include instructions of an interaction processor 808 to display the output data stream, including a first selectable option associated with the system anomaly and/or a second selectable option associated with the event pattern, receive feedback data associated with the first and/or second selectable options, and provide the feedback data to the anomaly processor 806.

In some examples, computer readable medium 812 may include instructions of an interaction processor 808 to further provide, in response to a selection of the first selectable option, a pop-up card with information related to the system anomaly. In some examples, computer readable medium 812 may include instructions of an interaction processor 808 to further provide, in response to a selection of the first selectable option, an analysis interface to analyze the system anomaly. In some examples, computer readable medium 812 may include instructions of an interaction processor 808 to further provide, in response to a selection of the second selectable option, an analysis interface to analyze the event pattern. In some examples, computer readable medium 812 may include display instructions 820 of an interaction processor 808 to display a word cloud, the word cloud highlighting words that appear in log messages associated with the system anomaly more than in the rest of the log messages.

As used herein, a "computer readable medium" may be any electronic, magnetic, optical, or other physical storage apparatus to contain or store information such as executable instructions, data, and the like. For example, any computer readable storage medium described herein may be any of Random Access Memory (RAM), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, and the like, or a combination thereof. For example, the computer readable medium 812 can include one of or multiple different forms of memory, including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs), and flash memories; magnetic disks such as fixed, floppy, and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices.

As described herein, various components of the processing system 800 are identified and refer to a combination of hardware and programming configured to perform a designated function. As illustrated in FIG. 8, the programming may be processor executable instructions stored on tangible computer readable medium 812, and the hardware may include processor 802 for executing those instructions. Thus, computer readable medium 812 may store program instructions that, when executed by processor 802, implement the various components of the processing system 800. Such computer readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components.
The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.

The computer readable medium may be any of a number of memory components capable of storing instructions that can be executed by the processor. The computer readable medium may be non-transitory in the sense that it does not encompass a transitory signal but instead is made up of one or more memory components configured to store the relevant instructions. The computer readable medium may be implemented in a single device or distributed across devices. Likewise, the processor represents any number of processors capable of executing instructions stored by the computer readable medium. The processor may be integrated in a single device or distributed across devices. Further, the computer readable medium may be fully or partially integrated in the same device as the processor (as illustrated), or it may be separate but accessible to that device and the processor. In some examples, the computer readable medium may be a machine-readable storage medium.

FIG. 9 is a flow diagram illustrating an example of a method for interactive detection of system anomalies. In the method, an output data stream may be generated based on system anomalies detected in input data, the system anomalies indicative of rare events and events distant from a norm of a distribution of the series of events. The output data stream may be displayed via an interactive graphical user interface, the output data stream including an attribute associated with the output data stream. Feedback data indicative of selection of a system anomaly may be received from the interactive graphical user interface. The feedback data may be processed to modify the output data stream. An interactive analysis interface may then be provided, via the interactive graphical user interface, for operations analytics based on the selected system anomaly.

In some examples, the attribute associated with the output data stream may include an anomaly intensity, an anomaly score, an anomaly fingerprint, a fingerprint matching function, event patterns, a word cloud, an anomaly type, a service message associated with a selected system anomaly, an anomaly intensity for events in a time interval, an event count extremity factor, and an event type rarity factor. In some examples, each event in the series of events may be associated with a time, and the method may include determining, for a time interval, at least one of an anomaly intensity, an anomaly score, an anomaly fingerprint, a fingerprint matching function, and event patterns. In some examples, the method may include detecting system anomalies based on the anomaly fingerprint and the fingerprint matching function. In some examples, each system anomaly may be associated with a time, and the method may include determining, for a time interval, at least one of an anomaly intensity, an anomaly score, an anomaly fingerprint, and a fingerprint matching function. In some examples, the method may include detecting a presence of a system anomaly based on the anomaly fingerprint and the fingerprint matching function.
In some examples, the method may include determining, for the time interval, anomaly intensities and the anomaly score, where each anomaly intensity may be transformed, with respect to a distribution of anomaly intensities of the same anomaly type in reference time-slots, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity scores that may be combined to determine the anomaly score. In some examples, each event in the series of events is associated with an event type, a time, and zero or more measurement values, and the method may include determining, for each event type, an anomaly intensity amount for an anomaly type from events in the time interval, where for each anomaly type, the anomaly intensity amounts for different event types may be combined to determine an anomaly intensity and an anomaly fingerprint. In some examples, the method may include determining, for each time interval for an anomaly type, incomparable anomaly intensity amounts, where each incomparable anomaly intensity amount may be transformed with respect to the distribution of associated incomparable anomaly intensity amounts in reference time intervals, based on a distinctive residual rarity extremity score, into comparable, additive, and distinctive anomaly intensity amounts.

In some examples, the anomaly type may include a Flood of Events, where the anomaly intensity amount is an event count; a Variety of Events, where the anomaly intensity amount is an event occurrence indicator; a Flood of Rare Events, where the anomaly intensity amount is a product of an event count extremity factor and an event-type rarity factor; and a Flood of Extreme Signals, where the anomaly intensity amount is a maximal signal value per time interval transformed based on a distinctive residual rarity extremity score.

In some examples, the method may include identifying selection of an anomaly fingerprint, where a fingerprint matching score for the anomaly fingerprint is computed in a second time interval to determine presence or absence of similar system anomalies in the second time interval, and where the fingerprint matching score is computed based on a correlation between the anomaly fingerprint and anomaly intensity amounts in the second time interval.

In some examples, the method may include generating an interactive analysis interface to be provided via the interactive graphical user interface, and modifying the output data stream based on interactions with the analysis interface. In some examples, the method may include detecting, based on the interactions with the analysis interface, a system anomaly associated with a Partial Pattern anomaly type, and displaying, in the modified output data stream, a service shutdown message with the detected system anomaly. In some examples, the analysis interface may be an anomaly analysis interface to analyze the system anomaly. In some examples, the analysis interface may be a pattern analysis interface to analyze the event pattern. In some examples, the feedback data may include indication of a selection of a system anomaly, and based on the feedback data the interaction processor further provides, via the graphical user interface, a pop-up card with information related to the selected system anomaly. In some examples, the feedback data may include the anomaly score, a modified anomaly score, an anomaly fingerprint, and acceptance or rejection of an anomaly fingerprint matching result.
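The passage above describes the fingerprint matching score as a correlation between a stored anomaly fingerprint and the anomaly intensity amounts observed in a second time interval. A minimal sketch of that idea follows; the function name, the choice of a plain Pearson correlation, and the example event types and values are hypothetical and not taken from the disclosure.

# Illustrative sketch only: a fingerprint is treated as a vector of per-event-type
# anomaly intensity amounts; matching against a second time interval is a correlation.
fingerprint_match <- function(fingerprint, interval_amounts) {
  stopifnot(length(fingerprint) == length(interval_amounts))
  cor(fingerprint, interval_amounts, method = "pearson")
}

# Hypothetical values for three event types:
fp       <- c(db_timeout = 0.9, login_fail = 0.1, gc_pause = 0.7)  # stored fingerprint
interval <- c(db_timeout = 0.8, login_fail = 0.2, gc_pause = 0.6)  # amounts in a later interval
fingerprint_match(fp, interval)  # a score near 1 suggests a similar anomaly is present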
In some examples, the method may include displaying a word cloud, the word cloud highlighting words that appear in log messages associated with the system anomaly. For example, key terms may appear in log messages associated with the system anomaly more frequently than in the rest of the log messages. Accordingly, such key terms may be highlighted in the word cloud. Highlighting may be achieved via a distinctive font, font size, color, and so forth. In some examples, term scores may be determined for key terms, the term scores based on a modified inverse domain frequency. In some examples, the modified inverse domain frequency may be based on an information gain or a Kullback-Leibler divergence. In some examples, the method may include aggregating heterogeneous system anomalies detected from heterogeneous input data, where the input data may include event streams, performance metrics, log messages, and event patterns.

Examples of the disclosure provide a generalized system for interactive detection of system anomalies. The generalized system provides for analyzing and managing operations data. The purpose of the system may be to facilitate managing operations of complex and distributed systems, making sure that they are continuously performing at their best, and, whenever there may be a problem, to be able to resolve it quickly and save the problem fingerprint for future prevention and fast resolution. As described herein, data streams of various types stream into the system, which analyzes them automatically to provide an interface where data anomalies may be constantly prioritized so that the most significant recent system anomalies may be visualized prominently. Although the techniques described herein enable automatic detection of system anomalies (e.g., without a query), such automatic detection techniques may be combined with known system anomalies and/or query-based detection of system anomalies to form a hybrid system.

Although specific examples have been illustrated and described herein, the examples illustrate applications to any input data. Accordingly, there may be a variety of alternate and/or equivalent implementations that may be substituted for the specific examples shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the specific examples discussed herein. Therefore, it is intended that this disclosure be limited only by the claims and the equivalents thereof.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a functional block diagram illustrating an example of a system for interactive detection of system anomalies.
FIG. 2A illustrates an example of hypothetical anomaly intensities distributed uniformly, normally, and with a long-tailed Weibull distribution.
FIG. 2B illustrates an example of resultant unsatisfactory outliers obtained after a residual rarity transformation of the anomaly intensities illustrated in FIG. 2A.
FIG. 2C illustrates an example of resultant outliers based on a modified distinctive residual rarity anomaly transform.
FIG. 3 is an example display of an output data stream including a word cloud.
FIG. 4 is an example of an analysis interface for system anomalies.
FIG. 5 is an example of an analysis interface for event patterns.
FIG. 6 is an example display of the example output data stream of FIG. 3 after anomaly and pattern interactions.
FIG. 7 is a block diagram illustrating an example of a processing system for implementing the system for interactive detection of system anomalies.
FIG. 8 is a block diagram illustrating an example of a computer readable medium for interactive detection of system anomalies.
FIG. 9 is a flow diagram illustrating an example of a method for interactive detection of system anomalies.
Solved exercises worksheet

Test what you have learned about Newton's laws of motion with this list of exercises, each with its solution, organized by section.

Decomposition of Forces

Finding the angle a force forms with an axis
A force is given by the following expression: . Calculate the angle it forms with the horizontal.

Force from angle and magnitude
Determine the analytic expression of a force, knowing that it forms a 120º angle with the X-axis and has a magnitude of 5 N.

Concurrent and Parallel Forces

Concurrent or parallel forces?
Determine whether the following forces are concurrent or non-concurrent (parallel).

Resultant Force of a System of Forces

Massive force
If the following forces act on a body: find a single force whose effect is equivalent to exerting the 4 forces proposed.

Addition of forces in the same direction
Two friends, a stocky one and a slim one, push a car in the same direction. The first one exerts a force of 11 N and the second one a force of 7 N. What is the resultant force?

Addition of forces in opposite directions
A boy and a girl tie two cords to a ring and play to find out who is stronger. The boy grabs one of the cords and exerts a force of 11 N, while the girl applies 13 N at the same time. If both pull the cords in opposite directions... who will win?

Linear Momentum

Linear momentum
At a given instant of time, a rocket has a velocity of . Knowing that its mass is 2 kg, what is its momentum at that instant?

Linear momentum of several particles
Two pool balls have the following velocities: Knowing that m1 = 170 g and m2 = 156 g, calculate the linear momentum of the system formed by both balls.

Newton's Second Law

Force and trajectory
If the equation of the trajectory of a body is m, what is the force that makes it move if its mass is 4 kg?

Acceleration from forces
The following forces act on a body: What is the acceleration of the body if its mass is 2 kg?

Relationship between masses based on Newton's second law
Two bodies have accelerations of 6 m/s2 and 9 m/s2 respectively when subjected to the same force. What is the relationship between their masses?

Distance traveled by a body under a force
A 6 kg body initially at rest experiences a force of 10 N for 4 seconds. What distance does it travel?

Newton's second law and the nail in the wall
Pablito is driving a nail into the wall using a hammer with a mass of 2 kg. The impact speed is 6 m/s. What is the resistance force of the wood if the nail sinks 5 mm deep?

Newton's Third Law

Newton's third law
Can you say what a ball on a table interacts with, and where the forces are exerted in each interaction?

Impulse

Impulse
A 1 kg body is subjected to a single force for 5 s. If we know that its initial velocity is , calculate: a) the impulse; b) the linear momentum before and after the force is exerted.

Impulse of several forces
A 35 kg body at rest is subjected to the following forces for 3 minutes: If N is the unit given, calculate: 1. the resultant force; 2. the impulse; 3. the linear momentum variation; 4. the velocity acquired by the body during these 3 minutes.

Impulse from linear momentum variation
A 50 g rubber ball is hit by a racket. The ball's velocity before the hit is ; after the hit, the velocity is . Determine the impulse exerted by the racket on the ball and the value of the force, assumed constant, if they are in contact for 0.05 s.

Force from impulse
Alice had a traffic accident while driving at 90 km/h. Thanks to the seatbelt she was wearing, she survived.
Can you say what average force the seatbelt exerted if the impact lasted 0.05 s and Alice weighs 55 kg?

Interpretation of impulse in graphs
The following figure represents 3 different forces acting on 3 bodies of equal mass under identical conditions. Knowing that all 3 start from rest, which of them will gain the greatest final speed?

Momentum Conservation Principle

Conservation of linear momentum principle
A competitor shoots at a clay target using a 2.5 kg rifle in a skeet shooting competition. Knowing that the projectile is 23 g and is fired horizontally at 350 m/s, what is the recoil velocity of the rifle?

Collision velocity and momentum conservation
A 3 kg ball is moving to the left at 6 m/s. It collides with a 5 kg ball moving to the right at 2 m/s. After the collision, the second ball moves to the left at 3 m/s. Calculate the velocity of the first ball after the collision.

Momentum conservation in explosions
A 10 kg body is moving at and, after an explosion, it breaks into 3 pieces. The first piece is 3 kg and moves at , the second piece weighs 5 kg and moves at . What velocity does the third piece reach?

Momentum conservation - variable mass
A truck moves at 40 km/h in a straight line along a conventional road. An open container is placed on the truck; the total mass is 1800 kg. When it starts to rain, the container fills with water at 6 L/min. Neglecting friction, what is the speed of the truck after one hour of rain?
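As a worked illustration of the momentum-conservation exercises above, here is the arithmetic for the skeet-shooting recoil problem, using the 2.5 kg rifle mass, 23 g projectile and 350 m/s muzzle speed stated in the exercise (the sign convention, with the projectile direction taken as positive, is an assumption):

\[
  0 = m_r v_r + m_b v_b
  \quad\Rightarrow\quad
  v_r = -\frac{m_b}{m_r}\, v_b
      = -\frac{0.023\ \text{kg}}{2.5\ \text{kg}} \times 350\ \text{m/s}
      \approx -3.2\ \text{m/s},
\]

so the rifle recoils at roughly 3.2 m/s in the direction opposite to the projectile.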
https://www.fisicalab.com/en/subject/newtons-laws-of-motion/exercises
I am currently augmenting the pollen data from two sites I studied during my PhD research, thanks to a grant from the Swiss Foundation for Alpine Studies. The one I'm dealing with at the moment is a small peat bog at a locality called Saglias, near the village of Ardez in the Grisons, Switzerland.

I usually aim for high quality and reliability in palynological data. Depending on the context, I try to identify about 1000 pollen grains per sample, or 500 tree pollen grains, and look at the taxa accumulation curve. These two indices are easily accessible in real-time counting thanks to PolyCounter, the software I'm using.

Now, some samples clearly want to drive analysts crazy. Most often they contain very few pollen grains. A typical reason for this is poor pollen preservation. It can be useful to have a closer look at them and see if there is something one can do to improve the situation. I'm doing this by looking at a few other parameters. Again, PolyCounter is your best friend. It's easy to import the count data and metadata (such as the number of marker spikes added) into R and compute variables to address key questions:

- how many markers were counted? marker_counted
- what proportion of the total markers added does this represent? marker_prop
- what is the total pollen concentration of the sample? pollen_conc
- did I count at least 1 marker for 2 pollen grains, for reliable concentration values? ratio_mp > 0.5

quality_control
# A tibble: 113 x 7
   depth pollen_sum marker_counted marker_total marker_prop pollen_conc ratio_mp
   <dbl>      <dbl>          <dbl>        <dbl>       <dbl>       <dbl>    <dbl>
 1    14        550           2634        13500      0.195        2819.     4.79
 2    22        933           1090        13500      0.0807      11556.     1.17
 3    30        880           1792        13500      0.133        6629.     2.04
 4    38       1052           1685        13500      0.125        8428.     1.60
 5    46        594           1459        13500      0.108        5496.     2.46
 6    54        573           1036        13500      0.0767       7467.     1.81
 7    62        835           1410        13500      0.104        7995.     1.69
 8    70        594           1198        13500      0.0887       6694.     2.02
 9    78        586           1303        13500      0.0965       6071.     2.22
10    86       1774           2978        13500      0.221        8042.     1.68
# … with 103 more rows

I'm not going to explain how to get this since it is a bit off-topic, but I'd be happy to provide assistance if you'd like to do it. Then, a bit of ggplot magic does the trick:

library(tidyverse)  # ggplot2, stringr (str_c) and the pipe are assumed loaded

core_name <- "ASG-2012"
qual_plot_title <- str_c("Quality Control of Palynological Data from the ", core_name, " Core")

quality_control %>%
  ggplot(aes(pollen_sum, marker_counted)) +
  geom_abline(slope = 0.5, linetype = "dashed") +
  geom_path(alpha = 0.1) +
  geom_point(aes(size = pollen_conc, colour = marker_prop)) +
  geom_text(aes(label = depth), hjust = -0.1, vjust = -0.1) +
  scale_colour_viridis_c(direction = -1) +
  coord_trans(x = "log10", y = "log10") +
  labs(
    title = qual_plot_title,
    subtitle = str_c("From count data of ", nrow(quality_control), " samples, labelled by depth, as of ", Sys.Date()),
    x = "Pollen sum",
    y = "Marker counted",
    colour = "Proportion of marker counted",
    size = "Pollen concentration"
  ) +
  theme(legend.position = "bottom") +
  guides(
    colour = guide_colourbar(title.position = "top", barwidth = unit(6, "cm")),
    size = guide_legend(title.position = "top")
  )

This is a scatterplot of marker counted against pollen sum for each sample, with log scales to emphasize small values. Concentration is shown by the size of the dots, and the proportion of marker spikes counted over the total added is shown with a colour gradient. The dashed line represents the 1-marker / 2-pollen ratio (= 0.5). Smaller dots therefore indicate poor pollen preservation. Yellow dots indicate samples that didn't benefit from a significant counting effort.
The x and y axes (pollen sum and marker spikes counted) indicate the counting effort right away, and help to visualize the counting ratio (i.e. the dashed line). Three samples have a counting ratio lower than 0.5. Not many marker spikes were counted for these samples. Concentration seems to be high, but it could be biased. The situation of these samples is not catastrophic, but it could be worth counting a little more and seeing if it helps. A dozen other samples show pollen sums of about 200–250 pollen grains, and 500–1000 marker spikes counted. The concentration appears to be low, and it is probably reliable given the high proportion of marker spikes counted. Investing a bit of time here could improve the reliability of the data, but the taxa accumulation curve already looked pretty good. Maybe that's just the way it is. The most concerning are the four samples in the bottom-left corner. Very low pollen sum, and very few marker spikes counted. In other words, the slides were almost empty. It could indicate a problem during the preparation of the samples. Closer attention to them is much needed! Finally, the subtle, transparent grey lines that connect samples help to see if the same concerns regarding pollen preservation and data quality would apply to clusters of successive samples. This would point, for instance, to environmental factors as a potential cause.

I am using these diagnostic tools to help make pragmatic decisions. I'm trying to get data that is as reliable as possible, as fast as possible. After a first run of analyses, I can come back to this plot and see where the bigger flaws in data quality are. The situation of some samples will be easy and fast to fix with, say, one extra hour of counting, while for others it is probably not worth it. That would be a loss of a precious resource in scientific research: time.
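As an aside, for readers who want a starting point for the quality_control table shown earlier: the sketch below is not the author's code. It assumes a data frame named counts with columns depth, pollen_sum and marker_counted, a known number of marker spikes added per sample, and a simplified concentration formula that ignores sample volume.

library(dplyr)

marker_total_added <- 13500   # marker spikes added per sample (value visible in the table above)

quality_control <- counts %>%
  mutate(
    marker_total = marker_total_added,
    marker_prop  = marker_counted / marker_total,   # share of added spikes recovered
    pollen_conc  = pollen_sum / marker_prop,        # simplified proxy; a true concentration would also use sample volume
    ratio_mp     = marker_counted / pollen_sum      # >= 0.5 means at least 1 marker per 2 pollen grains
  )

For the first row of the table above, this gives 2634 / 13500 ≈ 0.195 and 550 / 0.195 ≈ 2820, matching the published values up to rounding, so the proxies are at least consistent with how those columns were derived.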
https://www.benscoat.eu/index.php/2019/06/29/quality-control-of-pollen-data-at-a-glance/
The SQL Server Procedure Editor allows you to view, edit, compile and run SQL Server stored procedures and functions. It supports all SQL Server versions starting from SQL Server 2000. It also supports all platforms that DB Solo runs on: Windows, Linux, MacOS X and Solaris.

To start the procedure editor, navigate to the function or procedure in the schema browser, select the 'Source Code' tab and click on the 'Edit In Procedure Editor' button. Alternatively, you can right-click on the procedure in the schema browser and select 'Edit In Procedure Editor'. At this point a new procedure editor tab will be opened for the procedure you selected. The left side of the procedure editor has two buttons that will show a mini-panel when selected. The Browser and Directory panels work the same way as in the query window.

The procedure editor has its own toolbar that allows you to perform various operations on the selected procedure. It also allows you to select a different procedure to edit. The procedure can be under a different server or a different database within the same server. The toolbar has the following buttons:

To compile the procedure using the latest code in the editor, click on the Compile button in the toolbar. The bottom of the screen will show the results of the compilation in a tab named 'Compiler'. In case of errors, the errors will be listed in this tab, including the line numbers and SQL Server error codes. When you select an error from the list of errors, the corresponding line will be highlighted in the source editor. The first error will be selected automatically. The screen shot below shows an example where a variable name has been misspelled.

After compiling the code successfully, you can run the program unit. To do so, click on the 'Run' button in the toolbar. This will bring up the Run dialog, which will contain a T-SQL block that will be executed when you click on 'Run'. T-SQL code will be automatically generated to declare the required variables and call the procedure/function correctly. Notice that you can modify the generated code, e.g. to change the values that will be passed into your program unit. The modified code will be automatically persisted for further executions. To revert back to the automatically generated block, click on the 'Reset' button. To close the dialog without running the procedure, click on the 'Close' button. When the program unit is running, you can stop the execution by clicking on the 'Stop' button in the toolbar. After the procedure has been executed, you will see the execution results in the 'Run' tab at the bottom of the screen. Output from the print statements, if any, will also be shown in this tab.

The 'Compare' button in the toolbar allows you to compare the source code in the editor and the code in the database. This is useful if you don't remember what exactly you modified and don't want to compile before you know for sure. When the button is clicked, a new dialog will be shown that displays the code in the editor on the left and the code in the database on the right. All lines that are different/added/removed will be highlighted.
http://dbsolo.com/help/sqlserver_procedure_editor.html
Some theories that propose an electromagnetic nature of consciousness are reviewed.

" .. Some theorists turn from these circuits to their electromagnetic fields to deal with such difficulties concerning the mind's qualia, unity, privacy, and causality. They include Kohler, Libet, Popper, Lindahl, Arhem, Charman, Pockett, John, McFadden, Fingelkurts, Maxwell, and Jones. .. [this review] concludes that while field theories face challenges, they aren't easily dismissed, for they draw on considerable evidence and may avoid serious problems in neuroscience concerning the mind's qualia, unity, causality, and ontology."

For the author, one of the most recurrent problems facing these theories is the absence of any telepathic phenomenon (given that the EM fields are not totally confined to the brain). On this website, however, telepathic and non-local phenomena are treated, and it is argued that precisely the different electromagnetic configurations and emissions have much to say on this. He also detects some other problems, but they can be solved if experimentally proven functions and solutions from other theories are applied; for example, regarding one class of Globalist Field Theories he says:

"This theory explains the mind's unity without problematic synchrony. But it's unclear about how colours and shapes bind together in their right locations in images."

Nevertheless, the theory of Ghosh et al. can be taken into account, and one can see that everything can be integrated into an always-present electromagnetic field, with a fractal structure, by means of its resonances.

The different theories can be grouped into distinct classes, for example in relation to how the mind exists relative to fields. The theories most supported on this website are the computationalist field theories, because electromagnetic fields always have an informational nature (albeit a very simple one in some cases, and an incredibly complex one in locations like the brain), and their transformations and interactions (computation) make consciousness appear. There are also other kinds of electromagnetic mind theories that still require a non-physical mind in their theoretical construct (dualistic theories); these are not supported here, because they don't solve anything, and there is evidence and work pointing in a more explicit direction: that electromagnetic fields are the mind (and matter, in general, can be said to act as the memory).

Despite the drawbacks he mentions, the author also notes facts that underpin those theories:

" .. there's evidence that sensory qualia correlate with specific spatio-temporal patterns in neural fields, and with specific electrical activities in sensory detectors."

" .. EEG studies by Freeman (e.g. 1991) show that various odours (e.g. from bananas or sawdust) correlate with specific spatial patterns distributed across mammalian olfactory areas. The patterns altered when animals were trained to associate the odours with rewards, showing that the correlations were with odour awareness, not just chemical stimuli."

" .. transcranial magnetic stimulation (TMS) produces fields as strong as the brain's own native fields, and these TMS fields make nerves fire."

" In Maxwell's view, fields are the best candidate for what's intrinsically conscious, for only they are continuous and smooth like visual images (Maxwell, 1978, p. 398). This is his solution to Sellars' (1965) 'grain problem' of how discrete, grainy molecules and cells in brains create continuous, smooth images."
A computationalist field theory with aspects that are treated and valued on this website is that of McFadden:

" McFadden (2002b, p. 25) argues that synchrony is a global event that no neurons can oversee, so it isn't even detectable while encoding images. Instead binding comes from fields, though synchrony still plays a role. Synchronized firing by neurons doing similar tasks amplifies their contribution to the brain's electromagnetic field, but it's the field that does the binding."

Here the synchronized activity propagates across the brain at the speed of light.

It is interesting how the last-mentioned author (McFadden) faces the free-will issue, explaining that although the conscious field is deterministic, it is also free in that it affects behavior instead of being epiphenomenal, assuming that determinism can be compatible with free will construed as self-determination. He also says that the private experience of those information fields is not different from external world processes; instead, the mental is constructed from their inner, intrinsic nature (which echoes Chalmers' neutral monism, where the basic stuff of the world is not mental or physical, but neutral).

Generally, McFadden (and Jones) take a panpsychist view, because for them information is conscious at all levels, which agrees with the approaches addressed here:

" The 'discrete' consciousness of elementary particles is limited and isolated. But as particles join into a field they form a unified 'field' consciousness. As these fields affect motor neurons, the brain's consciousness is no longer an ineffectual epiphenomenon, for its volition can communicate with the world."

It is also interesting when the paper's author says that Maxwell treated all fields as conscious (like Priban), but that electromagnetic fields are the only energy fields with strength along neural circuits.

Ghosh, S., Aswani, K., Singh, S., Sahu, S., Fujita, D., & Bandyopadhyay, A. (2014). Design and construction of a brain-like computer: a new class of frequency-fractal computing using wireless communication in a supramolecular organic, inorganic system. Information, 5(1), 28-100.
https://emmind.net/comments/Endogenous_Fields-Mind/General/EM_Mind/Electromagnetic-Field_Theories_of_Mind.html
Background: During the last 15 years, water availability and flow patterns of the Mekong River Basin have been changing dramatically due to a number of complex factors, including the development of mainstream and tributary hydropower dams; the development of different types of large-scale land and water use infrastructure projects; and, arguably most importantly, the more frequent and increasingly severe impacts of climate change. While the Ayeyawardy-Salween River Basin has not yet seen the same level of impacts from large-scale infrastructure developments and landscape change, it has also undoubtedly been increasingly affected by the more frequent phenomena of climate change. Despite some resistance, multiple hydropower and large-scale infrastructure development projects are expected to go forward in and around the mainstream courses of the Ayeyawardy and Salween Rivers in the near future.

Despite the dramatic changes observed, scientific and evidence-based knowledge and information about the underlying key change factors and their impacts on the water availability and flows of the major Greater Mekong Subregion river basins are still inconsistent, disconnected, largely speculative, and often not publicly available. To date, there is no existing initiative or platform that consistently helps to gather and synthesize these different types of information and knowledge into one format that all stakeholder groups, especially the affected ones, can access and use for decision making or other action. This is perhaps due to a lack of consensus on protocols, political interests among the riparian countries, and the limits of available inclusive technologies. The transboundary institutions, initiatives, technologies, and research that do exist are still largely targeted at certain groups, such as official agencies and academia, and tend to be political or business-interest based.

Building on existing initiatives, the MRC Mekong Transboundary Cooperation, and the WB-funded Mekong IWRM Program, SIP-LMI aims to explore the possibility of enhancing coordination and cooperation across additional stakeholder groups by facilitating a dialogue about the idea of an inclusive platform initiative which can generate and share free and reliable real-time water and flow data for the whole of the basins. Such an idea also aims to help respond to the doubts and multiple questions from stakeholders about emergencies related to climate uncertainty and extreme events, which have been happening more frequently and more intensely over time.

Key guiding questions: The following questions will serve as the foundation for an open and critical dialogue amongst representatives from diverse stakeholder groups. The intent will not be to 'answer' each question, but rather to share and gather a variety of perspectives that will inform future discussions and actions.

1. What are the different views and perspectives about the demand for "inclusive platforms for sharing river basin water and flow data"?
2. What are the relevant ideas about existing technologies and initiatives that have potential for contributing to such platforms?
3. What are the diverse opinions on how such platforms should look, with emphasis on taking a participatory approach?

Target session participant groups
- Representatives from key affected groups and water user representatives from major regional river basins.
- Official representatives from Mekong countries.
- Technical water scientists, modellers, project managers, and researchers.
- Hydropower dam operators.
- Water diversion project operators.
- Satellite and remote sensing scientists.
- River basin managers.
- Relevant academic and NGO representatives.
https://mekongsip.org/past-events/dialogue-session-on-inclusive-platform-for-water-and-flows-data-sharing-session-no-39-october-27-1330-1530-ruby-room-in-greater-mekong-forum-on-water-food-and-energy/
The purpose of the Web Services Business Process Execution Language TC is to continue work on the business process language published in the Business Process Execution Language for Web Services (BPEL4WS) specification in August 2002. Continuing the approach and design used in BPEL4WS, the work of the BPEL TC will focus on specifying the common concepts for a business process execution language which form the necessary technical foundation for multiple usage patterns, including both the process interface descriptions required for business protocols and executable process models. It is explicitly not a goal of the TC to specify bindings to specific hardware/software platforms and other mechanisms required for a complete runtime environment for process implementation.

The Technical Committee will take advantage of the OASIS-provided services for such things as e-mail lists and archives, and also web pages for tracking progress. E-mail archives will be visible to the public.

The scope of the TC is the specification of the core elements and functionalities of BPEL4WS. The two extension specifications for the usage patterns of business protocol description and executable process description are normative, mandatory extensions to the core specification and will include only the essential feature extensions required for the given usage pattern. Specification work in areas not explicitly listed in the scope above is out of scope for this TC but will be considered for future TC charters layered on the results of this TC. It is the intent of the proposers that the TC address requested clarifications to this charter, seeking to resolve ambiguities and narrowing the scope of work. Other requested charter changes that are not clarifications may be noted and archived for possible future work after the conclusion of this TC's lifecycle. Each of the process mechanisms will use implementation- and language-neutral XML formats defined in XML Schema.

1. Accept contributions as input within the TC's defined scope. BEA, IBM, Microsoft, SAP and Siebel intend to submit the Business Process Execution Language for Web Services (BPEL4WS) V1.1 specification at the first meeting of the TC. This BPEL4WS document is an update to the BPEL4WS specification published by IBM, Microsoft, and BEA on August 9th, 2002. The revised document is a modularized and updated version of the original specification that clearly identifies the core concepts and required extensions for BPEL4WS.

2. Produce as output a specification, in one or more documents, for the "Web Services Business Process Execution Language", hereafter referred to as the "resultant specification". This resultant specification will reflect refinements and corrections that are identified by the TC members within the scope of the TC charter. In the interests of stability, continuity and speed, the committee will not make arbitrary changes, i.e. scope will be limited to material technological improvements. At the TC's option, suggestions for future features, out-of-scope extensions, harmonization with other or broader work and the like may be noted and archived for possible future work after the conclusion of this TC's lifecycle. The final draft of the resultant specification will be due within 9 months of the first meeting.

3. Identify relevant Web services efforts to assist in leveraging the resultant specification as a part of their specifications or solutions.
4. Establish appropriate relationships and coordinate with the chairs of the other business process related standards organizations and industry groups as appropriate, and in accordance with the OASIS Bylaws and TC Process.

5. Oversee ongoing maintenance and errata of the resultant specification.
https://www.oasis-open.org/committees/wsbpel/charter.php
Fight Club in Analysis
Beyond the Hero's Journey
Dirk Blothner

It is usual to understand the effect of a film as an identification with the hero of the story. The audience puts itself in the position of the protagonist and experiences his or her actions and sufferings as if they were their own. But when, in an in-depth interview, the viewers are asked to describe precisely what they actually feel and think when they view a certain film, another picture becomes manifest. It then becomes clear that the viewers' journey does not necessarily coincide with that of the hero. It is precisely the most moving and exciting films which tend to stimulate a double life. They unfold their magic because they entertain a partially unconscious development of a complex in the viewers, of which the story on the screen represents only the visible side.

Even if script-writers do not know about this concept of the effects of films based on depth psychology, they nevertheless take account of it in their daily work, for instance, when they draft several subplots and rely on the centre plot and the subplots coalescing into a coherent connection in the viewers' minds. Ensemble films and series would fall apart into individual parts if the viewers could not bring this unifying activity into play. But even when they consider in which sequence they want to arrange the stations of the hero's journey, so that the climax of the story can unfold its maximum effect, script-writers have the viewers' inner journey in their sights.

As the example of Fight Club below will show, some films go even a step further. In them, the script-writers and directors put the focus of their work explicitly not only on the visible story, but also on the invisible effectual processes. They take into account that the viewers are not only interested in observing the hero's actions and suffering, but also want to have an unusual experience. Experienced script-writers therefore pose two questions for every scene. First, what does the protagonist need on his or her journey? And second, what do the viewers need on their journey, what effectively takes their experiential process further?

They understand film, perhaps similar to music, as a medium for modelling experience. Even though it is largely unconscious, the experience of a film is not a mystery. Any script-writer can work out an idea of it with empathy and a methodical procedure, and incorporate it in developing the script. Above all it is especially important to understand that the viewers are not already touched emotionally when the actors on the screen manifest feelings. Desperate gestures, tears and outbursts of anger are not converted one-to-one in the viewers' experience. If you want to bring certain feelings closer to the viewers, you have to carefully prepare the viewers for them. They will only feel the pain of betrayal when beforehand they have actually experienced the feeling of commitment and loyalty, and the victory of love will move them to tears most surely only if a betrayal made probable by the story finally does not take place after all.
It all depends on building up the plot in such a way that the psyche, at its own speed, can develop a complex of meaning with a consistent sequence of turnings which transfixes the viewers and finally leaves them with the feeling of having gone through a really moving experience. Beyond the hero's journey therefore does not mean developing stories which diverge from Campbell's or Vogler's models, although this approach is also suitable for them. Rather, it aims at the invisible psychic process which is triggered by the visible film and entertained and sustained for two hours. This process has its own rules and therefore places particular requirements on script-writers. When viewers are fascinated by a film, they are absorbed completely by this process. They forget who they are and are transported into the film's universe. They evaluate its quality according to the experience which they go through in this transformation.

Shaping effective scenes

Scenes unfold strong effects when they have three features. First, when they are thoroughly formed by a universal human predicament. Second, when they have a perceptible twisting on the axis of this predicament. And third, when they are suitable for activating the viewers.

Universal human predicaments are the salt of human life. They determine its dynamics and its conflicts, and they also provide orientation. Films which do not make these core structures of human life perceptible leave us cold and seem contrived. Scenes in which the characters exchange information about an event which has taken place somewhere else and at another time will quickly make the viewers unsettled or induce yawning. That is not surprising. They wanted to see a film and find themselves in a talk show. But as soon as a power struggle, a betrayal or an effort at intimacy can be felt in the dialogue, the scene comes alive, and strings are plucked in the viewers which they share with all humans. They are now not only the observer of a happening, but a part of it. Whether a scene is experienced as authentic is decided, according to our psychological knowledge, not so much by whether it was filmed at the original location, but by whether its actions are based on universal human predicaments or not. For real life takes place between power and impotence, between commitment and betrayal, and between humans coming closer and failing to do so. The viewers rediscover their own experiences of life in such scenes and pay thanks through concentrated attentiveness.

The second step in shaping effective scenes takes the concept of universal human predicaments a step further. It has to do with the basic human need for change. As long as we are in a process of change, we feel we are alive. Nothing is more unbearable than the feeling that everything stays the same. If this feeling arises in the cinema, it is immediately noticeable through the fidgeting and conversations in the audience. With its activities the audience breaks through the standstill, as if it wanted to assure itself that it can change something. Experienced script-writers therefore try to give every scene a perceptible twist. They gain an idea of the initial situation and build up the action and dialogue in such a way that at the end something has changed. All the authentically experienced twists take place on the axis of the universal human predicaments addressed above. Power turns into impotence and vice versa; the feeling of closeness turns into the heartache of misunderstanding.
It is important, however, to take care that the change does not just take place in the characters but also in the viewers. Only then will the viewers really be transfixed by the scene.

Script-writers sometimes have a false picture of the intelligence of their audience. You do not need a high IQ to understand a film. The unconscious intelligence of the psyche is far more astonishing and creative than the academic construct measured by intelligence tests. Even if viewers can hardly articulate in their own words what they liked so much about a film, people with only basic education are still in a position to grasp the most complicated gags and situations. Viewers who have grown up under the influence of the electronic media actually demand to be integrated into the ongoing productions of meaning in a film. If they do not have anything to do, they feel that they are not being taken seriously and finally start being derogatory about the film. Therefore, thirdly, authors are well advised to construct scenes in such a way that the viewers have a job, for instance, by leaving something out and leaving it to the public to close the gap. Or by withholding information from a character which the viewers already have, thus giving them the possibility of understanding one and the same scene from two different perspectives.

A means of activation which is currently very popular consists in withholding important information from the viewers and surprising them with it at the end of the film. When the viewers in The Sixth Sense finally find out that the psychotherapist, Malcolm Crowe (Bruce Willis), has been dead for more than a year and only for this reason was in a position to treat little Cole (Haley Joel Osment), the viewers go through the whole story once again in their minds and attribute a completely different meaning to it. This reinterpretation which suddenly breaks in was a decisive building block for the film by M. Night Shyamalan. Young viewers in particular develop a great deal of respect for films which activate them in such a skilful way.

Effective scenes lift out a universal predicament from human reality which is connected with the film's theme. They lead viewers on this track into their world and give them the opportunity to adjust to their tone and tint. They then give the mood thus generated a twist in such a way that in the end, the world of the scene has completely changed. In that such scenes include a conception of the viewers' activity, they engender the feeling of also having been the creators of the action.

The hero's journey

Because he cannot sleep, in Fight Club an anonymous single man (Edward Norton), the narrator of the film, goes to various self-help groups in search of suffering. When he learns to cry without restraint, he feels better. But his order gets confused when he meets Marla Singer (Helena Bonham Carter), a tourist of misery like himself, for whose direct eroticism he does not feel to be a match as a man. In his distress he develops the psychotic alter ego, Tyler Durden (Brad Pitt), a charismatic macho with steely muscles, and in this way he can partake in the erotic adventure. At the same time he establishes his own self-help group, the so-called Fight Club, in which young men bash each other up in search of their feelings. Our hero believes that he is living together with his alter ego in a large house. Together they build up a paramilitary organisation out of the Fight Club which has the aim of shaking world capital to its foundations.
Only when the first victims occur does the whole development seem strange to the narrator. But when he finally recognises his psychosis, it is already too late. As Tyler Durden, he has put the fatal Enterprise Chaos into motion, which can no longer be stopped. Thousands of young men have been enthused by his critical social ideas and no longer want to give them up. In the end, he can only free himself from his psychotic ego by putting a bullet into his head. Badly injured, but liberated from his hallucinations, he now wants to risk a love relationship with Marla.

The audience's journey

That is the rough outline of the hero's journey in Fight Club. While the viewers follow it, a psychic journey is formed in their own experience which partly runs parallel with the outer journey, but partly also unfolds its own life. I would like to describe very briefly a few stations on this journey. I base my comments on in-depth interviews which Jennifer Richards carried out at the University of Cologne.

Fight Club makes an unusually strong impression. Not only young men, but also women and older viewers left the show shaken up and were still experiencing after-effects several weeks thereafter. Because of the film, a young woman started painting in oils; a female teacher had the feeling of having found a new point of access to her vocation. Many assessed the film to be a document of the times, a present-day work of art. The film's satirical approach gives expression to experiences, observations and thoughts which have already stirred in many viewers in view of contemporary everyday life. The film shows how greedily people orient themselves towards changing advertising strategies and at the same time are in search of their true selves. The film's portrayal of self-help groups makes it possible to experience the degree to which the search for feelings has become a substantial content of life. Many viewers also find it characteristic of our times that the protagonist has to calculate in his job whether a recall action for the automobile maker he works for is more worthwhile than the payment of compensation to the accident victims. In the viewers' experience, the opening scenes condense more or less explicitly into a sharply drawn picture of the state of Western civilisation at the turn of the millennium. A society in a diffusion of values, guided by abstractions and obsessions. People move in circles of moods and feelings without an orientation toward a collective aim. Feverish running on the spot. The hero is a cog in this machine. Many feel that they have been understood by Fight Club in their own reservations and misgivings.

The second stretch begins with the appearance of the alter ego, Tyler Durden, and the beginning of the fights. It is as if the machine which was idling at the beginning has now been put into gear. The viewers feel that fist punches mean values. With them something is set in motion whose direction cannot be specified, but the change is welcome against the background of the opening sequences. Even if the faces which have been beaten bloody are hard to look at, the first fight scenes have the effect of a surprising clarification, like the longed-for impulse for a development with consequences. Whereas up to this point the young men dealt blows to each other, in the next twist of the plot they direct their blows increasingly toward the outside, at first not yet in the form of violent attacks, but as partly absurd and partly liberating actions against the fear of change.
Fed by Tyler Durden's critical social maxims, the foundations of contemporary civilisation are dealt heavy blows. It becomes more than apparent to what extent people are paralysed by their compulsions and how faint the prospects are that their great expectations will be fulfilled at some time. Even though Tyler's followers, who are dressed in black, look as if they are members of a fascist militant group, the viewers nevertheless experience their joining together into a kind of underground army as an emotional perspective. For, in this way, a clearly discernible direction with consequences replaces the initial circling. This satisfaction shows that among young people today, even the spark of a decisive act suffices to fire their yearning for a common destiny. No matter how little they believe that politicians and business leaders can secure a way into the future, in the cinema they very much enjoy the possibility of seeing a decisive revolution with consequences come about. Fight Club is therefore not only a satire, but also provides for two hours a compensation for human needs which are not fulfilled by present-day civilisation.

The fourth station of the journey leads the inner journey back to its psychologizing beginning. For the narrator, the Enterprise Chaos goes much too far and he tries to stop it. He finds out, however, that he himself put it into motion as Tyler Durden. It was all a product of his psychosis, into which he fled out of fear of the woman. If one keeps in mind the development which has been set into motion up until that point, rooted deeply in the problems of contemporary civilisation, and the satisfaction of liberating oneself from the mad spinning of the top by means of a decisive blow, it then becomes comprehensible that the return to the hero's psychology is not able to absorb this momentum. When the narrator shoots himself in the head, therefore, for many it seems like a contrived solution in order to give the development which has been set into motion a politically correct end.

Viewed from beyond the hero's journey, it becomes understandable what kind of jolt Fight Club gives to the psyche of its viewers, many of whom are young. They live in a civilisation which does not tell them what it is worth living and dying for. Even after 11 September 2001, nothing much has changed in this regard. As Tyler Durden says in one of his speeches, they have an inkling that commodity consumption, music radio stations, parties and changing fashions cannot divert them in the long run from the fact that this society does not offer them any path into the future. They look at the film story, but in doing so, inklings and hopes are aroused which they sense within themselves, independently of Fincher's film. The film does not create them, but only gives them a sharp form. Characters who assert themselves forcefully appear in many contemporary action films. The decisive factor in Fight Club is that the film puts the charismatic Tyler Durden into relation with an agitated life which, however, does not have any consequences. In this way, for the duration of the film, the viewers are led out beyond the borders of the civilisation in which they live. This is the way in which the most exciting films of our times work, not just productions like The Usual Suspects or Fight Club. In blockbusters like Forrest Gump, Cast Away and The Matrix, too, we have been able to observe similar developments of complexes which are deeply rooted in fundamental problems of contemporary civilisation.
What conclusions can script-writers draw from this? An authentic story, a good plot are always the starting point for an effective film. In my article, however, I want to make the case for regarding the hero's journey not as an end in itself, but as a medium for modelling rousing psychic journeys on the part of viewers. Script-writers can implement this concept in two respects. On the one hand, my article suggests that film should be understood consistently as a process having effects. The story and the psyche do not have a simple cause-and-effect relationship with one another. The first scenes of the hero's journey set a complex of meanings into motion in the viewers' experience. This complex brings expectations, limitations and substantial focuses into play. It has effects on the interpretation of the following scenes and gives them meaning. It can, however, also be sharpened by them, differentiated or can have its polarity reversed. When script-writers think their way into such processes and ask themselves in which way a twist to the plot further develops the state which the film has generated in the viewers up until that point, they have the beyond which is addressed here in view. The pointers given above on forming effective scenes can be used as a foundation for this. On the other hand, I want to make it clear that the effective contents of films do not come across to the viewers so much as information, but rather already lie dormant in the viewers in the form of universal basic predicaments. Effective scripts aim at modelling the current hopes and fears of people by means of the hero's journey. They draw virulent complexes into a soul-stirring development. In Fight Club, a nameless protagonist undertakes a journey through his psychosis. A personal conflict forces him to do so. But by following him, the viewers have an experience which deals with the directions of development of our Western civilisation. When, finally, the narrator is freed from his psychosis, for him it is a triumph and the solution of his conflict. For many viewers, however, a formation collapses at this point which they had experienced as very promising. They have enjoyed, at least from their safe seat in the cinema, having had an experience which civilisation at present does not make available to them. In my view, Fight Club is one of the most complicated and sophisticated films of recent years, in both a formal and substantial respect. But that is not supposed to mean that the concept is applicable only to such films. Film is a cultural medium. In a society which is defined by abstractions, formalisms and a diffusion of values, the entertainment media take on the task of supplying people with gripping contents. Viewed in this way, script-writers are at the cutting edge of cultural development and have the possibility of experimenting with its developmental tendencies. At present this field is occupied mainly by American authors. Already in the twenties and thirties of the 20th century, Hollywood knew that film is a medium of culture. In Europe, this conception still has problems finding adherents. Translated from the German by Michael Eldred, artefact text & translation, Cologne
Changes to local law in California cities and municipalities continue at a rapid pace. In the past few days the city of Los Angeles, and six counties and two cities in the Bay Area, have enacted new ordinances and orders in response to the COVID-19 pandemic. In addition, the U.S. Department of Labor has issued new rules in response to the pandemic. Some of the important changes are highlighted below.

On March 27, 2020, the Los Angeles City Council voted to pass three new ordinances, all aimed at helping Angelenos during the coronavirus pandemic. These new ordinances apply only to the city of Los Angeles.

The first new ordinance, titled "COVID-19 Supplemental Paid Sick Leave" (SPSL), was enacted in response to the federal Families First Coronavirus Response Act (FFCRA), which applies only to private employers with fewer than 500 employees and to all public employers. The ordinance requires most Los Angeles employers of more than 500 employees nationally to offer 80 hours of supplemental paid sick leave to all employees in Los Angeles who must take leave for reasons related to COVID-19, including: The leave runs concurrently with any leave required under FFCRA and is capped at $511 per day and $5,110 in the aggregate per employee. SPSL runs concurrently with paid leave already guaranteed under other California or Los Angeles local laws. To be eligible for SPSL, employees must have been employed with the same employer from February 3, 2020, through March 4, 2020, and must have performed some work within the city of Los Angeles. Only full-time employees (40+ hours) are eligible to receive the entire 80-hour benefit. Employees who work less than 40 hours per week are eligible for leave in an amount up to the employee's average two-week pay over the period from February 3, 2020, through March 4, 2020. Independent contractors are not covered by the SPSL. The SPSL ordinance excludes employers of health care workers and emergency responders and also exempts employees covered by a collective bargaining agreement, provided that the waiver is set forth in the agreement in clear, explicit, and unambiguous terms. The SPSL ordinance will expire on December 31, 2020, unless the City Council votes to extend it beyond that date.

The Grocery, Drug Retail, and Food Delivery Worker Protection Ordinance requires grocery store retailers and drug store retailers within the city of Los Angeles, and any food delivery platform (e.g., UberEats, DoorDash, Instacart), to approve any employee schedule change requests: The ordinance also requires Los Angeles employers to offer current employees any available additional working hours prior to seeking to hire new employees or prior to using temp agencies to fill immediate employment needs. However, the ordinance specifically states that employers need not offer employees additional hours if the additional work would push the employee's total number of hours over the overtime threshold in accordance with Section 510 of the California Labor Code. The ordinance does not cover independent contractors, but the employer has the burden of proving that the worker is an independent contractor and not an employee, and there is a presumption of employee status. Finally, the worker protection ordinance requires that food delivery platforms provide employees with a "no-contact" option for delivering food, groceries, or other covered goods. Employers must draft and provide written guidance to these employees on how to safely make a no-contact delivery.
The ordinance prohibits employers from retaliating against employees for opposing any practices pursuant to the ordinance and states that employees subjected to retaliation or discrimination will be entitled to job reinstatement, back pay, and other remedies. The ordinance is set to expire when either Governor Newsom or the mayor of Los Angeles lifts their state of emergency orders issued in response to the COVID-19 pandemic. The Grocery Shopping Priority for Elderly and Disabled Residents Ordinance requires retail food stores to restrict public access exclusively to elderly (60+) and disabled patrons during the first hour the store is open to the public or for one hour during the morning if the store is open 24 hours a day. The ordinance applies only to retail food stores that are larger than 2,500 square feet. Retail food stores that are between 2,500 and 10,000 square feet need only provide priority shopping to elderly and disabled patrons three days per week. All retail food stores larger than 10,000 square feet must provide priority access to elderly and disabled patrons every day the store is open. For purposes of the ordinance, retail food stores include: The city and county of San Francisco, along with Alameda, Contra Costa, Marin, San Mateo, and Santa Clara counties, as well as the city of Berkeley enacted new orders on March 31, 2020, to fight the spread of the pandemic. The orders contain multiple provisions, and all contain virtually the same new directives, the most notable of which is that they “extend and tighten” the stay safe at home restrictions in the Bay Area for an additional 26 days, through May 3, 2020. For employers, the new orders clarify the definition of essential businesses within the Bay Area. The order further clarifies that essential businesses must maximize the number of employees who work from home and they must scale down operations that are not essential, even if the business as a whole is deemed essential. Most significantly for employers, the orders require that all essential businesses prepare and post, no later than 11:59 p.m. on April 2, 2020, a “Social Distancing Protocol” (protocol) in each facility located within each county or city that is frequented by employees or the public. The protocol must be “substantially similar” in form to the form attached to the orders as Appendix A. While the protocol must be in a similar form as Appendix A, the protocol must be specifically tailored to the particular business and must be posted near the entrance of the facility and easily visible by the public and employees. The protocol must explain how the business is achieving the following, if applicable: These new orders and ordinances in Los Angeles and the Bay Area impact thousands of employers. To ensure compliance with these orders, employers should consult with legal counsel prior to implementing new policies and procedures. On March 16, 2020, San Francisco Mayor London Breed announced a program to provide paid sick leave to private sector employees working in San Francisco or on City-owned property who have been impacted by COVID-19. The new program includes $10 million in City funding to private employers who offer and pay employees for an additional five days of paid sick leave beyond their existing policies. Under the program, all San Francisco employers are eligible to receive the City-sponsored funds. Twenty percent of the funds are reserved for small businesses with less than 50 employees. 
The City agreed to contribute to employers up to 40 hours at the minimum wage rate of $15.59 per hour, or $623 per employee. The employer is then tasked with paying the difference between the $15.59 per hour and the employee's normal hourly rate. Employers who choose to participate in this program will be eligible for reimbursements of up to $311,176 (the equivalent of covering 499 full-time employees). The program is only available if: 1) the employee has exhausted their currently available sick leave; 2) the employee has exhausted or is not eligible for federal or state supplemental leave (such as leave under the Families First Coronavirus Response Act, effective April 1, 2020); and 3) the employer agrees to extend sick leave beyond their existing policy benefits. The sick leave is available to employees who are: The mayor announced that, if fully utilized, this program would support an additional 16,000 weeks of paid sick leave and would provide coverage for up to 25,000 San Francisco employees. The leave is available pursuant to San Francisco's Paid Sick Leave Ordinance and the guidance issued by the San Francisco Office of Labor Standards Enforcement (OLSE). For step-by-step instructions on how to apply for the Paid Sick Leave program with the Workers and Families First Program, click here. For additional questions, contact counsel or visit the San Francisco Office of Economic & Workforce Development FAQs here.

In an effort to promote flexibility while employers shift to teleworking environments, the U.S. Department of Labor (DOL) abandoned the longstanding "continuous workday" rule that typically required employers to pay employees for all hours between the first and last principal activities of the employee's workday, with the exception of any bona fide meal breaks. The DOL determined that during the COVID-19 pandemic, employers are not required to count all hours between the first and last principal activities as hours worked if the employee is teleworking for COVID-19 related reasons. Instead, the employee must only be paid for actual hours worked. For example, an employer may allow an employee to work the following schedule: 7-9 a.m., 11:30 a.m.-3 p.m., and 7-9 p.m. on weekdays to allow the employee to also help a child with their schoolwork if the child's school is closed. Under the continuous workday rule, the employee would arguably be owed 14 hours of wages. Under the DOL's new rule, however, the employee need only be paid for the hours actually worked: 7.5 hours.

While this rule is beneficial for employers, California employers must keep in mind that California Wage Orders require that employees be paid a split shift premium of one hour's wages in these scenarios. In California, a split shift occurs when: 1) a work schedule includes an unpaid block of time longer than 60 minutes (that is not a meal period); 2) the block of time interrupts two work periods; and 3) the total daily wage does not exceed the minimum wage for all hours worked, plus one hour. When a split shift occurs, employers must pay employees a premium of one hour of pay. There is no language from any California authorities intimating the split shift premium will be set aside during the pandemic. Employers should consult with counsel if offering employees modified schedules that might include a split shift.
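A short sketch may help make the two pay rules concrete. It uses the DOL example schedule quoted above and a simplified reading of the California split-shift definition (it only checks for a non-meal gap longer than an hour and ignores the minimum-wage offset in prong 3), so it is illustrative only and not legal advice.

```python
from datetime import datetime

# Illustrative sketch of the pay math discussed above. The schedule is the
# DOL telework example (7-9 a.m., 11:30 a.m.-3 p.m., 7-9 p.m.); the split-shift
# check is a simplified reading of the California definition and ignores the
# minimum-wage offset condition. Not legal advice.

def parse(t: str) -> datetime:
    return datetime.strptime(t, "%H:%M")

def hours_worked(blocks) -> float:
    return sum((parse(end) - parse(start)).total_seconds() / 3600 for start, end in blocks)

def continuous_workday_hours(blocks) -> float:
    return (parse(blocks[-1][1]) - parse(blocks[0][0])).total_seconds() / 3600

def has_split_shift(blocks, meal_break_max_hours: float = 1.0) -> bool:
    """True if any unpaid gap between work blocks exceeds one hour (i.e., is not a meal period)."""
    gaps = (
        (parse(nxt[0]) - parse(cur[1])).total_seconds() / 3600
        for cur, nxt in zip(blocks, blocks[1:])
    )
    return any(gap > meal_break_max_hours for gap in gaps)

schedule = [("07:00", "09:00"), ("11:30", "15:00"), ("19:00", "21:00")]
print(hours_worked(schedule))              # 7.5 hours actually worked (new DOL rule)
print(continuous_workday_hours(schedule))  # 14.0 hours under the continuous workday rule
print(has_split_shift(schedule))           # True, so a one-hour split shift premium may be owed
```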
https://www.cozen.com/news-resources/publications/2020/california-covid-19-developments
By Leah Pierson, Sophia Gibert, and Joseph Millum

Clinical researchers frequently collect samples of blood, skin, and other bodily tissues from their patient-participants and have samples left over when their research is complete. These biospecimens are often in high demand from other scientists who want them for their own research. How should such collections of biospecimens be distributed?

A researcher brought this question to our Bioethics Consultation Service at the U.S. National Institutes of Health. She had recently completed a project investigating rare neurodegenerative diseases and had leftover tissues from her research participants. A number of scientists were interested in using her remaining samples, but because they were limited, she couldn't give them to everyone. While trying to answer this researcher's questions, we discovered that the subject was complex, understudied, and important. Very little has been written on the ethics of allocating biospecimens, and there are few guidelines for allocators. Moreover, we found evidence that limited access to biospecimens has held back scientific progress and that the lack of guidance has had a chilling effect on researchers wishing to allocate their unused samples.

In approaching the question of how biospecimens ought to be allocated, we wanted to ensure that whatever framework we developed for allocating biospecimens would not reinforce existing research disparities. Poorer, marginalized, and otherwise disadvantaged populations have historically been overlooked by the research enterprise, leading to large research funding disparities. We concluded that biospecimens, just like research dollars, should be used in the most socially valuable ways. All else equal, research projects are more socially valuable when they benefit more disadvantaged groups. Research projects are also more socially valuable when they produce greater overall benefits. For example, all else equal, a research project that generates a cure for a disease affecting a million people is more socially valuable than a research project that cures a disease affecting a thousand people. The social value of research is therefore a function of how badly off the expected beneficiaries are and the magnitude of the expected benefits.

Our general proposal—that scientists planning to distribute biospecimens should assess the social value of competing applications—is easier said than done. There are many factors to consider, including the quantity of samples a project requires, whether it could make use of other available samples, and how likely it is to meet its scientific aims. Many scientific projects are never completed or published, and many of the promising findings generated by research fail to ever be utilized in clinical settings. Moreover, it may be daunting to assess the expected outcomes of a proposed project, especially when it involves basic science research, and to determine how medically and socially disadvantaged the project's likely beneficiaries are.

Our modest aim was to develop a framework that is both accurate, insofar as it tends to identify the most socially valuable projects, and accessible, insofar as it is straightforward for allocators to use. To some extent, there is a tradeoff between these two goals; we are curious to hear whether you think we strike the right balance. A key takeaway is that more needs to be done to ensure that biospecimens don't sit unused in lab freezers, but instead are distributed to the researchers who will put them to the best use.
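To see how such an assessment might be operationalized, here is a toy scoring function that weighs expected benefit, the disadvantage of likely beneficiaries, the chance of scientific success, and the number of samples requested. This is my own illustrative formalization of the idea sketched above, not the authors' framework, and every weight and input in it is an assumption.

```python
from dataclasses import dataclass

# A toy formalization of the idea that social value weighs both how badly off
# the expected beneficiaries are and how large the expected benefits are.
# The weighting function and all inputs are illustrative assumptions.

@dataclass
class Proposal:
    name: str
    expected_benefit: float          # e.g., health benefit if the project succeeds
    beneficiary_disadvantage: float  # 0 (well off) .. 1 (severely disadvantaged)
    success_probability: float       # chance the project meets its scientific aims
    samples_requested: int

def social_value(p: Proposal, priority_weight: float = 1.0) -> float:
    """Expected benefit, weighted upward for more disadvantaged beneficiaries."""
    priority = 1.0 + priority_weight * p.beneficiary_disadvantage
    return p.success_probability * p.expected_benefit * priority

def rank_per_sample(proposals):
    """Rank proposals by social value per requested biospecimen."""
    return sorted(proposals, key=lambda p: social_value(p) / p.samples_requested, reverse=True)

candidates = [
    Proposal("common-disease biomarker", 5000, 0.2, 0.6, 200),
    Proposal("rare neurodegenerative therapy target", 800, 0.9, 0.4, 50),
]
for p in rank_per_sample(candidates):
    print(p.name, round(social_value(p) / p.samples_requested, 2))
```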
Accomplishing this will require exploring a number of related ethical issues. For instance, when are researchers ethically required to share their samples, and what duties do they have to inform the research community about their collections? How much burden should researchers take on in order to distribute their samples fairly and in the most socially valuable ways? How should consent for specimen donation be designed so as to balance facilitating sample sharing and providing participants with adequate control over their samples? We hope our paper will draw needed attention to these questions, and we look forward to hearing your thoughts. Paper title: The Allocation of Scarce Biospecimens for Use in Research Authors: Leah Pierson1 (co)*, Sophia Gibert2 (co)*, Benjamin Berkman3,4, Marion Danis3, Joseph Millum3,5 *These authors contributed equally to the work. Affiliations: 1 MD-PhD Program, Harvard Medical School. Boston, MA (USA) 2 PhD Program, Department of Philosophy, Massachusetts Institute of Technology. Cambridge, MA (USA) 3 Department of Bioethics, Clinical Center, National Institutes of Health. Bethesda, MD (USA) 4 Bioethics Core, National Human Genome Research Institute, National Institutes of Health. Bethesda, MD (USA) 5 Fogarty International Center, National Institutes of Health. Bethesda, MD (USA) Competing interests: None declared.
https://blogs.bmj.com/medical-ethics/2020/03/30/allocating-scarce-biospecimens/
The author is an historian and, as he says, "not a rock critic", but this book is hardly a history. It is biased toward presenting both groups in an ugly light. It seems that one should have an understanding of music to properly appreciate such seminal musicians. If one looks at the artists as mere personalities, one cannot get a sense of their contribution to popular culture, music and the times in which they lived. The book is filled with conjecture, innuendo and put-downs. The author takes turns labeling all the band members, except the more mature Charlie Watts, as "thugs". In such labeling, one chapter seems to contradict the last. Despite the support between the two bands, the author makes a case for their rivalry and egos. That arguably the greatest bands of this generation had egos should come as no surprise. What is lacking in this book is any balanced account of the great musicality and performances of these artists. If anyone under forty reads this account, they will be left with a very incomplete portrait of these artists. Perhaps that is the author's intent, since so much has been written about the groups. There are better, more serious books about the sixties and about the Stones and Beatles.

Author: John McMillian
Star Count: 1/5
Format: Hard
Page Count: 304 pages
Publisher: Simon & Schuster
Publish Date: 2013-Oct-29
ISBN: 9781439159699
Issue: February 2014
Category: Music & Movies
https://sanfranciscobookreview.com/product/beatles-vs-stones/
Kurt Note: Thomas Jay Oord is a theologian I respect. His work in open and relational theology is so helpful for understanding the nature of God in relation to space and time. In this short article, Tom gives us a bit of a peek at a new way of understanding why evil and suffering exist. He also looks at why God doesn't always intervene as we'd like. Thought provoking, to be sure! This particular post is inspired by his new book, The Uncontrolling Love of God: An Open and Relational Account of Providence, which is available for pre-order now!

—————————————-

We all want to make sense of life. I think the stakes for Christians in the endeavor to make sense of life are as high as any stakes can be. I've been thinking for some time about two major questions in my quest to make sense of life. The first is familiar to just about everyone, at least in some form. Here's the form of the question I find most perplexing: "If a loving and powerful God exists, why doesn't this God prevent genuine evil?" The vast majority of answers given to this question are unsatisfying. Most Christians I know ultimately appeal to mystery when proposing an answer. This question perplexes billions of people.

The second question is less common but I think equally perplexing: "How can a loving and powerful God be providential if random and chance events occur?" In my new book, The Uncontrolling Love of God: An Open and Relational Account of Providence (InterVarsity Press Academic), I propose answers to these questions. Theology, science, Scripture, and philosophy inform my answers. Unlike most Christians, I don't appeal to mystery!

To get at my answers, I address a wide swath of issues Christians normally consider aspects of God's providence. I address the randomness and chance we encounter in the world. I claim that it is real, even for God. But I also acknowledge the rampant regularities of life, some of which have been called "the laws of nature." Thinking carefully about God's relationship to these law-like regularities is important for solving both the problem of evil and the problem of randomness in relation to providence.

In my view, too few Christians take free will seriously when thinking about providence. I believe the freedom we experience in life is real but limited. I'm a freewill theist. I also think values are real. Some events are better than others. Good and evil are not simply a matter of personal taste or individual perspective. A portion of my book addresses how we should define evil and why belief in God makes better sense than atheism for understanding good and evil. In this section, I also address the problem of good, which I think is an issue for atheists.

Of course, I'm not the first person to recommend a particular view of God's action in relation to creation. A number of models of providence exist. It helps to get clear on these models if we are to make progress in answering well the problem of evil and the problem of randomness. I identify seven major models in my book, pointing out strengths and weaknesses of each.

I'm an open and relational theologian. This form of theology comes in many varieties, but there are some common characteristics. People have come to embrace open and relational theologies from various paths. Some come primarily through their study of Scripture. Others by working out issues in the discipline of theology. Some come to open and relational theology through philosophical reflection. And others come as a consequence of their study of science.
The most common answer to the problem of evil and the problem of randomness, even among many open and relational theologians, is that God allows evil and randomness. God could control it but chooses not to do so. I don't find the "God allows it" answer satisfying. God cannot be perfectly loving if God allows evil and permits random events that God would anticipate having negative consequences. We don't think people are perfectly loving when they allow horrific evils they could have stopped. Why think that God is loving for doing the same?

The common view of those who say God allows evil and permits randomness is that God is voluntarily self-limited. God could intervene to prevent evil. God could stop a random event that will likely have negative consequences. But for some mysterious reason, this voluntarily self-limited God doesn't momentarily become un-self-limited to prevent genuine evil. The problem of evil is a problem for many open and relational theologies.

I offer a new open and relational model of providence I call "essential kenosis." It says God's love is always self-giving, others-empowering. God must love because God's nature is love. Unlike many open and relational theologians, I believe God's nature of self-giving, others-empowering love conditions and shapes God's sovereignty. To put it in philosophical language, divine love is logically prior to divine power. This means that God's self-limitation is involuntary, because God's nature of love limits what God can do. In short: God can't prevent genuine evil by acting alone, and God can't stop random events that produce evil. To put it in biblical terms, I think the Apostle Paul was right when he said that God "cannot deny himself" (2 Tim. 2:13). In other words, God's nature comes before God's choice, and God cannot do that which is ungodly. Teasing out the implications of this can make all the difference for answering well the perplexing questions of our time.

Of course, there is much more to this book than what I'm offering here. I haven't even mentioned my explanation of miracles, which gets a whole chapter in the book. I'm grateful to Kurt for allowing me to post this teaser. I hope you consider ordering The Uncontrolling Love of God and pondering my arguments! Thomas Jay Oord

———————————————————-

Thomas Jay Oord (PhD, Claremont Graduate University) is professor of theology and philosophy at Northwest Nazarene University in Nampa, Idaho. He serves as adviser or on the councils of several scholarly groups, including the Open and Relational Theologies group (AAR), BioLogos, the Institute for Research on Unlimited Love, the Research Theological Fellowship, the Wesleyan Theological Society and the Wesleyan Philosophical Society. Oord has written or edited more than twenty books, including Defining Love: A Philosophical, Scientific, and Theological Engagement, The Nature of Love: A Theology and Theologies of Creation: Creatio Ex Nihilo and Its New Rivals. He is known for his contributions to research on love, open and relational theologies, postmodernism, issues in religion and science, and Wesleyan, holiness and evangelical theologies. Oord serves as an ordained minister in the Church of the Nazarene and in various consulting and administrative roles for academic institutions, scholarly projects and research teams. He and his wife Cheryl have three daughters.
https://www.patheos.com/blogs/thepangeablog/2015/11/04/uncontrolling/
CANA at the MORS 89th Symposium

The MORS 89th Symposium is next week and CANA is contributing to the community by sharing our unique analytics applications. We will lead two special sessions, present two technical briefs, and provide an esports demonstration. The events summary below lists the topics and times you can interact with CANA.

List of Events

Improving AI/ML Department of Defense Ethical Testing - WG 35 AI and Autonomous Systems - Tuesday, June 22 - 3 pm - 3:30 pm ET

The testing of ethical principles for Machine Learning (ML) and Artificial Intelligence (AI) models that may learn with the addition of new data sources outside the traditional DoD Test and Evaluation (T&E) cycle requires a new process. This presentation proposes a T&E rubric to improve the Department of Defense ML/AI model test effectiveness for acquisition program managers and each program's Chief Developmental Tester. We evaluate 144 research papers in a DoD testing context, categorized by three broad ML classes based on data type (supervised learning, unsupervised learning, and reinforcement learning); make recommendations on what properties to test for (e.g., correctness, relevance, robustness, efficiency, fairness, interpretability); provide an idealized workflow for how to conduct testing; and present an idealized way to look at where to conduct ML component testing (e.g., data processes, frameworks, and coded algorithms). Applicable T&E methodologies, use, and policy changes are also recommended. The proposed T&E rubric is intended to support Defense Department acquisition policy in DoD 5000.02 and uses the Defense Innovation Board's AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (February 2020). This research was funded by the STAT Center of Excellence (COE).

Logistics Community of Practice Special Session - Tuesday, June 22 - 4 pm - 5 pm ET

The MORS Logistics Community of Practice is completing its third year. We focus on bringing together an eclectic group of logistics professionals across the National Security Community to discuss the hottest issues, review potential and applied solutions, and network whenever possible. This special session focuses on our 2020-2021 year in review along with discussions of where we want to take the Log COP for the 2021-2022 year. We will spend some time introducing participants and talking through current challenges. Logistics is very broad and touches many professions. Come join us for a candid talk, to meet others in the community, and to hear about where we are headed.

Women In MORS Community of Practice Special Session - Tuesday, June 22 - 4 pm - 5 pm ET

According to research studies, critical career-enhancing opportunities are shared unevenly by people in positions of power and influence, often without their realizing that certain groups are disproportionately excluded. Hard work and technical skill are the foundation of career progress, but without some access to formal and informal networking opportunities, that progress can be stalled. Are you often the only woman at the table in your meetings, or do you have few women in your organization? The Women In MORS Community of Practice invites you to a Special Session for their June COP Meeting featuring a panel presentation on effective networking strategies. The panel will feature a number of senior leaders in the national security arena, including CANA's President, Rob Cranston!
eSports Data Analysis Modeling - Rainbow Six Siege eSports Tournament - Wednesday, June 23 - 4 pm - 5 pm ET CANA eSports held a Rainbow Six Siege tournament in April 2021. The event's goal was to collect data on team tactics, determine critical skill sets, and team makeup using the data that is provided through First Person Shooters (FPS) games such as Rainbow Six Siege. This demo will review the mechanics of holding an eSports tournament event, using the event as an experiment to collect data, and the post-tournament data analysis results. An overview of the game and actual game play video is provided for context. The Distribution Network Model - Friday, June 25 - 10:30 am - 11:00 am ET The 38th Commandant’s Planning Guidance describes new naval operating concepts that present the Marine Corps with new logistics challenges: “Rather than heavily investing in expensive and exquisite capabilities that regional aggressors have optimized their forces to target, naval forces will persist forward with many smaller, low signature, affordable platforms that can economically host a dense array of lethal and non-lethal payloads.” The Marine Corps requires new logistics operating concepts to include supplying many mobile austere bases distributed over thousands of miles. The new logistics paradigm creates a network of supply and demand nodes, serviced by a wide variety of transportation types, that confounds more linear and traditional military force closure modeling approaches. The Marine Corps seeks to determine cargo and equipment prepositioning and numbers, types, and locations of traditional and non-traditional logistics enablers that are optimized to be most responsive while minimizing investment. The Distribution Network Model can determine the most responsive and lowest cost afloat and ashore tailoring plan for inventory and transportation enablers and inform investment decisions to allow selection of the most effective affordable platforms to support Marine Corps future operating concepts. In addition to these events, CANA will also have a booth if you want to learn more about what we are up to at CANA. Feel free to stop by and see what interesting analytical projects we are working on. We hope everyone attending the 89th MORS Symposium has a great time and we look forward to seeing you next week. Walt DeGrange is the Director of Analytics Capabilities. You can contact him at [email protected].
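For readers unfamiliar with this class of problem, the distribution challenge described in the brief above is essentially a minimum-cost network flow: supply afloat and ashore, demand at distributed austere bases, and transport legs with capacities and costs. The toy below uses networkx to show the general shape of such a model; the nodes, capacities, and costs are invented, and it is in no way a representation of CANA's actual Distribution Network Model.

```python
import networkx as nx

# A toy min-cost flow problem in the spirit of the distribution-network
# challenge described above. All nodes, capacities, and costs are invented.
G = nx.DiGraph()

# Negative demand = supply; positive demand = requirement (e.g., pallets per day).
G.add_node("supply_ship", demand=-30)
G.add_node("advance_base", demand=0)   # transshipment node
G.add_node("eab_1", demand=10)
G.add_node("eab_2", demand=12)
G.add_node("eab_3", demand=8)

# Edges carry capacity and weight, where weight is a rough cost per pallet moved.
G.add_edge("supply_ship", "advance_base", capacity=30, weight=2)  # connector vessel
G.add_edge("advance_base", "eab_1", capacity=15, weight=1)        # surface craft
G.add_edge("advance_base", "eab_2", capacity=15, weight=3)        # rotary wing
G.add_edge("advance_base", "eab_3", capacity=10, weight=4)
G.add_edge("supply_ship", "eab_3", capacity=10, weight=6)         # direct air delivery

flow = nx.min_cost_flow(G)         # cheapest feasible distribution plan
print(flow)
print(nx.cost_of_flow(G, flow))    # total transportation cost of that plan
```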
https://www.canallc.com/post/cana-at-the-mors-89th-symposium
Because NNC is a private and independent Center, the plight of those with Special Needs remains the source of our inspiration that encourages us to continue our work and make program improvements through the process of self-governance, self-reliance, self-evaluation, and self-discovery. We became more determined to continue when we learned that there are more than 62,000 disabled people in Binh Dinh Province, and 70% of them need assistance with education, career choices and employment. We believed in the Center’s competency and development of a stable foundation from what we had learned during 10 years of service. We also became aware of the concern from the Community. With the encouragement from our community, based on experiences, achievements, and the awakening of the whole country in the work of protecting, caring for, and educating children with disabilities, we have established an operational structure with sufficient functions to effectively assist children with disabilities integrate into the community. Our operational configuration had two Phases with the following functions and duties: PHASE 1: Education and Training: - We assume the task of healthcare and rehabilitation for disabled children during the time of their studies. - Literacy education’s duties: • Early Intervention – Integration Education – Specialized Education - Vocational Training is achieved through one of two methods: • Training at the Center. • Referring students to other vocational training centers in community. - Social and Talent Development is encouraged through painting, music, poetry, and establishing clubs to express and share their talents. PHASE 2: Post-Education and Training: - This phase is also achieved with one of two methods: - Production at the Center - Support for the Community Outreach (for individuals and Self-reliance Groups) • Support for apprentice training programs in the community. • Support capital for business startup • Support Scholarships for higher education (High school, University, College, Senior) • Support for difficult Situations (families with multiple disabilities and people with severe disabilities) • Support for adaptive equipment (wheelchairs, hand controlled bicycles and crutches or braces) • Employment, Health, Marriage & Family Counseling Methodically we implemented the above Programs and documented their effectiveness. The Binh Dinh Province People's Committee has assisted NNC over the years with facility locations to enable us to conduct our daily operations: - (1993 - 1996): Center at 18 Nguyen Hue - Ghenh Ráng - Quy Nhon - Binh Dinh. - (1997 - 2004): 100 Phan Boi Chau - Quy Nhon - Binh Dinh - (1999 - 2007): 30 Nguyen Van Be - Quy Nhon - Binh Dinh A current unaccomplished goal of the Nguyen Nga Center is to own the building for our Programs to insure long-term stability.
http://nguyennga.org/en/?option=com_content&view=article&id=50&catid=29&Itemid=129
NATIONAL INSTITUTE ON DISABILITY AND REHABILITATION RESEARCH (NIDRR)

People with disabilities want the best that science and engineering can offer. They also want research that takes a collaborative approach, incorporating issues of self-help, consumer control and respect for life experiences into the broader study of health care, rehabilitation and independent living. The National Institute on Disability and Rehabilitation Research (NIDRR) is a national leader in sponsoring research to help bring about this synthesis of scholarship, talent and practical life experience.

People can become disabled at any point in their lives. Disability may be present from birth, or result from an accident in youth, a work-related injury, the aging process or a multitude of other causes. If we, ourselves, do not experience a disability, perhaps we will be caring for a disabled child, spouse, parent or friend. The chances that we will be affected by a disability have increased due to advances in medical technology that have expanded our life expectancies. At this point, disability ranks among the nation's biggest public health concerns, encompassing an estimated 52 million Americans.

NIDRR'S MISSION

It is the mission of NIDRR to generate, disseminate and promote new knowledge to improve the options available to disabled persons. The ultimate goal is to allow these individuals to perform their regular activities in the community and to bolster society's ability to provide full opportunities and appropriate supports for its disabled citizens. Toward this end, NIDRR conducts comprehensive and coordinated programs of research and related activities to maximize the full inclusion, social integration, employment and independent living of individuals of all ages with disabilities. NIDRR's focus includes research in areas such as employment; health and function; technology for access and function; independent living and community integration; and other associated disability research areas.

Balanced between the scientific and consumer communities, NIDRR plays a unique role in federally funded research activities. As part of the scientific community, NIDRR makes an important contribution to the overall knowledge in rehabilitation medicine, engineering, psychosocial rehabilitation, integration, vocational outcomes and the virtual and built environments. In addition, NIDRR's work helps to integrate disability research into our nation's policies regarding science and technology, health care, and economics.

Created in 1978, NIDRR is located in Washington, D.C., and is one of three components of the Office of Special Education and Rehabilitative Services (OSERS) at the U.S. Department of Education. NIDRR operates in concert with the Rehabilitation Services Administration and the Office of Special Education Programs, which are service programs. This juxtaposition between service and science enhances NIDRR's role. NIDRR has unique institutional relationships with the scientific community through the Interagency Committee on Disability Research (ICDR), which the director of NIDRR chairs. In addition, NIDRR co-sponsors research programs with other federal government agencies and with foreign governments and international agencies.

NIDRR-SPONSORED ACTIVITIES

NIDRR's research is extramural, conducted through a network of individual research projects and centers of excellence located throughout the country. Most NIDRR grantees are universities or providers of rehabilitation or related services.
NIDRR's largest funding programs are the Rehabilitation Research and Training Centers (RRTCs) and the Rehabilitation Engineering Research Centers (RERCs). NIDRR also makes awards for information dissemination and utilization centers and projects, field-initiated projects, research and development projects, advanced research training projects, Switzer fellowships and model systems of research. Rehabilitation Research and Training Centers (RRTCs) The RRTCs conduct coordinated programs of research targeted toward the production of new knowledge that will improve rehabilitation methodology and service delivery systems, alleviate or stabilize disabling conditions and promote the maximum social and economic independence of individuals with disabilities. The RRTCs also conduct training and information dissemination activities. Some disabling conditions that are currently the subject of individual centers include deafness, low vision, spinal cord injury and long-term mental illness. Other RRTCs focus on cross-disability perspectives such as aging with a disability, the management of independent living centers, drugs and disability, or the particular needs of American Indians. The RRTCs also train rehabilitation personnel and other individuals to provide rehabilitation services and to conduct additional research. In addition, the RRTCs serve as a resource for researchers, people with disabilities, their families, service providers, and advocates by disseminating information and providing technical assistance through workshops, conferences and public education programs. Rehabilitation Engineering Research Centers (RERCs): The RERCs plan and conduct research leading to new scientific knowledge and new or improved methods, procedures and devices to benefit people with disabilities. They are engaged in developing and disseminating innovative methods of applying advanced technology, scientific achievement, and psychological and social knowledge, with the goal of solving rehabilitation problems and removing environmental barriers. The RERCs work at the individual level, focusing on technology to lessen the effects of sensory loss, mobility impairment, chronic pain, and communication difficulties. They also work at the systems level in such areas as eliminating barriers to fully accessible transportation, communications and housing. Partnering with industry, product developers, private sector entrepreneurs and even hobbyists, the RERCs embody the potential to make sweeping changes affecting public policy and the nature of the built and virtual environments. ADVANCES IN RESEARCH SUPPORTED BY NIDRR Research supported by NIDRR has had a tremendous impact on the lives of persons with disabilities and, at the same time, has made an essential contribution to scientific knowledge in the United States and around the globe. In addition to being responsive to the changing needs of disabled individuals, research has kept pace with medical advancements, new technology, community support initiatives, and new statutory definitions of civil rights. NIDRR-supported research also has helped encourage and educate policymakers to envision and design a society that is universally accessible and functional for all people in every stage of life. As a result, it is now possible for people with significant disabilities to live full and fulfilled lives. 
It is already commonplace to find people who are blind using computers, people who are deaf attending the theater, and people in wheelchairs traveling in planes and driving their own cars. The future holds even more promise. Medical Rehabilitation Research Taking a broader look at just one area of NIDRR-funded research — medical rehabilitation research — provides evidence of profound changes. Due to the concerted efforts of medical disability researchers, the lives, outlooks and opportunities for people with disabilities have greatly improved. By supporting research on model systems, NIDRR has enhanced the ability of rehabilitation hospitals and centers to care for, rehabilitate and reintegrate patients with spinal cord injury, traumatic brain injury, and severe burns in a shorter period of time than before. Advancements over the past 25 years have resulted in a steady rise in the life expectancy of individuals with paralysis from spinal cord injuries. Improved medical diagnoses, treatment methods and behavioral protocols, as well as enhanced rehabilitation engineering technologies for seating, cushioning, and positioning, have reduced the occurrence of decubitus ulcers, a severe secondary complication of paralysis. Similarly, due to research, a major reduction in the incidence of severe urinary tract infections has eliminated renal failure as the top-ranking cause of death for people with paralyzing conditions. Rehabilitation Engineering Research One focus of rehabilitation engineering research over the last decades has been the adaptation of new, light-weight, but durable materials for wheelchairs and for orthotic and prosthetic devices. A primary objective of engineering research is to make this equipment more serviceable and comfortable for the user. Advances in this area have allowed people with disabilities to enhance their lives, not only at home and work, but also in recreational activities. Wheelchair racers using the newest sports wheelchairs now can finish races longer than 800 meters at speeds faster than those of Olympic runners. Hundreds of athletes with disabilities compete in the Paralympic Games every two years, many through the use of state-of-the-art prosthetic arms and legs. Progress in researching these technologies has been enhanced as NIDRR has increased its RERC research portfolio. ADDITIONAL WORK FUNDED BY NIDRR With the passage of the Americans with Disabilities Act (ADA) in 1990, new requirements were placed on employers, transit and telecommunications systems, state and local governments, and public accommodations. To help businesses and public agencies comply with the ADA mandate, NIDRR funds a network of Regional Disability and Business Technical Assistance Centers that provide technical assistance, training and resource referral. NIDRR also administers the Assistive Technology Act of 1998 (Tech Act). The Tech Act provides grants to states to help bring about systems change to increase the availability of, access to, and funding for assistive technology. It also helps states provide services to rural and underrepresented populations and provide legal advocacy to disabled individuals in regard to assistive technology issues. In regard to independent living, NIDRR funds research on personal assistance services, independent living and disability policy, and such critical issues as community integration. NIDRR's agenda also includes research on employment issues and vocational rehabilitation. 
One continuing objective is to make the labor market more amenable to full employment for people with disabilities. Another is to help students with disabilities make the transition from school to work. Other Projects, Training Programs and Fellowships The Disability and Rehabilitation Research Projects carry out one or more of NIDRR's activities: research, development, demonstration, training, dissemination, utilization, and technical assistance. Dissemination and Utilization grants are provided to help transfer research and other products to policymakers, the rehabilitation community, educators, technology developers, and persons with disabilities. The topics are reflected in specific NIDRR priorities. Some examples might include: developing model care systems, the creation of a specialized dataset for the collection of clinical and scientific information, or job development and placement for agricultural workers with disabilities. Field-Initiated Projects advance rehabilitation knowledge to improve the lives of people with disabilities, complement research already planned or funded by NIDRR, or address the research in a new and promising way. The researcher proposes the topics of these projects. Some topics recently funded are aging and life adjustment after spinal cord injury, assessing the impact of managed care on rehabilitation research, and a clinical evaluation of pressure-relieving seat cushions for elderly stroke patients. The Advanced Rehabilitation Research Training Program trains physicians, therapists, rehabilitation engineers, and other professionals in research methods and statistical analysis. Small Business Innovative Research contracts help support the production of new products from development to market readiness. NIDRR also administers two types of one-year Switzer fellowships. Distinguished fellowships are for individuals of doctorate or comparable academic status with seven or more years experience relevant to rehabilitation research. Merit fellowships are for persons with considerable research experience, but who do not meet the above requirements. THE FUTURE OF DISABILITY RESEARCH Now and in the years to come, NIDRR will continue to expand its activities to reflect the emerging universe of types and causes of disability. We are only beginning to understand the interaction among the medical, environmental and societal factors that link to disability. New illnesses and conditions are constantly evolving, many of which are associated with poverty, such as low birthweight, poor medical care, lack of prenatal care, substance abuse, violence, and isolation. These factors have a high correlation to impairments and disabilities. NIDRR also will provide leadership to the scientific community and society as a whole to conceptualize disability in a new way. The disability paradigm that undergirds NIDRR's research strategy maintains that disability is an interaction between characteristics (e.g. conditions or impairments, functional status, or personal and social qualities) of an individual and characteristics of the natural, built, cultural, and social environments. NIDRR also recognizes the continuing importance of medical rehabilitation and health within the context of disability. Further, it recognizes that people with disabilities are entitled to accommodations as a civil right under the Americans with Disabilities Act. In addition, NIDRR programs will focus on health and wellness strategies for people with disabilities to continue to increase their quality of life. 
They will capitalize on new techniques such as telerehabilitation to increase the numbers and types of services offered to people in rural or underserved areas. NIDRR also will work to increase the capacity of personnel through education and training to provide better and more responsive service. Finally, NIDRR will continue to expand its activities in the international arena. Through its international authority, NIDRR currently works with collaborative research centers in India and other countries. These activities abroad help improve the skills of rehabilitation personnel in the United States through international data, and help strengthen disability leadership around the globe. For additional information, contact: THE NATIONAL INSTITUTE ON DISABILITY AND REHABILITATION RESEARCH U.S. Department of Education 400 Maryland Avenue, S.W. Washington, DC 20202-2572 Telephone: 202-205-8134 (voice) 202-205-5516 (TTY) The full text of this public domain publication is available at the Department's Website at: http://www.ed.gov/ and in alternate formats upon request. Consult these sites on the World Wide Web for further information: http://www.ed.gov/about/offices/list/osers/nidrr/ http://www.ncddr.org/ http://www.naric.com/
East Asia Study Group The University of Warwick East Asia Study Group (EASG) is a research and study group focused on the politics and international relations of the East Asian region. We aim to encourage discussion and investigation of the paradoxical challenges the East Asian region poses to traditional IR theories: why does a developed state like Japan eschew a regional security role and constrain its own military; why has China, the world’s second largest economy, enjoyed such success without adopting free market or liberal principles; and why is the Association of Southeast Asian Nations seemingly disinterested in federalisation and continues to emphasise the sovereignty of its individual members? These paradoxes challenge traditional interpretations of security, economic development, multilateralism, national identity and many more besides. The EASG invites researchers from within and beyond the university to present on issues related to these paradoxical challenges, not only deepening understanding of an often under-analysed region, but also encouraging reflection on broader theoretical debates whose interpretive frameworks can close off avenues of investigation, thereby leading to reductive analyses. By challenging traditional theories through investigating this under-analysed region, we aim to contribute to the ongoing decolonisation of the curriculum and welcome those who wish to contribute to this effort. Core Aims - To be a platform for early career researchers interested in studying East Asia. - To be a symposium which raises the visibility of debates on East Asian international relations and helps advance these debates. - To be a vehicle for challenging Eurocentric interpretations and decolonising the curriculum. The EASG is coordinated by Max Warrack, the PAIS Teaching Fellow in International Relations and Japanese Studies. Events - Research Seminars: We invite researchers from outside the university to present and answer questions on their research related to East Asia. Given our regional focus, our speakers come from a broad range of disciplinary backgrounds, so they will also be of interest to colleagues whose research lies outside East Asia. - Informal Discussion Programme: An early career researcher or colleague presents on a topic related to their research as a launch-pad for a broader discussion in a seminar environment; not a typical conference style-Q&A. The purpose is to provide an informal discursive space where students have the opportunity to learn about new topics; colleagues and students have the opportunity to exchange ideas; and colleagues have the opportunity to 'road test' an idea for a future paper, article or book. - MA Thesis Review: An annual event aimed at taught postgraduate students who are considering doing their MA dissertation on a topic related to the study of East Asia. A panel of PhD researchers will ask questions based on an abstract and a short talk, providing feedback; giving a new perspective; and encouraging reflection on the proposal, before the formal process. In addition, it provides an early experience of the academic review process for any students considering future research beyond their MA. It was held at the Oculus as an all-day event on Wednesday 19th January 2022, and the sign-up deadline was Friday 3rd December 2021. Please contact [email protected] for more information, or visit its webpage here. 
Next Research Seminar
TBD
Date: TBD
Time: TBD
Venue: TBD

Next Informal Discussion Event
TBD
Date: TBD
Time: TBD
Venue: TBD

If you have a suggestion for a research seminar speaker, would like to volunteer to host an informal discussion, or have questions about the EASG, please contact [email protected].

Join our mailing list
To receive regular updates about EASG events, join our mailing list here.

Upcoming events
View past and upcoming events here.

Seminar Archive
View recordings of past research seminars here.
https://warwick.ac.uk/fac/soc/pais/currentstudents/postgraduatephd/academicsupport/eastasiastudygroup/
Reverse transcriptase does not have an exonuclease function, so if two primers bind to one RNA, the resulting product is two short cDNAs. This is in contrast to the case of two primers binding to one strand of DNA, where DNA polymerase would degrade the primer further downstream and produce one strand of DNA. Exonucleases that degrade the ends of nucleic acid strands are important in apoptosis and in destruction of RNAs. RNAs must be degraded as part of cell feedback and regulatory mechanisms; RNA messages aren't intended to last forever. Random primers are used in the first step of RT-PCR. They don't interfere with subsequent DNA amplification because it runs at too high a temperature for the short random primers to anneal.

A cDNA (complementary DNA) library is composed entirely of genes and is much simpler than a genomic library. There are 30,000-40,000 human genes total. "Housekeeping genes" are the ones that are expressed in all cells regardless of their specific function. For example, all cells have ribosomes. "Tissue-specific" genes and "developmentally regulated" genes also exist. Chicken oviduct cells make ovalbumin (egg white) as a full 50% of their protein output. Thus 50% of mRNAs and cDNAs from these cells will be for ovalbumin. Due to this kind of unequal representation, cDNA libraries must be much larger than the total number of mRNAs in order to get complete coverage. 10^5 cDNAs is typically enough. A single-copy gene, if regularly expressed, is much easier to extract from a cDNA library than from a genomic library. Note also that a cDNA clone can be expressed in prokaryotic vectors while a genomic clone cannot, since prokaryotic species have no mechanism for removing introns. A genomic clone including introns would be copied exactly by a prokaryotic vector and would code for the wrong protein. Cloning is therefore always performed using cDNAs. cDNA libraries are more useful in general than genomic libraries but are harder to make.

Recombinant human insulin made by Eli Lilly is made by E. coli that have an insulin insert. The recombinant insulin is indistinguishable from natural insulin except that it's purer. Animal insulin can contain other proteins that cause allergies. Insulin is only 137 bp in length. If a protein needs post-translational modification in order to be functional, then E. coli may not be up to the task. Instead yeast or insect cells may be used as cloning vectors. Eukaryotic vectors may or may not splice out introns of inserts properly. At times even mammalian cells in culture may be used.

In order to understand cDNA library construction, we must understand the function of DNA polymerase in E. coli. Topoisomerase pulls the E. coli chromosome apart at the origin of replication and DNA polymerase begins synthesis. Where do the primers come from? An enzyme called primase (an RNA polymerase) makes an RNA primer starting at the origin of replication. E. coli uses DNA polymerase III to synthesize DNA from the RNA primer. Topoisomerase winds and unwinds DNA as the replication forks move along the two strands. The origin of replication is a binding site for the topoisomerase. How do the sections marked with a * (in the lecture diagram) get copied? A new RNA primer is made just downstream of the replication fork. DNA polymerase I removes the RNA primers and replaces them with DNA. DNA Pol I is a dual-function enzyme that is both a 5'->3' exonuclease for RNA and an aid to DNA synthesis. Since DNA Pol I is an exonuclease, it can perform this function only where there are "nicks" in the RNA.
A nick is a cut in the phosphodiester backbone. The bases nearby the cut are not disturbed. (A defect where the base is removed is called a "gap" rather than a "nick.") Recall that DNA polymerase can only add nucleotides to the 3' end of a primer. At a 5' end it does nothing. DNA polymerase is a big molecule compared to DNA strands, about 25 nt long.

When ordering oligos from a vendor, the 5' end doesn't automatically get a phosphate group. An "EcoR1 adaptor" can be synthesized from two primers of different lengths, only one of which has a phosphate on its 5' end. Note that the "AATTC" on the 5' end without a phosphate is the 5' overhang portion of the EcoR1 restriction site. When ligase is added to the solution containing the adaptors, the non-overhang side can attach to the end of a cDNA that has a 5' phosphate group. Because of the missing 5' phosphate, adaptors can't ligate to each other. Note that 3' ends of DNA strands never have phosphate groups but always hydroxyl groups. This after all is the difference between 5' and 3' ends. Two 5' ends can therefore never join together. Two cDNAs can also join together. Recall that "hybridization" means formation of hydrogen bonds between bases while "ligation" means formation of phosphodiester bonds in the backbone using ligase, which works on 3' (hydroxyl) and 5' (phosphate) ends.

In the cDNA library formation of Experiment 4, we clone mouse genes into Lambda Zap II. The left arm of Lambda Zap II has an EcoR1 end and the right arm has an Xho end. [See the handout for experiment 4 to learn how the Xho overhang gets on the cDNA via the linker-primer hybridized to the poly-A tail of the original mRNA.] Our cDNA can insert in only one direction, which is called "unidirectional cloning." Unidirectional cloning is desirable because Lambda Zap II has a promoter near the 3' end of the left arm. The insert must join the left arm with the start of its gene near the promoter in order for the protein to be expressed properly. Without unidirectional insertion, half of the E. coli cells would express the wrong protein. This wrong protein would have an mRNA complementary to the correct one and would produce an entirely different amino acid sequence.
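Two quantities mentioned in these notes lend themselves to small calculations: how many independent clones a cDNA library needs for good coverage, and where EcoR1 recognition sites fall in a sequence. The sketch below uses the standard Clarke-Carbon estimate for library size, which is background knowledge rather than something derived in the notes, and a made-up insert sequence; both are illustrative assumptions.

```python
import math

# Library size: the notes say ~10^5 cDNAs usually gives complete coverage.
# A standard estimate (Clarke-Carbon) is N = ln(1 - P) / ln(1 - f), where P is
# the desired probability of recovering a message and f is that message's
# fraction of the mRNA pool. The formula is not from the notes themselves.
def clones_needed(probability: float, fraction: float) -> int:
    return math.ceil(math.log(1 - probability) / math.log(1 - fraction))

# Sequence helpers for the EcoR1 (GAATTC) discussion. The insert is made up.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def find_sites(seq: str, site: str = "GAATTC"):
    """Return 0-based positions of every occurrence of a recognition site."""
    positions, start = [], 0
    while (idx := seq.upper().find(site, start)) != -1:
        positions.append(idx)
        start = idx + 1
    return positions

# A rare message at 0.01% of the pool, recovered with 99% probability:
print(clones_needed(0.99, 0.0001))         # ~46,000 clones, consistent with the 10^5 figure
insert = "ATGGAATTCCGTACCGGTACGAATTCTTAA"   # hypothetical insert sequence
print(find_sites(insert))                   # [3, 20]: internal EcoR1 sites
print(reverse_complement("AATTC"))          # 'GAATT', the strand pairing with the 5' overhang
```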
http://exerciseforthereader.org/nebmbp/day6pm.html
Founded in 2007, SurfaceScience® is a mission-driven company that strives to supply safe and easy-to-use cleaning products and surface coatings from around the globe.

Our Customers’ Values Are Our Values
We are united with our customers' values and are determined to provide and manufacture eco-friendly products that make cleaning easier, application faster, and results better!

We Don’t Cut Corners
We are committed to ensuring the materials used in our products are of premium quality and obtained from sustainable global resources and manufacturing practices dedicated to protecting the environment.
https://www.surfacescience.ca/pages/our-mission
“We just found out my daughter has Dyslexia. I’m a teacher, why didn’t I know?” Her daughter, Becky, was a high school freshman. It was the librarian who discovered it. Sarah was feeling guilty. Becky was feeling relieved to know that there was a reason she wasn’t reading with the ease she thought she should. Informal tests showed that she had Dyslexia; now that she knew what it was, she could find ways to make it more manageable.

Dyslexia is a neurodiversity in which the brain has difficulty translating images received from the eyes into understandable language. Affecting about 15% of the population, it is hereditary and has a detectable genetic indicator. Common teaching strategies are not designed for these folks, which contributes to their lack of performance in school. In truth, a large percentage of dyslexics have a well-above-average IQ. Have you ever wondered what they see? Because the brain struggles to translate what the eyes take in, it is hard to recognize short, familiar words or to sound out longer words. Decoding takes longer, is exhausting, and consequently reading comprehension is poor. Dyslexia is a language-processing difference, so it can affect some, or all, forms of language, either spoken or written.

There are specific forms of Dyslexia. Dysphonetic, or Auditory Dyslexia, is difficulty connecting sounds to symbols; people may have a hard time sounding out words, or with phonics. Dyseidetic, or Visual Dyslexia, is difficulty with whole-word recognition and spelling despite a good grasp of phonetic concepts. Dysphoneidetic, or Mixed Dyslexia, is characterized by difficulty reading (decoding) and spelling (encoding) words, either eidetically or phonetically. Dyscalculia is the inability to calculate equations due to poor math and memory skills: for example, poor mental math; difficulty handling money, making change, or recognizing math signs and symbols (+, =, –, x); trouble remembering operations (ex.: multiplication tables); or reversals, substituting other numbers, or leaving numbers out.

Parents who suspect their child has Dyslexia should first consult their school. Tests are frequently provided through the school, and involve a series of 16 different assessment tools. Consult your doctor if you are interested in a QIAamp Kit. This is a home test (you’ll need a prescription from a genetics specialist). Similar to a paternity test, swabbing can determine if parents or their children carry the hereditary traits of Dyslexia. Saliva provides all the DNA needed to test for variants in genes that have been linked to Dyslexia, such as DCDC2 on chromosome 6.

Once you know, there’s a lot you can do. First, you should know that medication is not prescribed for Dyslexia. And many school systems do not provide IEPs or 504 modifications for children with Dyslexia. One possible external aid would be ChromaGen glasses. These are eyeglasses made with specifically colored lenses to ease the decoding process. These are not always covered by insurance. Because the colors chosen are specific to the wearer, tests and prescriptions must be obtained through vision specialists. Similarly, because of the font, color and backlighting, many find e-readers (ex.: Nook or Kindle) to be very helpful. Implementing the following learning techniques can help significantly. It’s like brain training. Dyslexia will never go away, but it can be dominated.

- Utilize a multi-sensory approach in teaching, involving multiple senses at the same time: touch, sight, movement and sound.
- Use color for visual cues with written material, including a blackboard or white board, especially with mathematical symbols. Provide handouts.
- Lectures and study times should be brief, followed by movement breaks. Do not rush; make sure listeners understand before moving on.
- Repetition of key words, concepts or instructions will help with short-term memory (for most children).
- Provide more time to complete homework, exams and quizzes. Imagine having to decode a message before solving an equation; you’d need more time.
- Write homework instructions directly on the assignment, so they know what is expected, using visual cues (*, for example) to highlight important directives or information.
- Model organizational systems. Routines help to automate life in general. This will help to automate their learning success in school, at home and in life. List-making, graphic organizers or bubble mapping are creative, visually appealing ways to organize information. Utilize assignment books, color-coded folders or flash cards, calendars, post-its and journals for important things to remember.

Becky is a college graduate now, feels great about her skills and, believe it or not, loves to read! If you or your child has Dyslexia, remember 15% of our population is with you in this—you are not alone. No need to reinvent the wheel; research support agencies, websites, apps and YouTube videos for help. Or just ask your children; millennials have technology in their DNA. They can “google” it.

Forsyth Family offers this column as a way to bring awareness to various learning differences and empower parents and families for whom these differences are formally diagnosed. This is not intended to be a diagnostic tool. Concerned parents should consult a professional to determine if their child has any type of learning disabilities or challenges. See full disclaimer in bylines of this publication.
https://www.forsythfamilymagazine.com/dyslexia-defined-decoded-and-dominated/
The climate crisis is one of the central challenges of the 21st century. It threatens stability and global peace by worsening living conditions in many regions. In the Sahel zone, for example, the impact of climate change is exacerbating humanitarian crises and migration pressures, and facilitating recruitment by terrorist movements. Ambitious climate diplomacy therefore makes a critical contribution to global peace and stability. The need to contain climate-related geopolitical upheaval also calls for climate diplomacy that supports the transition to a climate-resilient, low-emission world. Contemporary international political dynamics, however, complicate efforts to achieve orderly, multilateral negotiations and solutions.

In the project ‘Global Responses, Regional Initiatives: Climate Diplomacy 2019-2020’, adelphi and the Federal Foreign Office worked to advance the existing foreign policy agenda for climate and stability. The project focused on helping shape the global framework, in particular by supporting Germany’s work in the United Nations Security Council in 2019-2020. The global Sustainable Development Agenda, with its 17 Sustainable Development Goals (SDGs), served as another important framework. The aim was to make better use of synergies between foreign policy activities and the implementation of the SDGs.

In the project, adelphi experts developed conceptual proposals for the design of international processes, provided support for dialogue with many state and non-state actors, and coordinated the international group of experts on climate security. adelphi also organised dialogue formats, carried out analyses, and wrote studies and policy briefs. This included, for example, exploring climate and conflict dynamics in Mali and the impact of water scarcity in Central Asia. This approach engaged with climate risks in a manner that is both concrete and regional, strengthening international efforts within the framework of the United Nations and developing specific recommendations for action.

In addition, adelphi employed innovative public diplomacy tools – like the digital information platform Climate Diplomacy – to promote ambitious climate protection and systematic foreign policy communication. adelphi also made use of diverse event formats such as a modular mobile exhibition as well as role-playing games and continuing education programs for diplomats and decision-makers. The project was designed so that the different elements engage with one another. Knowledge exchange and networking among a wide range of governmental and non-governmental stakeholders also played a key role.
https://adelphi.de/en/project/global-responses-regional-initiatives-climate-diplomacy-2019-2020
Introduction {#s1}
============

The last few years have witnessed a surge of research on the study of the physiological function and *in vivo* substrates of the fat mass and obesity associated (FTO) gene (for a review see [@pone.0095499-Fawcett1]). Recent interest in the FTO gene stems from studies demonstrating an association between a single nucleotide polymorphism in the first intron of the gene with obesity-related traits and higher obesity risk in different human populations [@pone.0095499-Dina1]--[@pone.0095499-Peeters1]. From a molecular point of view, FTO has been characterized as a 2-oxoglutarate dependent dioxygenase that is involved in nucleic acid modification [@pone.0095499-Gerken1]. In mice, global deletion of FTO has been linked to postnatal growth retardation [@pone.0095499-Gao1], reduction in adipose tissue [@pone.0095499-Fischer1], reduction in lean mass [@pone.0095499-Fischer1], [@pone.0095499-McMurray1] and increased energy expenditure [@pone.0095499-Fischer1], [@pone.0095499-McMurray1], thus supporting the involvement of FTO in energy metabolism and body weight regulation.

FTO is ubiquitously expressed. In the brain, strong expression is seen in the hippocampus, cerebellum and hypothalamus [@pone.0095499-Gerken1], [@pone.0095499-Gao1], [@pone.0095499-Lein1]. The hypothalamic expression of FTO suggests a potential role of this gene in the regulation of autonomic function [@pone.0095499-Dampney1], [@pone.0095499-Nunn1]. The paraventricular and dorsomedial nuclei of the hypothalamus, which show particularly high expression of FTO, are key modulators of sympathetic outflow [@pone.0095499-Dampney1], [@pone.0095499-Nunn1]. Interestingly, preliminary evidence seems to connect FTO deficiency in mice with increased sympathetic nervous system activity [@pone.0095499-Church1]. FTO is also expressed in many other tissues including the heart, albeit at substantially lower levels [@pone.0095499-Gerken1], [@pone.0095499-Gao1].

Given these considerations, the principal objective of the present study was to investigate the potential role of the FTO gene in the autonomic neural regulation of cardiac function, by means of a mouse model of FTO deficiency. Based on the above observations, we hypothesized that global knockout of FTO would lead to an increased sympathetic excitation of the heart. To test this hypothesis, sympathetic and parasympathetic (vagal) influences on the heart were assessed during resting and stress conditions via time- and frequency-domain analysis of heart rate variability (HRV). We also evaluated whether the putative cardiac sympathetic hyperactivity in FTO knockout mice was associated with increased arrhythmia vulnerability, and investigated potential mediating mechanisms at the electrical and structural level of the heart.

Methods {#s2}
=======

Ethics Statement and Animals {#s2a}
----------------------------

All experimental procedures and protocols were approved by the Veterinarian Animal Care and Use Committee of Parma University and conducted in accordance with the European Community Council Directives of 22 September 2010 (2010/63/UE). Experiments were performed on 4-month-old male homozygous knockout (Fto^−/−^, n = 12) and wild-type (Fto^+/+^, n = 8) mice obtained from the Mouse Biology Unit of the European Molecular Biology Laboratory in Monterotondo, where they had been generated from crosses between heterozygous animals.
Fto^−/−^ mice were created using homologous recombination as described previously in detail [@pone.0095499-Fischer1] and maintained on the original C57BL/6N background. On their arrival in our laboratory, mice were housed in groups of 3--4 per cage and kept at an ambient temperature of 22--24°C on a reversed 12∶12 light-dark cycle (light on at 19∶00 h), with food and water *ad libitum*.

Radiotelemetry System {#s2b}
---------------------

A radiotelemetry system (Data Sciences International, St. Paul, MN, USA) was used for recording ECG (sampling rate 2 kHz), temperature and activity (sampling rate 256 Hz) signals. It consisted of wireless transmitters (TA10ETA-F20) and platform receivers (RPC-1), which were controlled by the ART-Gold 1.10 data acquisition system. The transmitters were implanted in the experimental mice according to a procedure described by Sgoifo and colleagues [@pone.0095499-Sgoifo1]. The surgery was performed under isoflurane (2% in 100% oxygen) anesthesia. The transmitter body was placed in the abdominal cavity; one electrode was fixed to the dorsal surface of the xyphoid process and the other electrode was placed in the anterior mediastinum close to the right atrium. Such electrode location guarantees high-quality ECG recordings, even during vigorous physical activity. Immediately after surgery, mice were individually housed, injected for 2 days with Gentamicin sulphate (Aagent, Fatro, 0.2 mg/kg, s.c.) and allowed 10 days of recovery before the start of experimental recordings.

General Experimental Outline {#s2c}
----------------------------

Following recovery from surgery, mice were left undisturbed in their home cages for 5 days for collection of baseline daily rhythms of heart rate (HR, expressed as beats per minute (bpm)), temperature (T, °C) and locomotor activity (LOC, expressed as counts per minute (cpm)). Subsequently, mice were submitted on different days to: i) two acute stress challenges, namely injection of saline (day 1) and restraint test (day 4); ii) epicardial mapping (day 7). These tests were carried out between 10∶00 and 14∶00 (i.e., the dark phase of the light/dark cycle). At sacrifice, the hearts were excised for structural and morphological analyses. Specific experimental procedures and data analysis are described in the following sections.

Baseline Daily Rhythms {#s2d}
----------------------

ECG, T and LOC were sampled around-the-clock for 2 minutes every hour over a period of 5 days for collection of baseline daily rhythms. Data analysis was performed as follows. Separate estimates of HR, T and LOC were initially generated for each 2-min recording period and subsequently averaged as mean values of the 12 h-light and 12 h-dark daily phases. These parameters were then further averaged as means of the 5 days for the light and dark phases.

Acute Stress Challenges {#s2e}
-----------------------

On day 1, Fto^+/+^ and Fto^−/−^ mice received a subcutaneous injection of saline (0.9% NaCl, vol: 1 ml/kg). Continuous ECG recordings were performed prior to (30 min, baseline conditions) and following (60 min) the injection, with the mice in their home cages. On day 4, Fto^+/+^ and Fto^−/−^ mice were placed for 15 min in a cylindrical plastic restrainer fitted closely to the body size (inner diameter 4 cm; length 12 cm) and closed at both ends by removable partitions with holes for air circulation.
Continuous ECG recordings were performed prior to the test (30 min, with the mice in their home cages (baseline conditions)), during the restraint test (15 min) and throughout the recovery period (45 min, with the mice in their home cages). Data analysis was conducted as follows. Initially, we split each recording period into 3-min epochs (0--3 min, 3--6, etc.). For each epoch, separate estimates of HR, HRV indexes, T and LOC were generated. Time- and frequency-domain parameters of HRV were quantified using ChartPro 5.0 software (ADInstruments, Sydney, Australia), following the guidelines suggested by Thireau and colleagues for the assessment of HRV parameters in mice [@pone.0095499-Thireau1]. In the time-domain, we obtained the root mean square of successive R-R interval differences (RMSSD, ms), which estimates the activity of the parasympathetic nervous system [@pone.0095499-Stein1]. For spectral (frequency-domain) analysis of HRV, a power spectrum was obtained with a fast Fourier transform-based method (Welch's periodogram: 256 points, 50% overlap, and Hamming window). We considered: i) the total power of the spectrum (ms^2^), which reflects all the cyclic components responsible for variability, ii) the power (ms^2^) of the low frequency band (LF, 0.15--1.5 Hz), which is a non-specific index as it contains contributions of both the sympathetic and parasympathetic influences [@pone.0095499-Eckberg1], iii) the power (ms^2^) of the high frequency band (HF; 1.5--5.0 Hz), which is due to the activity of the parasympathetic nervous system and includes respiration-linked oscillations of HR [@pone.0095499-Berntson1], and iv) the low frequency/high frequency ratio (LF/HF), which estimates the fractional distribution of power and is taken as a synthetic measure of sympathovagal balance [@pone.0095499-Task1].

In addition, ECG signals obtained during baseline, pre-saline-injection recordings were further analyzed as follows. Three 2-s segments of high ECG quality [@pone.0095499-Carnevali1] were randomly selected from each 3-min epoch in order to quantify the duration of: i) the P wave; ii) the PQ segment; iii) the QRS complex; iv) the QTc, which is the QT interval normalized to cycle length. Lastly, the occurrence of arrhythmic events was determined and quantified off-line based on the Lambeth Conventions for the study of experimental arrhythmias [@pone.0095499-Curtis1]. We determined and quantified the separate occurrence of supraventricular (SV) and ventricular (V) ectopic beats and the total number of tachyarrhythmic events in baseline and challenge conditions.

Epicardial Mapping {#s2f}
------------------

On day 7, mice were anesthetized with Xylazine (10 mg/kg, i.p.) and Ketamine (50 mg/kg, i.p.). Subsequently, the heart was exposed through a longitudinal sternotomy. An epicardial electrode array (5×5 rows and columns with a 0.6 mm resolution square mesh) was used to record unipolar epicardial electrograms during sinus rhythm and ventricular pacing in order to determine cardiac excitability, conduction velocity of the electrical impulse, and refractoriness [@pone.0095499-Colussi1]. The epicardial mapping protocol was prematurely interrupted in two Fto^+/+^ and three Fto^−/−^ mice because of technical difficulties that precluded accurate recording. Therefore, data analysis was conducted in 6 Fto^+/+^ and 9 Fto^−/−^ mice as follows.
### i) Excitability {#s2f1}

The strength-duration curve was obtained as a measure of cardiac excitability [@pone.0095499-Fozzard1] at 5 selected electrodes of the array, as described previously in detail [@pone.0095499-Colussi1]. The strength-duration curve is represented by the equation I = Rh(1+Chr/T), where I is the threshold current strength, T is the pulse duration, Rh is the rheobase (i.e., the lowest intensity with infinite pulse duration which succeeds in eliciting a propagated response in excitable tissues), and Chr is the chronaxie (i.e., the pulse duration having a threshold twice that of Rh).

### ii) Conduction velocity {#s2f2}

Activation sequences (isochrone maps) were computed from the activation times of paced beats using custom written software, and conduction velocities longitudinally and transversally to fiber orientation were calculated from them, as previously described [@pone.0095499-Colussi1].

### iii) Refractoriness {#s2f3}

Ten baseline stimuli (S1), 1 ms width and twice diastolic threshold intensity, were delivered at each of the 5 selected electrodes of the array at a frequency slightly higher than the basal cycle length, as in [@pone.0095499-Colussi1]. The S1 pacing sequence was followed by an extra-stimulus (S2, four-fold S1 intensity) whose delay from the previous S1 was first progressively decremented in 10 ms steps until capture was lost and then progressively incremented in 2 ms steps until capture was resumed. We considered: i) the effective refractory period (ERP), which was defined as the shortest S1--S2 time interval at which excitation from S2 failed, and ii) the spatial dispersion of the ERP, measured as the maximum difference (range) and the standard deviation (SD) of the mean [@pone.0095499-Burton1].

Post Mortem Measurements {#s2g}
------------------------

Upon completion of the epicardial mapping, the heart was arrested in diastole by injection of cadmium chloride solution (100 mM, i.v.). The hearts of the 6 Fto^+/+^ and 9 Fto^−/−^ mice that concluded the mapping protocol were removed from the chest and fixed in 10% buffered formalin solution.

### i) Cardiac anatomy {#s2g1}

After 24 h, the free walls of the right ventricle (RV) and the left ventricle (LV) inclusive of the interventricular septum were separated and their weights recorded. These data and heart weight (HW) were normalized to body weight (BW).

*ii)* Transverse sections of the LV were paraffin embedded; 5-µm-thick sections were then cut and stained with Haematoxylin & Eosin or Masson's Trichrome following procedures that have been described previously in detail [@pone.0095499-Costoli1], [@pone.0095499-Trombini1] in order to evaluate: i) the volume fraction of myocytes, ii) the total amount of fibrosis, and iii) interstitial extension. Specifically, the number of points overlying each tissue component was counted and expressed as a percentage of the total number of points explored. All these morphometric measurements were obtained with the aid of a grid defining a tissue area of 0.23 mm^2^ and containing 42 sampling points, each covering an area of 0.0052 mm^2^.

Statistics {#s2h}
----------

All statistical analyses were performed using the software package SPSS (version 20).
Two-way ANOVA for repeated measures with group as between-subject factor (2 levels: Fto^+/+^ and Fto^−/−^) was applied for data obtained from: i) baseline daily rhythms, with time as within-subject factor (2 levels: light and dark phases); ii) injection of saline, with time as within-subject factor (4 levels: baseline; post-injection 1, 2, and 3); iii) restraint test, with time as within-subject factor (5 levels: baseline; test; recovery 1, 2, and 3). Follow-up analyses were conducted using Student's "t" tests, with a Bonferroni correction for multiple comparisons for each outcome variable separately. A priori Student's "t"-tests, after controlling for homogeneity of variance via Levene's test, were applied for comparisons between Fto^+/+^ and Fto^−/−^ mice on: i) the occurrence of arrhythmic events; ii) data obtained from epicardial mapping; iii) measurements at sacrifice. Data are presented as means ± standard error of the mean (SEM). Statistical significance was set at p\<0.05.

Results {#s3}
=======

Baseline Daily Rhythms {#s3a}
----------------------

The daily rhythms of HR, T and LOC in wild-type and Fto^−/−^ mice during resting conditions are depicted in [Figure 1](#pone-0095499-g001){ref-type="fig"}. Two-way ANOVA yielded a significant effect of i) group on HR (F = 5.2, p\<0.05) and LOC (F = 11.3, p\<0.01) values and ii) time on HR (F = 150.3, p\<0.01), T (F = 96.7, p\<0.01) and LOC (F = 49.4, p\<0.01) values.

![Daily rhythms of heart rate, body temperature and locomotor activity.\ For the 12 h-light and 12 h-dark phases, values are reported as means ± SEM of data obtained by averaging multiple 2-min segments acquired every hour over a period of 5 days in Fto^+/+^ (n = 8) and Fto^−/−^ (n = 12) mice. \* and ^\#^ indicate a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05 and p\<0.01, respectively).](pone.0095499.g001){#pone-0095499-g001}

Fto^−/−^ mice had significantly higher values of HR than Fto^+/+^ counterparts during both the light (t = 2.2, p\<0.05) and the dark (t = 2.2, p\<0.05) phases of the circadian cycle ([Figure 1A](#pone-0095499-g001){ref-type="fig"}). In addition, Fto^−/−^ mice had higher values of T than Fto^+/+^ counterparts, although statistical significance was reached only during the dark phase (t = 2.1, p\<0.05) ([Figure 1B](#pone-0095499-g001){ref-type="fig"}). Likewise, Fto^−/−^ mice exhibited higher values of LOC than Fto^+/+^ counterparts during the dark phase (t = 3.5, p\<0.01), whereas during the light phase the two groups showed similar LOC values ([Figure 1C](#pone-0095499-g001){ref-type="fig"}).

Injection of Saline {#s3b}
-------------------

Cardiac autonomic responses to the injection of saline are depicted in [Figure 2](#pone-0095499-g002){ref-type="fig"} and detailed in [Table 1](#pone-0095499-t001){ref-type="table"}. Two-way ANOVA yielded: i) a significant effect of group on HR (F = 5.0, p\<0.05) and LF/HF (F = 4.3, p = 0.05) values, ii) a significant effect of time on HR (F = 27.8, p\<0.01), RMSSD (F = 15.8, p\<0.01), LF (F = 13.4, p\<0.01), HF (F = 5.3, p\<0.05) and LOC (F = 10.6, p\<0.01) values and iii) a time × group interaction on T values (F = 4.9, p\<0.05).

![Cardiac autonomic response to the injection of saline.\ Time course of changes in heart rate (panel A), RMSSD values (panel B), high frequency (HF) spectral power (panel C) and LF to HF ratio (panel D) following the injection of saline, in Fto^+/+^ (n = 8) and Fto^−/−^ (n = 12) mice.
Baseline reference value (bas) is the mean value of the ten 3-min time points in resting conditions. Values are expressed as means ± SEM. \* and ^\#^ indicate a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05 and p\<0.01, respectively).](pone.0095499.g002){#pone-0095499-g002}

10.1371/journal.pone.0095499.t001

###### Radiotelemetric and HRV parameters in response to the saline injection test.

![](pone.0095499.t001){#pone-0095499-t001-1}

|                     |     | Basal      | Post-Injection (min 0--15) | Post-Injection (min 15--30) | Post-Injection (min 30--45) | Post-Injection (min 45--60) |
|---------------------|-----|------------|----------------------------|-----------------------------|-----------------------------|-----------------------------|
| HR (bpm)            | +/+ | 559±13     | 690±13                     | 642±16                      | 612±8                       | 544±15                      |
|                     | −/− | 618±14^\#^ | 731±7^\#^                  | 675±9                       | 605±14                      | 568±17                      |
| Total power (ms^2^) | +/+ | 55.8±11.5  | 21.1±5.6                   | 32.8±6.1                    | 40.3±6.1                    | 55.6±9.2                    |
|                     | −/− | 39.2±5.1   | 16.6±3.8                   | 19.4±3.6                    | 38.4±8.2                    | 47.7±9.0                    |
| RMSSD (ms)          | +/+ | 4.1±0.6    | 2.0±0.2                    | 3.0±0.4                     | 3.3±0.4                     | 4.9±0.6                     |
|                     | −/− | 3.0±0.2\*  | 1.7±0.2                    | 2.2±0.2                     | 3.2±0.3                     | 3.7±0.5                     |
| HF power (ms^2^)    | +/+ | 6.4±1.7    | 1.7±0.3                    | 2.8±0.5                     | 3.6±0.6                     | 7.1±1.1                     |
|                     | −/− | 3.1±0.4\*  | 1.2±0.3                    | 1.5±0.3                     | 3.0±0.6                     | 5.0±1.4                     |
| LF power (ms^2^)    | +/+ | 19.3±4.6   | 7.2±1.5                    | 14.1±3.1                    | 17.7±3.6                    | 28.2±5.9                    |
|                     | −/− | 13.1±1.7   | 7.2±1.7                    | 9.2±1.4                     | 13.7±2.4                    | 16.6±3.6                    |
| LF/HF               | +/+ | 3.1±0.3    | 4.4±0.6                    | 4.9±0.7                     | 4.6±0.6                     | 3.7±0.4                     |
|                     | −/− | 4.5±0.4\*  | 6.2±0.6\*                  | 6.1±0.4                     | 4.5±0.4                     | 4.4±0.6                     |
| T (°C)              | +/+ | 36.4±0.1   | 36.8±0.2                   | 37.1±0.1                    | 37.0±0.2                    | 36.8±0.1                    |
|                     | −/− | 36.8±0.1\* | 37.2±0.1                   | 37.4±0.1                    | 37.0±0.1                    | 36.7±0.1                    |
| LOC (cpm)           | +/+ | 2.3±0.5    | 11.5±2.7                   | 2.3±0.9                     | 4.6±2.2                     | 3.7±1.2                     |
|                     | −/− | 4.2±1.3    | 12.4±2.1                   | 5.5±1.8                     | 3.2±1.3                     | 2.8±1.2                     |

Values are reported as means ± SEM of data obtained by averaging multiple 3-min segments acquired in baseline conditions (30 min) and following the injection of saline (60 min) in Fto^+/+^ (n = 8) and Fto^−/−^ mice (n = 12). Abbreviations: HRV = heart rate variability; HR = heart rate; RMSSD = square root of the mean squared differences of successive RR intervals; HF = high-frequency; LF = low-frequency; T = body temperature; LOC = locomotor activity. \* and ^\#^ indicate a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05 and p\<0.01, respectively).

Before the test, baseline HR was significantly higher in Fto^−/−^ than in Fto^+/+^ mice (t = 2.9, p\<0.01) ([Figure 2A](#pone-0095499-g002){ref-type="fig"} and [Table 1](#pone-0095499-t001){ref-type="table"}). In the same period, HRV analysis revealed i) significantly lower values of RMSSD (t = −2.2, p\<0.05) and HF spectral power (t = −2.2, p\<0.05) in Fto^−/−^ mice compared to Fto^+/+^ counterparts ([Figure 2B, C](#pone-0095499-g002){ref-type="fig"} and [Table 1](#pone-0095499-t001){ref-type="table"}) and ii) a significantly higher LF to HF ratio in Fto^−/−^ than in Fto^+/+^ mice (t = 2.7, p\<0.05) ([Figure 2D](#pone-0095499-g002){ref-type="fig"} and [Table 1](#pone-0095499-t001){ref-type="table"}).

During the first 15 min that followed the injection of saline, mean HR was significantly higher in Fto^−/−^ than in Fto^+/+^ mice (t = 2.9, p\<0.01) ([Figure 2A](#pone-0095499-g002){ref-type="fig"} and [Table 1](#pone-0095499-t001){ref-type="table"}). In the same period, no differences were found in RMSSD and HF spectral power values between the two groups ([Figure 2B, C](#pone-0095499-g002){ref-type="fig"} and [Table 1](#pone-0095499-t001){ref-type="table"}). However, the LF to HF ratio was significantly higher in Fto^−/−^ than in Fto^+/+^ mice (t = 2.1, p\<0.05) ([Figure 2D](#pone-0095499-g002){ref-type="fig"} and [Table 1](#pone-0095499-t001){ref-type="table"}).
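For readers who want to reproduce the HRV indexes reported in these tables, the sketch below re-implements the definitions given in the Methods (RMSSD in the time domain; Welch periodogram with a Hamming window, 256-point segments and 50% overlap in the frequency domain; LF = 0.15--1.5 Hz, HF = 1.5--5.0 Hz). It is only an illustrative reconstruction in Python/SciPy: the original analysis was performed with ChartPro 5.0, and the 20 Hz resampling rate of the R-R tachogram is an assumption, not a parameter taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def hrv_indexes(rr_ms, fs_interp=20.0):
    """Time- and frequency-domain HRV indexes from R-R intervals (ms),
    following the definitions in the Methods. fs_interp (Hz) is an
    assumed resampling rate for the evenly spaced tachogram."""
    rr = np.asarray(rr_ms, dtype=float)

    # Time domain: RMSSD, an estimate of vagal (parasympathetic) modulation.
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

    # Frequency domain: interpolate the tachogram onto an even time grid,
    # then estimate the power spectrum with Welch's periodogram
    # (256 points, 50% overlap, Hamming window, as stated in the Methods).
    t = np.cumsum(rr) / 1000.0                       # beat times in s
    t_even = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_even = np.interp(t_even, t, rr)
    f, psd = welch(rr_even - rr_even.mean(), fs=fs_interp,
                   window="hamming", nperseg=256, noverlap=128)

    def band_power(lo, hi):
        band = (f >= lo) & (f < hi)
        return np.trapz(psd[band], f[band])          # power in ms^2

    lf = band_power(0.15, 1.5)                       # LF band
    hf = band_power(1.5, 5.0)                        # HF band
    return {"RMSSD": rmssd, "LF": lf, "HF": hf, "LF/HF": lf / hf}
```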
In both groups, the total incidence of tachyarrhythmic events during baseline recordings was almost null (Fto^−/−^ = 0.5±0.3 events vs. Fto^+/+^ = 0.9±0.0 events). Following the injection of saline, the total incidence of tachyarrhythmic events was significantly larger in Fto^−/−^ mice compared to Fto^+/+^ counterparts (t = 2.1, p\<0.05) ([Figure 3B](#pone-0095499-g003){ref-type="fig"}).

![Susceptibility to cardiac tachyarrhythmias.\ Panel A shows an example of ECG traces belonging to a representative Fto^−/−^ mouse with isolated supraventricular (SV) and ventricular (V) ectopic beats. Panels B and C report the incidence of tachyarrhythmias following the injection of saline and during the restraint test, respectively, in Fto^+/+^ (n = 8) and Fto^−/−^ (n = 12) mice. Values are reported as means ± SEM. \* indicates a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05).](pone.0095499.g003){#pone-0095499-g003}

Restraint Test {#s3c}
--------------

Cardiac autonomic responses to the restraint test are depicted in [Figure 4](#pone-0095499-g004){ref-type="fig"} and detailed in [Table 2](#pone-0095499-t002){ref-type="table"}. Two-way ANOVA yielded significant effects of i) group on HR (F = 4.4, p = 0.05) and LF/HF (F = 17.6, p\<0.01) values and ii) time on HR (F = 13.9, p\<0.01), RMSSD (F = 12.7, p\<0.01), total power (F = 18.3, p\<0.01), LF (F = 28.2, p\<0.01) and HF (F = 16.8, p\<0.01) values.

![Cardiac autonomic response to restraint test.\ Time course of changes in heart rate (panel A), RMSSD values (panel B), high frequency (HF) spectral power (panel C), and LF to HF ratio (panel D) during the restraint test and the recovery phase, in Fto^+/+^ (n = 8) and Fto^−/−^ (n = 12) mice. Baseline reference value (bas) is the mean value of the ten 3-min time points in resting conditions. Values are expressed as means ± SEM. \* and ^\#^ indicate a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05 and p\<0.01, respectively).](pone.0095499.g004){#pone-0095499-g004}

10.1371/journal.pone.0095499.t002

###### Radiotelemetric and HRV parameters in response to the restraint test.

![](pone.0095499.t002){#pone-0095499-t002-2}

|                     |     | Basal       | Restraint | Recovery (min 0--15) | Recovery (min 15--30) | Recovery (min 30--45) |
|---------------------|-----|-------------|-----------|----------------------|-----------------------|-----------------------|
| HR (bpm)            | +/+ | 571±18      | 767±8     | 719±10               | 633±29                | 570±27                |
|                     | −/− | 614±11\*    | 789±6\*   | 727±14               | 669±15                | 627±16                |
| Total power (ms^2^) | +/+ | 59.8±13.4   | 5.7±1.9   | 23.8±4.3             | 72.6±16.1             | 90.6±18.5             |
|                     | −/− | 47.6±7.9    | 3.4±0.4   | 28.9±5.2             | 46.3±9.9              | 52.3±17.3             |
| RMSSD (ms)          | +/+ | 4.6±0.5     | 1.0±0.1   | 1.9±0.2              | 4.2±1.0               | 5.7±1.2               |
|                     | −/− | 3.3±0.3\*   | 0.7±0.1   | 2.2±0.3              | 3.1±0.5               | 3.9±0.7               |
| HF power (ms^2^)    | +/+ | 6.9±1.3     | 0.5±0.2   | 1.5±0.2              | 9.3±2.2               | 14.2±2.9              |
|                     | −/− | 3.7±0.6\*   | 0.2±0.1   | 2.4±0.5              | 4.7±1.9               | 7.1±3.6               |
| LF power (ms^2^)    | +/+ | 17.4±3.0    | 2.4±0.8   | 7.2±1.3              | 26.6±5.0              | 36.2±6.8              |
|                     | −/− | 15.9±2.2    | 1.3±0.2   | 12.4±2.5             | 18.6±5.0              | 26.5±8.7              |
| LF/HF               | +/+ | 2.6±0.3     | 4.7±0.5   | 4.6±0.5              | 3.3±0.6               | 2.8±0.2               |
|                     | −/− | 4.6±0.4^\#^ | 6.7±0.7\* | 5.5±0.6              | 5.0±0.5\*             | 5.1±0.6^\#^           |
| T (°C)              | +/+ | 36.7±0.1    | 37.1±0.3  | 37.5±0.2             | 37.1±0.1              | 36.7±0.2              |
|                     | −/− | 36.7±0.1    | 37.5±0.2  | 37.9±0.2             | 37.5±0.2              | 37.1±0.3              |
| LOC (cpm)           | +/+ | 1.6±0.5     | 5.7±1.3   | 13.2±2.9             | 3.9±1.8               | 3.3±1.8               |
|                     | −/− | 3.4±0.7     | 4.6±0.7   | 16.5±2.5             | 6.4±2.0               | 5.1±2.0               |

Values are reported as means ± SEM of data obtained by averaging multiple 3-min segments acquired in baseline conditions (30 min), during the restraint (15 min) and the recovery phase (45 min) in Fto^+/+^ (n = 8) and Fto^−/−^ mice (n = 12).
Abbreviations: HRV = heart rate variability; HR = heart rate; RMSSD = square root of the mean squared differences of successive RR intervals; HF = high-frequency; LF = low-frequency; T = body temperature; LOC = locomotor activity. \* and ^\#^ indicate a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05 and p\<0.01, respectively).

Before the test, baseline HR was significantly higher in Fto^−/−^ than in Fto^+/+^ mice (t = 2.1, p\<0.05) ([Figure 4A](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}). In the same period, HRV analysis revealed i) significantly lower values of RMSSD (t = −2.3, p\<0.05) and HF spectral power (t = −2.4, p\<0.05) in Fto^−/−^ mice compared to Fto^+/+^ counterparts ([Figure 4B, C](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}) and ii) a significantly higher LF to HF ratio in Fto^−/−^ than in Fto^+/+^ mice (t = 4.2, p\<0.01) ([Figure 4D](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}).

During the test, mean HR was significantly higher in Fto^−/−^ than in Fto^+/+^ mice (t = 2.2, p\<0.05) ([Figure 4A](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}). In the same period, no differences were found in RMSSD and HF spectral power values between the two groups ([Figure 4B, C](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}). However, the LF to HF ratio was significantly higher in Fto^−/−^ than in Fto^+/+^ mice (t = 2.3, p\<0.05) ([Figure 4D](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}).

During the recovery phase, no differences between Fto^−/−^ and Fto^+/+^ mice were found in mean HR ([Figure 4A](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}), or in RMSSD and HF spectral power values ([Figure 4B, C](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}). However, the LF to HF ratio was significantly higher in Fto^−/−^ than in Fto^+/+^ mice during the second (t = 2.3, p\<0.05) and third (t = 3.8, p\<0.01) 15-min recording periods ([Figure 4D](#pone-0095499-g004){ref-type="fig"} and [Table 2](#pone-0095499-t002){ref-type="table"}).

In both groups, the total incidence of tachyarrhythmic events during baseline recordings was almost null (Fto^−/−^ = 0.5±0.3 events vs. Fto^+/+^ = 0.6±0.4 events). During the restraint test, the total incidence of tachyarrhythmic events was significantly larger in Fto^−/−^ mice compared to Fto^+/+^ counterparts (t = 2.9, p\<0.05) ([Figure 3C](#pone-0095499-g003){ref-type="fig"}).

Cardiac Intervals {#s3d}
-----------------

The duration of the P wave and PQ segment was significantly shorter in Fto^−/−^ mice compared to Fto^+/+^ counterparts (P wave: t = −7.8, p\<0.01; PQ segment: t = −5.4, p\<0.01) ([Figure 5](#pone-0095499-g005){ref-type="fig"}). Likewise, QRS complex duration was significantly shorter in Fto^−/−^ than in Fto^+/+^ mice (t = −2.1, p\<0.05) ([Figure 5](#pone-0095499-g005){ref-type="fig"}). On the other hand, the duration of the QTc was significantly longer in Fto^−/−^ mice compared to Fto^+/+^ counterparts (t = 6.5, p\<0.01) ([Figure 5](#pone-0095499-g005){ref-type="fig"}).

![Cardiac interval duration.\ Values are expressed as means ± SEM.
\* and ^\#^ indicate a significant difference between Fto^+/+^ (n = 8) and Fto^−/−^ (n = 12) mice (p\<0.05 and p\<0.01, respectively).](pone.0095499.g005){#pone-0095499-g005}

Epicardial Mapping {#s3e}
------------------

### Excitability {#s3e1}

Rheobase and chronaxie values, which were determined from the strength-duration curve, were similar between Fto^−/−^ and Fto^+/+^ mice (rheobase: Fto^−/−^ = 31.4±5.0 µA vs. Fto^+/+^ = 31.4±2.7 µA; chronaxie: Fto^−/−^ = 1.6±0.2 ms vs. Fto^+/+^ = 1.6±0.2 ms).

### Conduction velocity {#s3e2}

Longitudinal ventricular conduction velocity was significantly faster in the heart of Fto^−/−^ mice compared with the heart of Fto^+/+^ mice (Fto^−/−^ = 0.511±0.003 m/s vs. Fto^+/+^ = 0.486±0.006 m/s, t = 3.92, p\<0.01), whereas no differences in transversal ventricular conduction velocity were observed between the two groups (Fto^−/−^ = 0.273±0.002 m/s vs. Fto^+/+^ = 0.268±0.004 m/s).

### Refractoriness {#s3e3}

The duration of the ERP was similar between Fto^−/−^ and Fto^+/+^ mice (Fto^−/−^ = 82.1±2.3 ms vs. Fto^+/+^ = 79.5±3.1 ms). Likewise, the spatial dispersion of the ERP was similar between Fto^−/−^ and Fto^+/+^ mice (range: Fto^−/−^ = 26.0±4.0 ms vs. Fto^+/+^ = 28.0±4.5 ms; SD: Fto^−/−^ = 11.3±1.9 vs. Fto^+/+^ = 11.7±1.7).

Measurements at Sacrifice {#s3f}
-------------------------

Before euthanasia, Fto^−/−^ and Fto^+/+^ mice had similar BW (Fto^−/−^ = 29.6±0.7 g vs. Fto^+/+^ = 31.0±1.0 g).

### Cardiac anatomy {#s3f1}

HW and HW corrected for BW (HW/BW ratio) were significantly higher in Fto^−/−^ than in Fto^+/+^ mice (HW: t = −2.7, p\<0.05; HW/BW: t = −3.3, p\<0.01) ([Table 3](#pone-0095499-t003){ref-type="table"}). Likewise, LVW, RVW, and their values corrected for BW were significantly augmented in Fto^−/−^ compared to Fto^+/+^ mice (LVW: t = −2.2, p\<0.05; RVW: t = −2.2, p\<0.05; LVW/BW: t = −3.1, p\<0.01; RVW/BW: t = −2.4, p\<0.05) ([Table 3](#pone-0095499-t003){ref-type="table"}).

10.1371/journal.pone.0095499.t003

###### Gross cardiac characteristics.

![](pone.0095499.t003){#pone-0095499-t003-3}

|               | Fto^+/+^ (n = 6) | Fto^−/−^ (n = 9) |
|---------------|------------------|------------------|
| HW (mg)       | 121±6            | 160±11\*         |
| HW/BW (mg/g)  | 3.91±0.17        | 5.40±0.34^\#^    |
| LVW (mg)      | 85±5             | 104±6\*          |
| LVW/BW (mg/g) | 2.73±0.15        | 3.50±0.18^\#^    |
| RVW (mg)      | 25±1             | 35±4\*           |
| RVW/BW (mg/g) | 0.80±0.05        | 1.18±0.13\*      |

Values are reported as means ± SEM. Abbreviations: HW = heart weight; BW = body weight; LVW = left ventricular weight; RVW = right ventricular weight. \* and ^\#^ indicate a significant difference between Fto^+/+^ and Fto^−/−^ mice (p\<0.05 and p\<0.01, respectively).

### Tissue morphometry {#s3f2}

Morphometric analysis did not show significant changes in the volume fraction of myocytes (Fto^−/−^ = 90.8±1.4% vs. Fto^+/+^ = 89.9±1.5%) or the interstitial compartment (Fto^−/−^ = 9.0±1.5% vs. Fto^+/+^ = 9.8±1.5%). Myocardial fibrosis was negligible in the LV myocardium of both Fto^−/−^ and Fto^+/+^ mice (Fto^−/−^ = 0.13±0.08% vs. Fto^+/+^ = 0.26±0.10%).

Discussion {#s4}
==========

The major and novel finding of this study is that FTO deficiency in mice leads to increased heart rate in resting and stress conditions. This positive chronotropic effect appeared to be linked to a shift of the autonomic balance towards a sympathetic prevalence and was associated with: (i) potentially proarrhythmic remodeling at the electrical (altered ventricular repolarization) and structural (hypertrophy) level of the heart and (ii) increased vulnerability to stress-induced arrhythmias.
A previous study in a mouse model bearing a missense mutation in the FTO gene provided preliminary evidence linking FTO deficiency to increased sympathetic nervous system activity (measured by urinary noradrenaline levels) [@pone.0095499-Church1]. However, to the best of our knowledge, this study is the first description of the effects of global knockout of FTO on cardiac function and its autonomic neural regulation.

In resting conditions, FTO deficient mice were characterized by higher heart rate values than wild-type mice, both during the active (dark) and inactive (light) phase of the daily cycle. Likewise, we found signs of elevated body temperature in mice lacking the FTO gene. Clearly, differences in heart rate and body temperature may have been determined by different levels of somatomotor activity, which was indeed significantly higher in knockout mice during the active phase of the daily cycle. However, given that heart rate was consistently higher in FTO deficient mice even when somatomotor activity levels were not greater, we believe that autonomic mechanisms contributed to the higher heart rate in these animals. Supporting this view, HRV analysis revealed that knockout mice were characterized by a lower vagal modulation of heart rate (RMSSD and HF indexes) than wild-type counterparts. In addition, the fact that FTO deficient mice showed a higher LF to HF ratio (an index of sympatho-vagal balance) is suggestive of a larger contribution of sympathetic modulation of heart rate in mice lacking the FTO gene.

Signs linking FTO deficiency to increased cardiac sympathetic drive were also evident under stress conditions. Following the injection of saline and during the restraint test, stress-induced tachycardia was greater in knockout mice, despite similar low levels of vagal modulation (RMSSD and HF indexes) between the two groups. This is a clear indication of a larger sympathetic modulation of heart rate in FTO deficient mice, which consequently resulted in a shift of the sympatho-vagal balance towards an exaggerated sympathetic prevalence (i.e., increased LF to HF ratio). Given that high expression of FTO is seen in the paraventricular and dorsomedial nuclei of the hypothalamus [@pone.0095499-Gerken1], [@pone.0095499-Gao1], [@pone.0095499-Lein1], which represent important brain centers for the regulation of autonomic function, especially during the stress response [@pone.0095499-Dampney1], [@pone.0095499-Nunn1], we hypothesize a role of FTO in these brain areas in modulating sympathetic outflow to the heart.

Previous studies have demonstrated that β-adrenergic agonists increase the inward sodium current in cardiomyocytes [@pone.0095499-Arnar1]--[@pone.0095499-Wang1]. Because the sodium current is a major determinant of conduction, it is thus reasonable to speculate that enhanced cardiac sympathetic tone is responsible for the reduction in the duration of the P wave (an index of the atrial activation interval), PQ segment (an index of atrio-ventricular conduction) and QRS complex (an index of ventricular activation) within the heart of FTO deficient mice.

The sympathetic nervous system is known to play an important role in arrhythmogenesis [@pone.0095499-Zipes1]. Catecholamines can increase automaticity [@pone.0095499-Toda1] and induce triggered activity [@pone.0095499-Priori1], [@pone.0095499-Valenzuela1], thereby increasing arrhythmic risk. Importantly, in this study we provide evidence of increased vulnerability to stress-induced tachyarrhythmias in mice lacking the FTO gene.
Of note, arrhythmogenesis was almost completely absent in wild-type mice. It is interesting that arrhythmia vulnerability in FTO deficient mice was (i) induced by stress exposure, as arrhythmic events were only sporadically noted during baseline recordings, and (ii) clearly more pronounced in response to the restraint than to the injection stress. Taken together, these findings point to a close link between FTO deficiency and arrhythmia vulnerability, particularly in conditions of sustained stress exposure (restraint stress) [@pone.0095499-Sgoifo2].

Investigation of potential electrophysiological changes relevant to arrhythmogenesis in the heart of FTO deficient mice revealed that no changes occurred in ventricular excitability and refractoriness, suggesting that arrhythmia vulnerability may not be linked to cellular electrophysiological abnormalities. It should be noted that these measures were obtained in anesthetized mice (i.e., under this condition sympathetic tone is greatly suppressed [@pone.0095499-Tan1]), and therefore we cannot exclude that sympathetic hyperactivity in FTO deficient mice may have affected cardiac excitability and/or refractoriness in the awake state. In addition, our data indicate that arrhythmogenesis was not correlated with accumulation of fibrotic tissue in the left ventricular myocardium. Our hypothesis is that an exaggerated sympathetic stress response triggered abnormal automaticity in non-pacemaker tissue.

We found in FTO deficient mice signs of cardiac hypertrophy, affecting both the right and the left ventricles. The specific morphological changes were not investigated here, but might reflect structural changes in the hypertrophied myocardium altering the ion channels operating during the early repolarization phase. This hypothesis was based on the observation that the duration of the QTc interval (a marker of ventricular repolarization) was longer in FTO deficient mice. Therefore, we hypothesize a role of ventricular hypertrophy in altering ventricular repolarization to explain QTc lengthening in these mice [@pone.0095499-Oikarinen1]. The QTc interval is also influenced by the autonomic nervous system: abnormal sympathetic modulation [@pone.0095499-Abildskov1] or vagal withdrawal [@pone.0095499-Browne1] directly induces altered ventricular repolarization, thus leading to prolongation of the QTc interval. Therefore, exaggerated cardiac sympathetic predominance in mice lacking the FTO gene could contribute directly to both ventricular hypertrophy and abnormal ventricular repolarization, independently of blood pressure [@pone.0095499-Mancia1], conditions that might serve as a substrate for arrhythmias [@pone.0095499-Schouten1], [@pone.0095499-Schwartz1]. Further studies are needed in order to elucidate the biophysical mechanisms and the cellular and subcellular bases of the reported arrhythmogenesis.

Conclusion and Perspective {#s4a}
--------------------------

Previous studies have demonstrated that FTO deficiency in mice results in a lean phenotype [@pone.0095499-Fawcett1], [@pone.0095499-Church1], [@pone.0095499-Tews1]. This observation has prompted researchers to hypothesize that inhibition of FTO might be of therapeutic interest in relation to morbid obesity. Putative mechanisms underlying the lean phenotype of FTO deficient mice may include an increase in sympathetic nervous system activity, thereby promoting lipolysis and thermogenesis in adipose tissue and muscle [@pone.0095499-Fawcett1], [@pone.0095499-Church1].
In our mouse model, FTO deficiency led to an exaggerated sympathetic contribution to the autonomic neural modulation of cardiac function and to a potentially proarrhythmic remodeling of the myocardium. We did not determine whether such autonomic imbalance in the sympathetic direction was mediated directly by hypothalamic mechanisms or indirectly by alternative mechanisms that may have occurred in FTO deficient mice during development. This represents the major limitation of this study. Further investigations using brain-specific and inducible FTO deficiency, or FTO deficiency restricted for example to certain hypothalamic (e.g. CRH) neurons, may be useful for (i) revealing the precise neurobiological pathways underlying the autonomic phenotype of FTO deficient mice and (ii) determining whether reducing the expression or inactivating the catalytic activity of FTO might represent a promising strategy to pursue in order to alleviate obesity.

[^1]: **Competing Interests:** Mumna Al Banchaabouchi works for a commercial company, Preclinical Phenotyping Facility, CSF-Campus Science Support Facilities GmbH, Vienna, Austria. All other authors declare that no competing interests exist. This does not alter the authors' adherence to PLOS ONE policies on sharing data and materials.

[^2]: Conceived and designed the experiments: AS EM. Performed the experiments: LC SR GG. Analyzed the data: LC SR GG. Contributed reagents/materials/analysis tools: MAB NR FQ. Wrote the paper: LC. Revised the article critically for important intellectual content: AS EM FQ NR MAB GG SR.
- What were some causes of French unrest? - Bad harvests, high prices, high taxes, and disturbing questions raised by the Enlightenment ideas of Locke, Rousseau, and Voltaire.
- What is another name for the estate system of France? - Old Regime
- How many estates were there? - Three
- Who owned 10% of the land in France? - The Clergy (Roman Catholic Church)
- Who made up the Second Estate? - Rich nobles
- Who owned 20% of the land in France before the revolution? - The Second Estate
- What percentage of the population of France was made up of rich nobles? - 2%
- Which estate did 97% of people belong to? - The Third Estate
- What three groups made up the Third Estate? - Bourgeoisie (middle class), Workers, Peasants.
- Which part of the Third Estate does this describe? Bankers, factory owners, merchants, professionals, and skilled artisans. - Bourgeoisie
- Which part of the Third Estate does this describe? Tradespeople, apprentices, laborers and domestic servants. - Workers
- Which was the poorest group within the Third Estate? - The Workers
- Which part of the Third Estate does this describe? Paid half their income in dues to nobles, tithes to the Church, and taxes to the King's agents. - Peasants
- Which Enlightenment philosophers were quoted most by the Third Estate before the Revolution? - Rousseau and Voltaire
- Who said: "The Third Estate is the People and the People is the foundation of the State"? - Comte D'Antraigues
- When did France's government sink deeply into debt? - During the 1770's and 1780's
- Which two public figures were part of France's debt problem? - Louis XVI and Marie Antoinette
- When did bankers refuse to lend the government any more money? - 1786
- Who, besides the Third Estate, did Louis XVI finally resort to taxing? - The Nobility
- The Second Estate forced Louis XVI to call this meeting: - Estates General
- What was the Estates General? - An assembly of representatives from all three estates.
- When was the meeting of the Estates General called? - May 5, 1789 at Versailles
- Who had dominated the Estates-General throughout the Middle Ages? - The clergy and nobles.
- This person was a leading spokesman for the cause of the Third Estate at the 1789 meeting of the Estates General - Emmanuel-Joseph Sieyes
- What did Sieyes suggest in his dramatic speech to the Estates General? - He suggested that the Third Estate delegates name themselves the National Assembly and pass laws and reforms in the name of the French people.
- When did the Third Estate vote to establish the National Assembly? - June 17, 1789
- Name this event: Third Estate delegates broke down a door to a tennis court and pledged to stay until they had drawn up a new constitution. - Tennis Court Oath
- What was the Bastille? - A Paris prison.
- When is Bastille Day celebrated in France? - July 14
- Armed with pitchforks and farm tools, peasants destroyed old legal papers that bound them to feudal dues and burned down manor houses. This was one of the events which happened in a period of time called: - Great Fear
- When did thousands of Parisian women force Louis and Marie Antoinette to return to Paris?
https://cueflash.com/decks/tag/french/10892/Chapter_7_History_Notes
We are all facing different and difficult challenges as we confront the COVID-19 pandemic. In order to support you in this time of uncertainty, the University of Pennsylvania is sharing this free and unique version of Dr. Karen Reivich’s “Resilience Skills” course from the Specialization Foundations of Positive Psychology.

About this Course, from the University of Pennsylvania
The University of Pennsylvania (commonly referred to as Penn) is a private university, located in Philadelphia, Pennsylvania, United States. A member of the Ivy League, Penn is the fourth-oldest institution of higher education in the United States, and considers itself to be the first university in the United States with both undergraduate and graduate studies.

Syllabus - What you will learn in this course

Resilience and Optimism
In this module, you will learn the definition of resilience and understand the protective factors that make one resilient. You will differentiate between helplessness and mastery orientations, and understand the thinking styles underlying each. You will summarize major outcomes of optimism, and the mediators of those outcomes, as well as assess your own levels of optimism using a questionnaire. Finally, you will hear about personal and organizational outcomes of optimism, and will be able to apply these concepts to your own life.

Cognitive Approaches to Resilience: Strategies to Increase Optimism and Resilient Thinking
In this module, you will learn about thinking traps and how they undercut resilience. You will learn about five common thinking traps and identify which you are prone to, in addition to the effects of those styles of thinking. You will practice Real-Time Resilience, a strategy to challenge non-resilient thinking. Finally, you will hear about personal and organizational outcomes of optimism, and plan how to apply these concepts to your own life.

Managing Anxiety and Increasing Positive Emotions Like Gratitude
In this module, you will learn the definition of catastrophic thinking and identify the effects it has on physiology, attention and contingency planning. You will experiment with several non-cognitive strategies to decrease anxiety, including Deliberate Breathing. You will be introduced to the Broaden and Build theory of positive emotions, and will be able to describe the effects of positive emotions on resilience. Finally, you will relate research and examples of gratitude to your own life and develop a plan for a gratitude practice.

Leveraging Character Strengths and Strengthening Relationships
In this module, you will identify your own character strengths using a well-validated questionnaire. Next, you will describe how to use your character strengths in stressful situations to increase resilience by creating a buffer of positive emotion. Additionally, you will learn the research on active constructive responding, and identify your predominant responding style with important people in your life. Finally, you will hear examples of inculcating resilience into organizations, and plan how to incorporate resilience strategies into your personal and professional lives.

Reviews

Top reviews of RESILIENCE SKILLS IN A TIME OF UNCERTAINTY

I learned quite a few useful skills. Dr. Karen has done a great job as an instructor in this course. Her efforts seemed genuine, and the way she talked about her family also helped me to connect more.

Great class. Very positive and uplifting.
Certainly helped me to calm my anxiety and stress about what is going on in the world. Easy to understand and pleasant. A class I would recommend to others.

Loved this course: the thought processes, recognition of your thoughts, how they sabotage you, and how to recognize and control them to retrain your thought process. Great, thorough instructor. Thank you!

The topics are like tools, very handy. The course provides skills that can be very helpful for both negative and positive emotions. The discussions and examples were clearly explained. Good job, keep it up!!

Frequently asked questions

When will I have access to the lectures and assignments?

Do I earn academic credit for completing the course?

More questions? Visit the Learner Help Center.
https://de.coursera.org/learn/resilience-uncertainty?authMode=signup
Debating a dinosaur detective story Why do most experts think birds are akin to T. rex? Below: University of Maryland paleontologist Thomas R. Holtz Jr. looks over the fossil-rich strata at Dinosaur Provincial Park in Alberta. Holtz served as a consultant on "Walking With Dinosaurs" and "When Dinosaurs Roamed America," and wrote a "Jurassic Park" dinosaur guide. Feb. 12, 1999 — It’s no longer any mystery that birds and dinosaurs are related. But how close is that relationship? How could the “fearfully great lizards” transform themselves into feathered fliers? The debate continues, but paleontologists are converging on solutions to these mysteries. At first glance, few creatures could seem more different than birds and dinosaurs. However, as far back as the 1860s, scientists began to notice many similarities in the skeletons of birds and even the largest of these fearfully great lizards - the literal translation of “Dinosauria.” The gulf between giant dinosaurs and modern birds was first bridged with the discovery of Archaeopteryx. This famous fossil creature, dating from rocks about 150 million years old, was endowed with feathers and all but universally recognized as an ancient bird. In 1861, when the first Archaeopteryx skeleton was discovered, there was nothing else quite like it in the fossil record. Since then, however, we have found more and more small, advanced, birdlike dinosaurs ... and more and more primitive birds. Now we’ve reached a point where there are no huge jumps in the evolutionary development of birds from other dinosaurs - and not just any dinosaurs, but specifically the “psycho killers” of the dinosaur world, coelurosaurian theropods like “raptors” and tyrannosaurs. Under the modern scheme of classification, birds would be considered coelurosaurian dinosaurs, just as bats are still considered mammals, even though unlike other mammals they can fly and have sonar. The lines once used to distinguish birds from all other animals have become blurred. Many characteristics unique to living birds are now known to have occurred in a variety of dinosaur types — and indeed all are present in the coelurosaurian dinosaurs interpreted as closest to birds: wishbones, a backward-pointing pubis, the halfmoon-shaped wrist bone and so on. Even feathers have now been found in dinosaurs other than birds, due to spectacular discoveries in China. Furthermore, other features used to distinguish modern birds from other living animals are absent in Archaeopteryx and other very primitive birds: toothless beaks, tail vertebrae fused into a short stump, a big-keeled breastbone. These clearly evolved after the origin of Archaeopteryx, within the bird lineage itself. So, given the anatomical evidence, why is there still a debate? A small minority of paleontologists say that the similarities between birds and dinosaurs are the result of convergence - the fact that different evolutionary lines can develop the same features due to similar life habits. These paleontologists argue that birds might be distant cousins of the dinosaurs, but not direct descendants. The basis for this debate focuses on several questions, which should be examined based on the evidence. As in other areas of paleontology, biology, geology and other historical sciences, there will always be debate and some conflict over the bird-dinosaur connection. But there comes a point where arguing against a particular theory - say, the idea that continental plates drift against each other - is simply a sign of stubbornness. 
For many paleontologists, the dinosaurian origin of birds approaches that level of support. That is not to say there aren’t additional questions that interest paleontologists within the framework of the dinosaurian hypothesis of bird origins. For example, were any of the dinosaurs considered closest to birds feathered? So far none of these dinosaurs - such as raptors, troodonts or oviraptorosaurs - have been found in the type of fine-grained sedimentary rocks that preserves either feather or skin impressions: what their body covering was like is unknown. Were any of these forms actually descendants of flying Archaeopteryx relatives, making raptors or troodonts “secondarily flightless” like the ostrich or kiwi of today? When in the history of birds did warm-bloodedness arise: Before Archaeopteryx? After it? At the base of the dinosaur family tree? These and other questions still intrigue paleontologists, and are actively researched today. In a broader context, why should we care about the answers to these mysteries? Scientifically, the bird-dinosaur connection is significant in that it involves questions about major transitions. How quickly can organisms adapt to a changing environment? How quickly, and when, did the many species of bird arise? Why did birds alone among the dinosaurs survive the great extinction at the end of the Cretaceous period, 65 million years ago? Understanding the ancient sources of that diversity, adaptability and success may help us better understand today’s ecosystems and avert tomorrow’s extinctions. Thomas R. Holtz Jr. is a vertebrate paleontologist at the University of Maryland.
French author Georges Perec imposed upon himself this amazing linguistic and intellectual challenge when writing La Disparition, published in 1969. The novel is written in "lipogrammatic" style; a "lipogram" is a literary work in which one compels oneself strictly to exclude one or several letters of the alphabet, usually a common vowel, and frequently "E". The first lipograms date back to the sixth century BC, and the word comes from the Greek leipográmmatos, "leaving out a letter". The missing "E" is actually part of the plot of the novel, which follows a group of individuals looking for a missing companion, very cleverly called "Anton Vowl". In addition to English ("A Void"), the book has been translated into several other languages, including Japanese, Turkish and Catalan. All translators have imposed upon themselves the same or a similar literary constraint, avoiding the most commonly used letter of the alphabet. The Spanish version contains no "A", which is the second most commonly used letter in the Spanish language (the first being E), while the Russian version contains no "O". Georges Perec was part of "Oulipo", a group consisting of (mainly) French-speaking writers and mathematicians seeking to create works using constrained writing techniques. Other radical experiments in literary form in Perec's work include the novel Les Revenentes (1972), in which E is the only vowel used. Perec also wrote a number of "heterogrammatic" poems in which each line was an anagram of every other line. His best-known work, Life A User's Manual (1978), also follows formal rules which are "not apparent in a casual reading, but constrain and order every aspect of the novel's structure". Georges Perec was born in Paris in 1936 to Polish-Jewish immigrant parents. His father enlisted in the French army and was killed during World War II; his mother was arrested in 1943, deported to Auschwitz, and never returned (Perec never managed to find out whether she died during the journey or at the camp). It has been suggested that "A Void" is "a parable of survival" of a "Holocaust orphan trying to make sense out of absence".
https://www.capstan.be/literary-acrobatics-a-300-page-novel-without-a-single-e-in-it/
Paul Weller says the record label was "not ready" for his house record. The 62-year-old rocker and his '80s pop group The Style Council originally tried to release the album 'Modernism: A New Decade' in 1989, nine years before it finally saw the light of day in 1998, and Paul says the long delay came about because the record label Polydor was not ready for a deep house record. He said: "Polydor certainly wasn't ready for it, and probably the audience at that time might not have been either, but it's hard to say. It was a small underground sound at the time; it wasn't mainstream in any way. Years later I can see their thinking – that doesn't mean I agree with it – but of course the influence of house music in general just kept going, and even now it's still banging, man. But that's what it was at the time, and you have to accept it." And although Paul had to wait almost ten years to release the album, he says he's "not mad" about the decision, because "the whole business is different" now. The 'Wild Wood' musician – who released his latest solo album, 'On Sunset', through Polydor on Friday (03.07.20) – added: "It's funny, because I remember when I first signed to Polydor years ago, everyone was always older than me. Now their whole team is at least 20 years younger, but I like it. I love a lot of the artists they have on their roster, such as Billie Eilish, Celeste and Sam Fender, so that was good enough for me." Talking about the future, the Modfather – who was also the main songwriter in The Jam – has no clear plan; he just wants to keep doing things that make him happy.
https://instant.com.pk/entertainment/166434/
Diffusion Tensor Imaging of White Matter Change in AD. This Career Development application describes an integrated research and training period with two primary goals: (1) to develop the candidate into a strong and independent researcher in the study of the neuropathology of Alzheimer's disease (AD) through the application of advanced neuroimaging techniques and complementary neuropathology studies, and (2) to determine the unique contribution of white matter (WM) degeneration in AD to the clinical profile of patients. The candidate is enthusiastic about continuing AD-related research and adding new training to this research program. Immediate training goals will focus on three domains of study: (1) the application of novel neuroimaging data acquisition techniques to study neuropathology, (2) neuropathological assessment of AD and quantitative neuropathology techniques, and (3) advanced statistical analysis procedures for clinical research and multivariate data. The Mentors/Advisors on this proposal and the training environment of the Athinoula A. Martinos Center for Biomedical Imaging and the Massachusetts Alzheimer's Disease Research Center (MADRC) will assure superb instruction in the three fields. The candidate expects to apply the foundation in advanced imaging protocols and neuropathology of AD to develop a strong research program aimed at fully integrating these disparate domains to better understand AD pathophysiology. The proposed research will integrate novel diffusion tensor imaging (DTI) measures of WM microstructure with advanced methods for measuring morphological properties of gray matter to determine whether WM degeneration differs in AD compared to normal aging, and whether WM degeneration contributes to specific aspects of AD symptomatology when controlling for gray matter degeneration. Patients will be recruited through the MADRC. Imaging data will be related to cognitive and clinical measures (including memory and dementia severity). The Specific Aims of this research are to:
Aim 1. Identify differences in the patterns of WM degeneration in AD compared to normal aging.
Aim 2. Identify the relation between WM degeneration and gray matter degeneration.
Aim 3. Identify the unique contribution of WM degeneration to the patient's clinical profile.
Thus, the proposed integrated training and research program could lead to novel insights about the unique contribution of WM degeneration to the clinical sequelae of AD.
For most, the world of lawyers and legal work is full of unknown words and confusing processes. When faced with legal action, simply coming to understand the terms can be almost as challenging as the lawsuit itself. I'll briefly cover the basic steps of litigation here to clear up some of that confusion. Litigation includes everything in a lawsuit from beginning to end: documents filed, negotiations, the discovery process, mediation, and trial. But first, let's look at some terminology:
Plaintiff: The individual, group of individuals, company, or organization that files a lawsuit against another party. Often referred to as the complainant.
Defendant: The individual, company, or organization being sued.
Simply stated, litigation is the process of taking legal action. After a demand letter (if any) has been sent and responded to, and the plaintiff still wishes to pursue a lawsuit, the litigation process begins via a civil lawsuit. Both the summons (a notice to those involved stating when and where they must appear) and the complaint (a document stating all the plaintiff's claims and demands) must be filed and delivered to all parties. The complaint must then be answered by the defendant within a set timeline, which varies by state. If the parties are unable to reach an agreement, the lawsuit moves into the fact discovery phase. In this stage, each party is allowed to request information, including documents and answers to questions. Expert discovery follows fact discovery; here each party may choose to have an expert provide opinions to be submitted to the court and opposing counsel. This can include an official report that cannot be created by anyone outside of the expert's field. This phase typically concludes the lawsuit's discovery phase. (Discovery can be re-opened during various stages, however, if new information comes to light.) The case will then proceed either to mediation or, by filing with the court, to trial.
Mediation is the attempt to resolve disputes outside of court, which lessens the complexity and cost of the litigation process. Mediation also allows the parties to have more control over the case's outcome, instead of a judge making the final decision. Both parties agree to meet with a neutral third party, the mediator, who facilitates discussion and clarifies facts. A mediator will generally be an experienced attorney who is not affiliated with either party. Mediation has an 80% success rate. Oftentimes before mediation begins, the parties establish that the results of the mediation will be binding on both of them. In this situation, the case will conclude once a settlement is determined. A settlement is typically monetary, at least in civil cases.
Arbitration is a second dispute resolution technique. Whereas mediation is informal, arbitration is formal. Both parties provide evidence and testimony to the arbitrator, similar to a trial setting. The arbitrator then comes to a decision, which is as legally binding as a judge's ruling in court. Neither party has a final say regarding the decision. While this is very similar to going to trial, arbitration still saves time and money for the parties. Anyone can be an arbitrator as long as the parties agree on the appointee, but arbitrators are often lawyers, former judges, or expert professionals such as accountants or engineers.
When no mediation is held, or mediation is unsuccessful, the case goes to trial. Pre-Trial Conferences are held before the trial to ensure preparation.
Parties meet in a courtroom and determine dates, whether a jury is needed, how long the trial will run, deadlines, and so on. Settlement options are also discussed, and attorneys discuss any major issues and clear up facts before the court. Whether the trial will be a bench trial or a jury trial is decided by the parties when the case is first filed. A bench trial is tried solely by a judge, with no jury. In a jury trial, decision-making rests with a group of individuals. Paying a small fee is necessary to have a jury trial. It is best to consult with your attorney to find out which type of trial will be best for you.
The Trial begins with the plaintiff presenting evidence, after which the defendant presents their defense against that evidence. The plaintiff must prove their case through convincing evidence and its probable truth or accuracy. In other words, the quantity of evidence is not as important as its persuasiveness. After both sides have presented, the judge or jury decides on a judgment. If they rule against the plaintiff, the case is over, and the defendant is released from all liability. If they rule in favor of the plaintiff, the court awards damages (money) and/or orders the defendant to perform other acts or forms of restitution.
Post-Trial Motions may be filed by either party after judgment has been awarded. Some common types include a motion for a new trial, or to amend or nullify the judgment. These are generally filed in cases involving a jury trial, as a judge is unlikely to reverse their own ruling unless there is new evidence. If the losing party believes the judgment was legally incorrect, they may file an appeal. Appeals are made to an appellate court, a separate court which may dismiss the appeal, hear the case and affirm the judgment, reverse the judgment, or simply send it back to the trial court (where the trial was originally held) with instructions to correct legal errors.
Even broken down, the litigation process is still confusing – so let's recap. The process begins with filing a Civil Lawsuit, after which both parties Discover facts and evidence supporting the case. Mediation can be held to attempt to settle things without going to court, but if unsuccessful, the case then goes to Trial. If the losing party believes the judgment was in legal error, the process can be prolonged if they make an Appeal.
Here at Sumsion Business Law, we want you to know we're here to support you by making legal business as clear and painless as possible. We have a more in-depth pamphlet about the litigation process that you can download for free by following this link: Litigation Pamphlet. Our team of professionals is ready to assist you with any legal issues you may have. Call our office to schedule a consultation and learn more about how we'll support you.
https://www.businesslawutah.com/post/litigation-basics
What is biotechnology?
Etymologically, biotechnology is the study of tools from living things. In its current usage, the term is defined either broadly or narrowly. It may be defined broadly as the use of techniques based on living systems to make products or improve other species. This would include the use of microbes to make products via fermentation, an age-old practice. In a narrower definition, biotechnology refers to the genetic manipulation of organisms for specific purposes. The term genetic engineering is sometimes used to describe this practice. Some argue that classic plant breeding is genetic engineering, since the genetics (DNA) of plants are manipulated by breeders. Consequently, a much narrower definition of genetic engineering is used to describe the manipulation of organisms at the molecular level, directly involving the DNA. However, it is the revolutionary technology of recombinant DNA (rDNA), which enables researchers to transfer genes from any organism to another, that some accept as genetic engineering. The term molecular breeding is used to describe the use of a variety of tools for manipulating the DNA of plants (which may or may not involve rDNA) to improve them for specific purposes.

General steps in rDNA technology
Even though crossing two different parents produces new recombinants in the segregating population, the term recombinant DNA is restricted to the product of the union of DNA segments of different biological origins. A cultivar developed by the rDNA procedure is a genetically modified (GM) cultivar. Generally, an organism developed by the rDNA procedure is called a genetically modified organism (GMO). Certain basic steps are common to all rDNA projects:
1. The DNA of interest that is to be transferred (the transgene) is extracted from the source organism. The specific DNA sequence of interest is cut out using special enzymes.
2. The transgene is inserted into a special DNA molecule (a cloning vector) and joined to produce a new rDNA molecule.
3. The rDNA is transferred into and maintained in a host cell (bacterium) by the process of transformation. The vector replicates, producing identical copies (called clones) of the insert DNA.
4. The host cells with the cloned transgene are identified and isolated from untransformed cells.
5. The cloned transgene can be manipulated such that the protein product it encodes is expressed by a host cell.

Gene transfer
Once the desired gene has been identified from the library, it is ready to be transferred into a host cell, a process called genetic transformation. There are two categories of transgene transfer or delivery procedures – direct and mediated transfer.

Direct gene transfer
1. By particle acceleration or bombardment. One of the most commonly used direct gene transfer methods is microprojectile bombardment (or biolistics). A biolistic device (called a gene or particle gun) is used to literally shoot the target DNA into intact cells (hence the nickname of shotgun transformation). Small amounts (about 50 μg) of micron-size (1–5 μm diameter) carrier particles (tungsten or gold) are coated with the target DNA and propelled in the barrel of the gene gun at energies high enough to penetrate plant cells. The rate of acceleration may be up to 430 m/s in a partial vacuum. The carrier particles pass through a mesh in the biolistic device before striking the target cells. A low penetration number of projectiles (1–5 per cell) is desirable.
More than 80% of bombarded cells may die if particle penetration reaches 21 projectiles per cell.
2. Electroporation. Callus culture (or explants such as immature embryos or protoplasts) is placed in a cuvette and inserted into a piece of equipment called an electroporator, for electroporation. This procedure widens the pores of the protoplast membrane by means of electrical impulses. The widened pores allow DNA to enter through them and become integrated with the nuclear DNA.
3. Other methods. Other direct methods are available, including microinjection and silicon carbide procedures.

Indirect (biological systems) gene transfer
Agrobacterium tumefaciens-mediated transformation. Agrobacteria are soil bacteria. They naturally infect dicotyledonous plants (infection of certain monocotyledonous plants has been reported, including yams, asparagus and lily). Because the host range is limited, the procedure has not been used for some major crops such as corn, wheat and rice. The life cycle of Agrobacterium involves living in the soil until it encounters a plant and then infecting the plant. Infection causes a rapid proliferation of plant cells around the site of infection, leading to the formation of a crown gall tumor (equivalent to cancers in animals). For Agrobacterium tumefaciens, only the crown gall is produced, but for Agrobacterium rhizogenes, masses of roots emerge from the gall, forming hairy root disease.

Molecular plant breeding
Molecular breeding may be defined as the use of molecular markers, in conjunction with linkage maps and genomics, to select plants with desirable traits on the basis of genetic assays. The potential of indirect selection in plant breeding was recognized in the 1920s, but indirect selection using markers was first proposed in 1961 by Thoday. The lack of suitable markers slowed the adoption of this concept. Molecular breeding gained new momentum in the 1980s and has since made rapid progress, with the evolution of DNA marker technologies. Molecular markers are used for several purposes in plant breeding.
1. Gaining a better understanding of breeding materials and the breeding system. The success of a breeding program depends to a large extent on the materials used to initiate it. Molecular markers can be used to characterize germplasm, develop linkage maps, and identify heterotic patterns. An understanding of the breeding material will allow breeders to select the appropriate parents to use in crosses. Usually, breeders select genetically divergent parents for crossing. Molecular characterization will help to select parents that are complementary at the genetic level. Molecular markers can be especially useful in identifying markers that co-segregate with QTLs (quantitative trait loci) to facilitate the breeding of polygenic traits.
2. Rapid introgression of simply inherited traits. Introgression of genes into another genetic background involves several rounds of tedious backcrosses. When the source of desirable genes is a wild species, issues of linkage drag become more important because the dragged genes are often undesirable, requiring additional backcrosses to accomplish breeding objectives.
Using markers and QTL analysis, the genome regions of the wild genotype containing the genes encoding the desirable trait can be identified more precisely, thereby reducing the fragment that needs to be introgressed, and consequently reducing linkage drag.
3. Early generation testing. Unlike phenotypic markers, which often manifest only in the adult stage, molecular markers can be assayed at an early stage in the development of the plant. Breeding for compositional traits, such as the high lysine and high tryptophan genes in maize, can be advanced with early detection and selection of desirable segregants.
4. Unconventional problem-solving. The use of molecular markers can bring about novel ways of solving traditional problems, or of solving problems traditional breeding could not handle. When linkage drag is recessive and tightly linked, numerous rounds of backcrosses may never detect and remove it. Disease resistance is often a recessive trait, and such recessive, tightly linked drag can be difficult to remove by traditional backcross procedures. Marker analysis can help to solve the problem, as was done by J. P. A. Jansen when he introgressed resistance to the aphid Nasonovia ribisnigri from a wild lettuce, Lactuca virosa, by repeated backcrosses. The result of the breeding was a lettuce plant of highly undesirable quality. The recessive linkage drag was removed by using DNA markers flanking the introgression to preselect for individuals that were recombinant in the vicinity of the gene. The lifespan of new cultivars can be extended through the technique of gene pyramiding (i.e., transferring multiple disease-resistance genes into one genotype) for breeding disease-resistant cultivars. Marker-assisted backcrossing can be used to achieve this rapidly, especially for genes with indistinguishable phenotypes.
5. Plant cultivar identification. Molecular markers are effective in cultivar identification, both for protecting proprietary rights and for authenticating plant cultivars. The types of molecular markers are discussed next.

Molecular markers
Plant breeders use genetic markers (or simply markers) to study genomic organization, locate genes of interest, and facilitate the plant breeding process.

Concept of markers
Genetic markers are simply landmarks on chromosomes that serve as reference points to the location of other genes of interest when a genetic map is constructed. Breeders are interested in knowing the association (linkage) of markers to genes controlling the traits they are trying to manipulate. The rationale of markers is that an easy-to-observe trait (the marker) is tightly linked to a more difficult-to-observe and desirable trait. Hence, breeders select for the trait of interest by indirectly selecting for the marker (which is readily assayed, detected or observed). When a marker is observed or detected, it signals that the trait of interest is present (by association). Genetic markers can be detected at both the morphological level and the molecular or cellular level – the basis for classifying markers into two general categories, morphological markers and molecular markers. Morphological markers are manifested on the outside of the organism as a product of the interaction of genes and the environment (i.e., an adult phenotype).
On the other hand, molecular markers are detected at the subcellular level and can be assayed before the adult stage in the life cycle of the organism. Molecular markers of necessity are assayed by chemical procedures and are of two basic types – protein and DNA markers. Markers are indispensable in genetic engineering, being used in selection stages to identify successful transformation events.

Types of markers
1. Morphological markers
• Seed color, e.g. kernel color in maize.
• Function based, e.g. plant height associated with salt tolerance in rice.
Limitations:
1. Most phenotypic markers are undesirable in the final product (e.g. yellow color in maize).
2. Dominance of the markers: homozygotes and heterozygotes are not distinguishable.
3. Sometimes dependent on the environment for expression, e.g. plant height.
2. Molecular markers
• Non-DNA markers, such as isozyme markers: restricted due to the limited number of enzyme systems available.
• DNA-based markers: markers based on differences in the DNA profiles of individuals.
Some molecular markers are pieces of DNA that have no known function or impact on plant performance (linked markers):
• Detected via mapping.
• Linked markers are near the gene of interest but are not part of the DNA of the gene itself.
Other markers may involve the gene of interest itself (direct markers):
• Based on part of the gene of interest.
• Hard to get, but great once you have it.

Requirements for a useful molecular marker
1. Molecular markers must be tightly linked to a target gene. The linkage must be tight enough that the presence of the marker will reliably predict the presence of the target gene.
2. The marker should be able to predict the presence of the target gene in most, if not all, genetic backgrounds.

Marker-assisted breeding
Molecular markers may be used in several ways to make the plant breeding process more efficient. The adoption of marker-assisted selection (MAS), or marker-aided selection, in a breeding program hinges on the availability of useful molecular markers. Fortunately, this resource is becoming increasingly available for many species, thanks to advances in biotechnology. This breeding approach is applicable to improving both simple and complex traits, as a means of evaluating a trait that is difficult or expensive to evaluate by conventional methods. The basic requirement is to identify a marker that co-segregates with a major gene of the target trait. MAS is especially beneficial for breeding quantitative traits with low heritability.

Conditions under which MAS is valuable
1. Low heritability traits.
2. Traits too expensive to score, e.g. soybean cyst nematode (SCN) resistance (Young, 1999).
3. Recessive genes: pyramiding of dominant and recessive genes conferring resistance to important crop diseases, which would otherwise be very difficult.
4. Multiple genes (quantitative traits): QTLs underlying phenotypic and physiological traits can be traced using markers. Although QTL mapping is tedious, markers, once identified, can be used quickly and accurately to detect the QTLs of interest.
5. Quarantine: there is no need to grow plants to screen for viral diseases that cannot be visually detected, and small tissue samples can be used for DNA typing.

Advantages of MAS
1. Improvement of the response to selection (Rs).
2. Assays require only a small amount of tissue, so there is no destructive sampling.
3. The use of codominant markers allows accurate identification of individuals for scoring without ambiguity.
4. Multiple sampling for various QTLs is possible from the same DNA prep.
5. Traits can be assayed before they are expressed, e.g. before flowering.
6. Time saving.

Limitations of MAS
1. Cost of equipment, reagents and personnel.
2. Data collected in the field are assumed to be normally distributed, but usually are not.
3. Integration of the DNA information into existing systems is difficult.
4. Linkage drag: as the marker distance from the target gene increases, more of the donor DNA is retained in the desired background, resulting in the need for more backcrosses.
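To make the indirect-selection rationale above concrete, here is a minimal Python sketch (an illustration, not part of the chapter): each individual is scored at a single codominant marker assumed to be tightly linked to the target gene, and the individuals carrying the marker allele are retained. The function name, genotype codes and plant IDs are hypothetical.

# Illustrative sketch: indirect selection via a codominant marker assumed to be
# tightly linked to the target gene. "AA"/"Aa" calls are treated as carriers.
from typing import Dict, List

def select_by_marker(genotypes: Dict[str, str], keep_heterozygotes: bool = True) -> List[str]:
    """Return the individuals predicted to carry the target gene via the linked marker."""
    wanted = {"AA", "Aa"} if keep_heterozygotes else {"AA"}
    return [plant for plant, call in genotypes.items() if call in wanted]

if __name__ == "__main__":
    # Hypothetical marker calls for a small backcross family.
    family = {"BC1-01": "Aa", "BC1-02": "aa", "BC1-03": "AA", "BC1-04": "aa"}
    print(select_by_marker(family))  # -> ['BC1-01', 'BC1-03']

Because the marker is codominant, homozygous and heterozygous carriers can be distinguished and kept or discarded as the breeding scheme requires, which is exactly the advantage listed above.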
The Role of PCR in MAS
Once a direct or linked marker has been located, characterized, and sequenced, a method called the polymerase chain reaction (PCR) can be used to make copies of a specific region of DNA, producing enough DNA to conduct a test. DNA replication in natural systems requires:
1. A source of the nucleotides adenine (A), cytosine (C), thymine (T), and guanine (G);
2. The DNA polymerase (DNA synthesis enzyme);
3. A short RNA molecule (primer);
4. A DNA strand to be copied;
5. Proper reaction conditions (pH, temperature).
The DNA is unwound enzymatically, the RNA molecule is synthesized, the DNA polymerase attaches to the RNA, and a complementary DNA strand is synthesized. Use of PCR in the laboratory involves the same components and mechanisms as the natural system, but there are four primary differences:
1. DNA primers are used instead of the RNA primer found in the natural system. DNA primers are usually 18-25 nucleotide bases long and are designed so that they attach to both sides of the region of DNA to be copied.
2. Magnesium ions that play a role in DNA replication are added to the reaction mixture.
3. A DNA polymerase enzyme that can withstand high temperatures, such as Taq, is used.
4. A reaction buffer is used to establish the correct conditions for the DNA polymerase to work.
The DNA primers are complementary (match up) to opposite strands of the DNA to be copied, so that both strands can be synthesized at the same time. A and T match, and C and G match. Because the reaction mixture contains primers complementary to both strands of DNA, the products of the DNA synthesis can themselves be copied with the opposite primer. The length of the DNA to be copied is determined by the position of the two primers relative to the targeted DNA region. The DNA copies are a defined length and at a specific location on the original DNA. Because DNA replication starts from the primers, the new strands of DNA include the sequence of the primers. This provides a sequence on the new strands to which the primers can attach to make additional DNA copies. Over the years, the PCR procedure has been simplified and the results made uniform as a result of two important developments. The first was the isolation of a heat-stable DNA polymerase, Taq polymerase. This enzyme gets its name from the bacterium from which it was isolated, Thermus aquaticus. This bacterium was discovered living in the boiling water of hot springs. Until Taq polymerase was discovered, the DNA polymerases available to researchers were destroyed at 65ºC. The Taq enzyme is not destroyed by the high temperature required to denature the DNA template (pattern). Therefore, using this enzyme eliminates the need to add new enzyme to the tube for each new cycle of copying, as was commonly done before Taq's discovery.
The PCR procedure involves three steps that make up a cycle of copying. Each step allows the temperature of the mixture to change to optimize the reaction. The cycles are repeated as many times as necessary to obtain the desired amount of DNA.
Step 1: Denaturation. The double-stranded DNA that is to be copied is heated to ~95ºC so that the hydrogen bonds between the complementary bases are broken. This creates two single-stranded pieces of DNA.
Step 2: Annealing or hybridization. The temperature is lowered to ~58ºC so the DNA primers can bind to the complementary sequence on the single-stranded DNA by forming hydrogen bonds between the bases of the template and the primers.
Step 3: DNA synthesis or extension. During the replication step, the reaction solution is heated to ~72ºC so the DNA polymerase incorporates the nucleotide bases A, C, T, and G into the new copy of DNA. The new DNA strand is formed by connecting bases that are complementary to the template until it comes to the end of the region to be copied.
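As a rough illustration of the arithmetic behind these cycles, the following minimal Python sketch (a toy model, not part of the chapter) estimates copy number when each denature-anneal-extend cycle at best doubles the template, i.e. roughly initial_copies x 2^n after n cycles; the efficiency parameter is a hypothetical knob for less-than-perfect cycles.

# Toy model of PCR amplification: each cycle (denature ~95 C, anneal ~58 C,
# extend ~72 C) multiplies the copy number by (1 + efficiency), so an ideal
# reaction with efficiency 1.0 doubles the copies every cycle.
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Estimate the number of copies after a given number of PCR cycles."""
    return initial_copies * (1.0 + efficiency) ** cycles

if __name__ == "__main__":
    for n in (10, 20, 30):
        print(f"{n} cycles: ~{pcr_copies(1, n):,.0f} copies from a single template")
    # Ideal doubling gives ~1,024 copies after 10 cycles and ~1.07 billion after 30.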
https://www.techylib.com/en/view/onwardhaggard/chapter_14_from_principles_of_plant_genetics_and_breeding