Water is required for transporting materials and for photosynthesis. Structures and processes involved in water movement include root hairs, guard cells, stomata, epidermis, mesophyll cells and transpiration. Water and minerals are transported up through the stem in xylem. Sugar is transported up and down the plant in living phloem cells. The theme for this unit is how multicellular organisms develop and the issues multicellular organisms need to overcome to survive. One of the biggest potential problems for a multicellular organism is the need for transport. A unicellular organism has no need for a transport system: the resources it needs to survive can simply pass through the cell membrane, and the waste materials likewise. Think of a cell deep inside one of your tissues, though. Without some sort of transport system, how would it get the oxygen, glucose, amino acids and everything else it needs to survive? How would it get rid of the waste materials produced by its chemical reactions without killing itself and the cells around it? It is for this reason that, as organisms became multicellular, they had to evolve ever more complex transport mechanisms to connect all of their cells with each other and with the external environment. For your course, you need to know about a number of such transport systems. We start in this topic with the transport systems of plants, and in the next we'll learn about the transport systems of animals, with a particular focus on mammals (i.e. you). Before we go on to learn about the transport systems in plants, we first need to remind ourselves of the tissues of the leaf. These were mentioned in the first topic of the unit, but let's quickly go over them again.
Epidermis: a layer of cells at the top and bottom of the leaf which protects the leaf.
Palisade Mesophyll: these cells carry out the majority of photosynthesis in the leaf. They have a number of features that maximise the absorption of light, such as their location at the top of the leaf, their tall, thin shape and the fact that they are tightly packed together. They also contain a large number of chloroplasts.
Spongy Mesophyll: these mesophyll cells also photosynthesise, but not to the same extent as the palisade cells. They are spaced out to provide a large surface area for the absorption of carbon dioxide and the evaporation of water.
Guard Cells: these cells surround the stomatal pores on the underside of the leaf. The pores allow the exchange of gases and the evaporation of water. The guard cells close the stomatal pores at night: photosynthesis is not possible in the dark, so less gas exchange is required, and closing the pores prevents unnecessary water loss.
Much of the transport system of plants can be viewed from the perspective of photosynthesis. I'm sure you'll remember what photosynthesis is and where it takes place; if not, look back at the last unit now! Photosynthesis requires a supply of carbon dioxide and water. The carbon dioxide simply diffuses through holes in the leaf called stomata and into the photosynthesising leaf cells. Water, on the other hand, is more complex. For many plants (especially those which live on land) the leaves are as far from the soil as possible in order to maximise light absorption, but their water supply is in the soil. These plants therefore have a transport system which brings water up from the roots to the leaves. This water also carries other materials from the soil to the plant cells, such as minerals. 
So, how does this happen? Water enters the plant in the root via the root hairs. These increase the surface area of the roots to maximise the absorption of water and minerals. Once in the plant's root, the water and dissolved minerals travel up a network of tubes called xylem. Xylem tissue grows as all tissues do, in the form of cells, but once the xylem has grown its cell walls become lignified. This involves the addition of the chemical lignin to the walls, which has the dual effect of making the cells waterproof and killing them. So, the xylem tissue consists of long, dead, lignified tubes through which water and minerals can be transported up the plant. But you might already be thinking: how on earth is the water moved up the plant against gravity, especially in a large tree? It's all to do with transpiration. Although we've started our water transport story in the roots, in actual fact we should probably start at the leaves, as it's evaporation from the leaves that provides much of the force to move water up through the plant. Water is continuously evaporating from the leaf, particularly in daylight. This is because the small holes in the leaf, primarily on the underside, are surrounded by guard cells, which open the stomatal pores during the day to allow carbon dioxide to diffuse in for photosynthesis and the waste oxygen to diffuse out. These stomatal pores also allow the evaporation of water from the leaf. Water has an unusual property for a liquid: it holds together remarkably well. If you pull a column of water, the water molecules bond with each other and pull each other along behind. Therefore the water evaporating from the leaf pulls more water out of the mesophyll cells to replace it. The water moving out of the mesophyll results in water being pulled out of the xylem in the leaf into the mesophyll cells to replace it. The water leaving the xylem in the leaf causes the water behind it in the xylem vessels to be pulled up towards the leaf to replace it. The water at the bottom of the xylem in the roots being pulled up the column causes more water to be pulled into the xylem from the root hairs to replace it, and so on. This process is called transpiration, and it allows water to be transported up through the plant to the cells which need it with remarkable efficiency. Plant leaves need less water at night as they aren't photosynthesising, so they try to reduce water loss at this time. The guard cells close the stomatal pores, and the plant has specialised epidermis cells at the top and bottom of the leaf to minimise water loss. Many plants also have a waxy cuticle on their leaves to reduce water loss even further. The image above shows the underside of a leaf under a microscope. You'll notice that in amongst the cells there are a few stomata. These are the pores which allow the exchange of oxygen and carbon dioxide and the evaporation of water. Each stoma is surrounded by two guard cells. These cells bend and straighten to open and close the stomatal pores. The stomata of most plants are open in daylight and closed at night. As well as transporting water and minerals, plants need to transport something else - sugar. The sugar produced from photosynthesis in the leaves is needed by all the cells in the plant for energy, growth and storage. As this largely moves in the opposite direction to the water and minerals, plants have a separate transport system for the movement of sugar: phloem. 
The biggest difference between the xylem and the phloem is that whilst xylem is dead, phloem tissue is alive. Phloem cells have a reduced cytoplasm to allow the movement of sugar through them, but they have companion cells which carry out many of the functions required to keep the phloem cells alive. Xylem and phloem are explained in more depth on this Bitesize site.
http://nat5biopl.edubuzz.org/unit-2-multicellular-organisms/6a-plant-transport-systems
The STARDUST project will tackle urbanisation challenges by designing and implementing innovative smart solutions in three Lighthouse cities – Pamplona (Spain), Tampere (Finland), Trento (Italy) – with a holistic approach. Afterwards, four Follower cities – Cluj-Napoca (Romania), Derry (UK & Northern Ireland), Kozani (Greece), Litoměřice (Czech Republic) – will develop tailored replication strategies that carry the project's actions across Europe.
Establish the Sustainable Energy Forum: form a partnership to lead the local agenda on energy and sustainability initiatives and co-ordinate the development of the region's energy supply.
Maximise energy efficiency: implement the fuel poverty reduction project until 2017 to meet a target of 1,000 households annually; support and implement the second stage of the fuel poverty reduction scheme 2017–2019; and design, develop and seek funding for an energy efficiency scheme post 2019.
Maximise the development of the natural gas network: engage with the gas network installation companies to plan, optimise and develop the natural gas network to maximise its availability to citizens and commercial organisations.
Plan, site and design inclusive renewable energy developments: support planners, developers and the community to identify suitable locations for siting renewable energy installations.
Plan and implement a smart grid pilot project: include a mix of energy sources balancing renewables with traditional fossil fuels by developing partnerships with local and prospective EU partners to identify smart grid options for a pilot project.
Analyse the region's energy consumption and develop a strategy to reduce energy use by 20% by 2030: engage with public and private organisations to baseline the region's carbon emissions, and develop and implement a carbon emission strategy and reduction plan to meet the 20% target by 2030.
Develop and implement a strategy to increase the uptake of EVs at a regional level.
http://stardustproject.eu/cities/derry/?trad=1
Automatic data reading from smart meters is being developed in many parts of the world, including Latvia. The key drivers are developments in smart technologies and economic benefits for consumers. Deployment of smart meters could be launched on a massive scale. Several pilot projects were implemented to verify the feasibility of smart meters for individual consumer groups. Preliminary calculations indicate that installation of smart meters for approximately 23 % of electricity consumers would be economically viable. Currently, the data for the last two years is available for an in-depth mathematical analysis. Continuous analysis of consumption data would be established once more measurements from smart meters are available. The extent of the introduction of smart meters should be specified during this process in order to gain the maximum benefit for the whole of society (consumers, grid companies, state authorities), because there are still many uncertain and variable factors. For example, it is necessary to consider statistical load variations by hour, the dependence of electricity consumption on temperature fluctuations, consumer behaviour and demand response to market signals to reduce electricity consumption in the short and long term, and consumers' ambitions and capability to install home automation for regulation of electricity consumption. To develop demand response, it is necessary to analyse a whole array of additional factors, such as the expected cost reduction of smart meters, possible extension of their functionality, further development of information exchange systems, as well as standard requirements and different political and regulatory decisions regarding the reduction of electricity consumption and energy efficiency.
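As a back-of-the-envelope illustration of the kind of cost-benefit reasoning described above, here is a minimal Python sketch. All figures in it are invented assumptions for illustration, not values from the study.

```python
# Toy cost-benefit sketch for one smart meter installation. All figures are
# illustrative assumptions, not values from the study discussed in the text.

def npv(annual_benefit: float, capex: float, years: int, rate: float) -> float:
    """Net present value: upfront installed cost vs. discounted yearly benefits."""
    return -capex + sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

capex = 120.0          # EUR, assumed cost of meter plus installation
annual_benefit = 14.0  # EUR/year, assumed savings (demand response, remote reading)
print(f"NPV over 12 years at 5%: {npv(annual_benefit, capex, 12, 0.05):.2f} EUR")

# Sweeping annual_benefit across consumer groups and counting those with a
# positive NPV is one way a viability share (like the ~23 % figure in the
# text) could be estimated.
```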
https://content.sciendo.com/view/journals/lpts/52/6/article-p13.xml?lang=en
Who is Thomas Halstead Designs? Thomas Halstead is the founder and creative mind behind Thomas Halstead Designs, a design studio in Los Angeles, California. His style focuses on merging beauty, sophistication, and functionality to create design solutions for his clients. The smallest details can make a real impact when it comes to designing a space. My responsibility is to bring these details together to create a beautiful environment. Thomas is appreciated as an Interior Designer for his creative design solutions. From the initial meeting through to the finishing touches on a room or building, clients receive expert guidance throughout the entire design process. Whether enhancing an environment or undertaking a complete home redesign, all of our projects are guided by the relationships we build with our clientele. Each project is driven by our core values of compassion, generosity, and trust, which are non-negotiable and enduring. Schedule a consultation today and "Bring Your Style to Life"
https://www.thomashalsteaddesigns.com/
Watch Mars' Moons Pass Each Other for the First Time. Today, NASA demonstrated just how neat it would be if the Earth had two moons. This video, stitched together from numerous stills captured by the Mars Curiosity rover's Mast camera, offers the first look at Mars' larger moon, Phobos, passing in front of and blotting out the planet's smaller moon, Deimos. In addition to being one heck of a cool image, the new video will, NASA scientists hope, help them learn more about the strange orbits of Mars' twin moons. Phobos, the larger of the two, appears to be orbiting closer and closer to the planet's surface. Meanwhile, tiny Deimos appears to be straying farther and farther from its celestial partner.
https://mashable.com/2013/08/16/curiosity-rover-mars-moons/
According to Rosina Colonia (Delphi Ephorate of Antiquities), “… the Delphi exhibits speak for themselves: they have the power to command respect and captivate the visitor, inviting him or her to admire them, and leaving this visitor with the memory of their charm and the enigma surrounding them. Even though the exhibits on display today constitute no more than a small but representative part of the dedications seen by Pausanias at Delphi, and an even smaller part of the many more that inundated the sanctuary during the years of its heyday, they indisputably continue to delight people with their wealth, variety and beauty. …Delphi has been included in archaeology textbooks, it has adorned art books; some of the Delphi finds, such as the Treasury of the Siphnians, are landmarks in the history of ancient Hellenic art, while others, even though more than one hundred years have elapsed since they came to light, continue to be a focal point of scholarly discussions even today, owing to unanswered questions regarding their identity and interpretation. But above all, they still charm the broad public who flock, like ancient pilgrims, to admire the monuments of Delphi”. (Quotation ©: John S. Latsis Public Benefit Foundation) The temple of the Alcmeonids was left in ruins by the rock fall that accompanied a strong earthquake in 373 BC, the same one that buried the statue of the Charioteer under piles of earth. For its reconstruction, the Amphictyony (the representatives of the cities who were in charge of the sanctuary) once again mounted a fund-raising campaign throughout the Hellenic world. However, a large part of the enormous amount of money required was provided from the fine imposed on the Phocians for pillaging the sanctuary during the ten-year Third Sacred War. Unique testimony to the financial management plans and technical methods used in the sprawling construction site, which was organised under the direction of specially appointed archons (Naopoioi = the "temple-builders"), is provided to us by inscriptions on stone stelae found during excavations. The excavations have been unable to add to the scant information from the ancient sources about the temple interior, since the latter was almost totally destroyed. A gold-plated cult statue of Apollo stood in a prominent position in its cella (sekos, or inner chamber). On the wall of the vestibule (pronaos), famous sayings of the Seven Sages were inscribed, together with the enigmatic letter E. Nothing has been preserved of the oracular adyton (holy of holies) in which the Oracle's prophecies were given. There, beneath the floor of the cella, the shrine would have contained the symbols of the prophet-god: the oracular tripod on which the Pythia would be seated and, perhaps, also the sacred navel-stone (omphalos) which was believed to mark the grave of Python or Dionysus. On the east pediment of the temple, Apollo is presented with his mother Leto and his sister Artemis among the Muses. In the center is Apollo, wearing a mantle that leaves his chest bare, and sitting on a tripod holding a laurel branch and a wide-rimmed bowl (phiale). He is depicted not as Musagetes, but as the lord of his oracle. The Muses, some standing and others seated in a rocky landscape, link the god with the world of the arts and culture.
http://www.windmills-travel.com/album.php?destination=195&destinationtype=city&id=94&page=9&page=17
“In the Artifact Lab: Conserving Egyptian Mummies” – an exhibition at the University of Pennsylvania that focuses on the process of conserving ancient artifacts. Courtesy: Past Horizons blog. In what seems like a new trend to explore the world of art conservation through process-oriented exhibitions, the Blanton Museum of Art at the University of Texas at Austin, in conjunction with the National Gallery of Canada, opened “Restoration and Revelation: Conserving the Suida-Manning Collection” to the public on Saturday, November 17. The exhibition focuses on the conservation efforts, including the cleaning and repainting, of several Old Master paintings and drawings from the museum’s Suida-Manning Collection, established in 1998. In a recent press release, the Blanton Museum stresses the potential for discovery, asserting that “new knowledge about the works and their makers” can result from restorations. However, the use of a reconstructive approach (repainting) in treating these objects suggests a greater interest in “visual integrity” than in historical veracity. Similar exhibitions, such as the University of Pennsylvania Museum’s “In the Artifact Lab: Conserving Egyptian Mummies,” create environments in which patrons can actually view restorations through a glass-enclosed conservation lab. The Ghent Altarpiece cleaning is also on display for public viewing. While it would seem that a certain degree of transparency is implicit in such demonstrations, thereby creating a sense of accountability, the effect is rather to heroicize art conservation and its practitioners. Antonio Carneo’s “The Death of Rachel” undergoing conservation treatment at the National Gallery of Canada, Ottawa, for the current exhibition “Restoration and Revelation” at UT Austin. Courtesy: UT Austin. Perhaps a fairer and more balanced approach to the many issues concerning the conservation of paintings, particularly those that have suffered severe deterioration, would produce an honest examination of the field overall. As James Beck and Michael Daley state in their book, Art Restoration: The Culture, the Business, and the Scandal, “The ‘science of restoration’, like all science, is not a monolithic cure-all.” If a museum rejects this reasoning, then questions regarding the moral implications of extensive repainting, and the museum’s obligation to its patrons to present clear delineations between original and contemporary components of any work, are otherwise wholly ignored. Any knowledge gained from such an exhibition is therefore tempered by what has been lost – the opportunity to develop a more informed audience, and therefore, a more critical public opinion. “Restoration and Revelation: Conserving the Suida-Manning Collection” is scheduled to run through May 5, 2013.
https://www.artwatchinternational.org/2012/11/
Hello crafty friends! Welcome, I am so glad you stopped by today to join my i-crafter June blog. I have created this pieced hummingbird thank-you card to share with you today. It features the new June release, the Hummingbird Happiness die set, paired up with the Haley Alphabet dies and the Cherry Blossom Burst dies. I started off creating the hummingbird on the lower left corner from black and white printed papers. I decided it was too busy and I wanted to simplify the look. I used four colors to create the red and yellow hummingbird on the top left in the 1st column. This was the piecing pattern I needed to create the simplified black and white printed paper hummingbird with the red throat on the right column.
1 – I die cut 3 hummingbird bodies from black, grey, and yellow cardstock. I die cut the wing and body cluster from red cardstock.
2 – I took the black hummingbird outline and turned it over. I glued the red cardstock body and throat pieces into place.
3 – I cut the wing and tail feather pieces off of the grey hummingbird outline. I used the middle score lines in the wing as a guide to cut the grey cardstock wings apart, so I had a left wing and a right wing. I glued the smaller grey wing to the front of the black hummingbird outline.
4 – I cut the tail feather cluster from the yellow hummingbird outline and glued it slightly to the right on the back of the black hummingbird outline. You can see just a bit of the yellow showing off to the right.
5 – I did the same thing with the grey tail feathers and glued them just behind the yellow tail feathers so they were exposed just to the right, as shown in the photo.
Here you can see the red and yellow hummingbird in detail. I used this simple piecing pattern to create the black and white printed paper hummingbird used on my finished card.
A – I die cut the big Cherry Blossom Burst die out of white cardstock. I cut a single Cherry Blossom Burst from red cardstock and the stamen from yellow cardstock. I shaped the red flower and glued it together.
B – I left the white flower cluster untouched and added foam dots to the back of the blooms.
C – I attached the cluster to a 4″ x 5 1/4″ card base cut from kraft cardstock.
D – I glued the single red flower to the top area of the white flower cluster.
I added the hummingbird to the card with foam dots and positioned him with his beak in the red flower. I die cut the word “THANKS” using the Haley Alphabet dies. I cut two sets, one from the black cardstock and one from the black and white printed cardstock. I used the black cardstock for the letter outline and the printed paper for the inside of the letters. I attached it to the bottom of the card using glue and foam dots. These are the i-crafter products I used with this card: Thank you so much for stopping by today for my i-crafter blog post. I hope you enjoyed learning how to use these beautiful Hummingbird Happiness dies for the Humming by to Say Thanks card. I love hearing from you, so leave me a message telling me what your favorite i-crafter product is from the June release! Let me know if you have any questions! Visit the i-crafter website for more inspiration and to see new dies and stamps. Make sure to follow i-crafter on Instagram too. Be Creative ~ Stay Inspired Jenn Gross Follow me:
https://i-crafter.com/humming-by-to-say-thanks-i-crafter-june-release/
The war brought on a lot of changes all over the world: women gained the right to vote, and the Treaty of Versailles was signed, which declared Germany responsible for starting the war and ordered it to pay reparations. The twentieth century can be characterized as a major turning point in history due to the decisive decisions that were made all over the world. The quick decisions sometimes led to extensive regret, while others led to a glorious ending. Many of the decisions made during the twentieth century affected society then and still affect society now. North vs. South: Throughout the years, the United States endured many social, political and economic changes which affected the North and South in many different ways. Discussing these differences, we will notice that they caused a lot of controversy between the colonies that, at times, led to wars. The major political struggles during this period were focused primarily on states' rights. At a certain point, settlers began to come to the realization that they wanted to become their own country and not be tied to Great Britain. Once the idea began spreading, the British took action by imposing many different laws and taxes upon the colonies. The Civil War was the deadliest war for the United States. Over 600,000 soldiers were killed between the two sides. The Confederacy and the Union were the two sides fighting, mostly over slavery. Each side had different strategies for fighting, and therefore their military styles were different. Both sides used different weapons and had different fighting tactics. Many thought that joining the League of Nations would lead to war. The United States continued a policy of isolationism up until World War 2. In conclusion, World War 1 changed American society and foreign policy. American society changed in that women gained the right to vote and women gained more jobs. One thing that happened during the war was the Great Migration, when over 6 million African Americans moved north. Among them were slavery, states' rights, and political matters. These conflicting views on major issues created significant events in history that tore the country apart. Many events dividing the north and the south on numerous controversial issues led up to the south seceding and ultimately the Civil War. (HistoryNet) Before all of this commenced, tensions were on the rise between the north and the south over the spread of slavery, as America acquired more and more territories after the Mexican War. Drew Gilpin Faust's This Republic of Suffering: Death and the American Civil War is an intensive study that reflects on the impact the Civil War had on soldiers and civilians. Faust wanted to show that, as they dealt with and mourned over the overwhelming amount of carnage, the nation and the lives of the American people were already changed forever. Although there are many other publications relating to the Civil War, she is able to successfully reflect upon the morbid topic of death in the Civil War in a new and unique way. This book shows the war from a whole different perspective by focusing less on quantifying and stating the statistics of Civil War deaths. Rather, she examines more closely how the Civil War deaths transformed the "society, culture and politics," and the impact they had on the lives of Americans in the 19th century. 
Mass incarceration is the way that the United States has locked up millions of people over the last forty years using unnecessary and disproportionate policies. Contrary to popular belief, this is racially fueled, as most of these policies saw to it that Blacks and Latinos were locked up for longer than their white peers and for smaller crimes. These racist roots within the system can be traced back to when the first slave ship arrived in the US, but our first major prison boom was seen after the American Civil War. I know that the Civil War was far more than forty years ago. Given the many causes of the war, there is an understanding that people really fought for their freedom. Standing up for oneself at this time was very difficult because individuals had to make choices of either life or death. Many people had to die before others realized that people have the right to freedom. After the war it was time to make clear to the public that America was the place to have independence, to resolve problems and to try to gain peace with others. He guided his country through the most devastating experience in its national history, the ultimate strife arising from westward expansion: the Civil War. Lincoln's victory in that election thus changed the racial future of the United States. "The westward expansion of slavery was one of the most dynamic economic and social processes going on in this country" (Foner, E). Political deals such as the Missouri Compromise in 1820 and the Compromise of 1850, Supreme Court rulings, and the Dred Scott decision in 1857 divided the country drastically. These divisions went far beyond cotton and economics. Martin Luther King Jr. is known throughout the world for his leadership in the American Civil Rights movement. The Civil Rights movement of the 1950s and 1960s will always be remembered as an unstable period in American history. Racial tensions were at an all-time high, and our country, states, towns and families were torn over their views of racism. Racial barriers challenged black people everywhere, and the Jim Crow laws of the South denied millions of black people basic rights (Jenkins). During this time of civil unrest, numerous leaders emerged, but by far the most notable was Martin Luther King, Jr. Dr. King was born to Reverend Martin Luther King and Alberta Williams King. World War One led to many changes in the U.S. and the world itself, but what effects did it have on the domestic issues of America, such as segregation, the unjust treatment of African Americans, and women's suffrage? While greatly affecting domestic issues, World War One led to large changes in demographics because of the migration of African Americans from the southern states, driven by oppressive laws and racial prejudice, to the northern states. It also changed the roles of African Americans and women in society and led to women's right to vote. Being a time of such large impact, one might never think of what was happening here in America during World War One, but in reality it was a time of much change in America. African Americans' roles were beginning to change in society because during World War One, between 1914 and 1920, roughly 500,000 black southerners packed their bags and headed to the North, fundamentally transforming the social, cultural, and political landscape of cities such as Chicago, New York, Cleveland, Pittsburgh, and Detroit. 
The migration was due to mob violence and racial prejudice, and also because farming was growing difficult with a boll weevil infestation that was killing the cotton crop throughout the south; the effects were greater than anticipated. While the American Revolution was long and full of suffering, it affected each of the following groups differently (Schultz, K., 2013). While the war killed as many as 25,000, other deaths were caused by disease and the smallpox epidemic; the total number of deaths during this time was around 70,000. The colonists were divided between those who were loyal to the British crown, the rebels who rebelled against the crown, and those who were indifferent to either side, which included many of the individuals living in the colonies (Pettinger, T., 2017). The war took the colonists away from their families and disrupted their daily lives for extended periods of time. The Mexican War was a big moment for the United States. Manifest Destiny gave America a feeling of power over land and a hype for expansion, which happened to coincide with the conflict between Mexico and Texas; this gave rise to a war that many people think we do not talk about enough. From the book I learned that it is always interesting to look for different points of view on an event. Civil War: The Battle of Jonesborough. Was the Civil War necessary? Were there really victories? Over the span of the Civil War there were thousands of casualties. Each battle added to these casualties and affected the outcome of the Civil War. As the Civil War continued, many of the decisions made by the leaders led to different events and affected the outcome of each battle and how the Civil War would end. The Battle/Siege of Vicksburg: The Battle of Vicksburg was one of the most crucial points in the Civil War. It helped eradicate the Rebels/Confederacy once and for all. The Civil War was fought for over 4 years, lasting from 1861 to 1865. It was one of the most horrific wars the world has ever known and witnessed. The Civil War was fought over the topic of slavery, the issues it presented, and the injustices of slavery.
https://www.ipl.org/essay/Civil-War-Immigration-Effects-F3EQARWBGXPV
Objective: to analyze and compare social and cultural aspects of the Renaissance and Middle Ages according to pages 205-208. Context: the Middle Ages is known as the period of time between the 5th and the 15th century. A comparison of the medieval and Renaissance eras: Christian churches commemorate the announcement of the incarnation of Luke; as shown in these examples, painting took a very secular turn in the Renaissance from the religious-based paintings that were found in the Middle Ages. Comparing Renaissance and Baroque art: Judith and Holofernes by Artemisia Gentileschi, created in 1620, oil on canvas, 1.99 x 1.63 m, found in the Galleria degli Uffizi, Florence (Benton, DiYanni). The Renaissance was a period of rebirth and transition in Europe. It began in Italy around the thirteenth century and spread gradually to the north and west across Europe for the next two centuries. It was a time of vast growth in learning and culture through contacts with the Arab world. Comparison and contrast of the Middle Ages and Renaissance, essay sample: this essay will compare and contrast the visual arts of the Middle Ages, called medieval art, with the arts of the Renaissance period by giving an overview of each period and illustrating how the collision between these two periods, and what influenced them, brought about the new… Comparison of the Renaissance and Enlightenment, essay sample: Renaissance means 'rebirth' or 'recovery', has its origins in Italy, and is associated with the rebirth of antiquity or Greco-Roman civilization. The most significant difference between medieval and Renaissance art is that Renaissance art paid more attention to the human body, and to detail. The difference between the Elizabethan age and the Renaissance is that while the Renaissance era is considered to be the transition from the Middle Ages to modern history in Europe, the Elizabethan… The Renaissance began to perceive a person as a microcosm: a human bore a small likeness to the vast cosmos, the macrocosm, in all its diversity. For Italian humanists, the underlying factor was the orientation of the human towards oneself. The mid-fourteenth century marked the beginning of a transition between the medieval and modern worlds; this transition is known as the Renaissance… Compare and contrast the medieval ages and Renaissance: the medieval ages and Renaissance were periods of distinct cultures and worldviews within the continent of Europe. Both the medieval ages and the Renaissance had the presence of a social organization and had artwork centered on religion. Comparison and contrast of Renaissance art to Neoclassicism art. Russian art & architecture: from icons and onion domes to Suprematism and Stalin Baroque, Russian art and architecture seems to many visitors to Russia to be a rather baffling array of exotic forms and alien sensibilities. During the Renaissance period, most of the musical activity shifted from the church to the courts, and composers were more open to experimentation; as a result, more composers used musical instruments in their compositions. The Renaissance was a cultural and intellectual movement that peaked during the 15th and 16th centuries, though most historians would agree that it really began in the 14th, with antecedents… 
The terms themselves provide the basic differences: the root word in renaissance is naissance, which is French for birth, so the Renaissance was a rebirth (in perception) of an earlier (Greek & Roman) classical style. The primary difference between the Reformation and the Renaissance was that the Reformation focused on a religious revolution, while the Renaissance focused on an intellectual revolution; the Reformation came about in order to correct what many felt were the faults of the Catholic Church, leading… The term Renaissance music refers to the music written and composed in the Renaissance era. The Renaissance was a great period in Europe where art, science, literature, music, intellect, and lifestyle underwent a rebirth. During the Renaissance the Roman Catholic Church was a dominant force and controlled most aspects of people's lives; other religions weren't accepted, but as the church gained power it became corrupt and people broke off from the church. A comparison of two paintings from the Renaissance period, introduction: this paper will compare the themes found in the paintings Madonna and Child with St John the Baptist and an Angel by Domenico di Bartolomeo Ubaldini (Puligo) and Madonna Enthroned by Giotto. Renaissance vs Middle Ages: contrasting the Renaissance and later Middle Ages, created in 1998 by Chaffey classes of '99, '00, & '01. Renaissance world view vs Enlightenment world view: both the Renaissance and the Enlightenment are significant points in world history, specifically in European history; both periods have distinctive characteristics but share the notion of being periods of discovery in many aspects of life and living in this world. Renaissance (French: rebirth): the period in European civilization immediately following the Middle Ages, conventionally held to have been characterized by a surge of interest in classical scholarship and values. Free coursework on a comparison of the medieval and Renaissance eras from essayuk.com, the UK essays company for essay, dissertation and coursework writing. The Northern artistic Renaissance focused more on empirical observation and accurately paying attention to details of visual reality; the Italian artistic Renaissance, however, accurately portrayed visual reality through proportion, perspective, and human anatomy.
http://nmassignmentqcwl.presidentialpolls.us/a-comparison-from-the-renaissance-to.html
COMPLETE AND INCOMPLETE COMBUSTION
COMPLETE COMBUSTION: In a combustion reaction, oxygen combines with another substance and releases energy in the form of heat and light. When oxygen is available in sufficient amounts, complete combustion occurs. This means that all of the carbon atoms and hydrogen atoms from the hydrocarbon molecules combine with oxygen atoms to form carbon dioxide and water. WHAT IS A HYDROCARBON? A hydrocarbon is a compound that is composed only of the elements carbon and hydrogen. The general equation for the complete combustion of a hydrocarbon is: CxHy + O2 → CO2 + H2O. Complete combustion is a more efficient process for generating heat, since the flame is mostly heat and little light.
INCOMPLETE COMBUSTION: When a reaction has too little oxygen, incomplete combustion is the result. A bright yellow flame is produced during incomplete combustion. In addition, soot and toxic carbon monoxide can also be formed through incomplete combustion. An example equation of the incomplete combustion of propane is: 2C3H8(g) + 7O2(g) → 2C(s) + 2CO(g) + 2CO2(g) + 8H2O(g). The products of incomplete combustion include carbon dioxide and water vapour as well as carbon, carbon monoxide, or both. Incomplete combustion is a less efficient process for generating heat: with less oxygen, more of the energy is released as light rather than heat, which is seen as a more yellow flame.
Carbon Monoxide: Why is the formation of carbon monoxide a serious concern? Carbon monoxide is a toxic gas that is both colourless and odourless. Carbon monoxide binds to haemoglobin in the blood, which decreases the number of oxygen binding sites available to a person. Symptoms of carbon monoxide poisoning include headache, dizziness, and nausea; eventually, carbon monoxide poisoning can result in suffocation. Proper ventilation and the use of carbon monoxide detectors in the home can help to prevent carbon monoxide poisoning.
Complete and Incomplete Combustion using Propane:
Propane + oxygen → carbon dioxide + water: C3H8(g) + 5O2 → 3CO2 + 4H2O
Propane + oxygen → carbon + carbon dioxide + water: C3H8(g) + 3½O2 → 1½C + 1½CO2 + 4H2O (oxygen is limited, so carbon is produced as a result)
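The complete-combustion coefficients follow directly from atom balancing: CxHy needs x O2 for the carbon and y/4 O2 for the hydrogen. A quick Python sketch (my addition, not part of the slides) that computes them:

```python
from fractions import Fraction

def complete_combustion(x: int, y: int) -> str:
    """Balance CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O for a hydrocarbon CxHy."""
    o2 = Fraction(4 * x + y, 4)   # one O2 per C, plus one O2 per four H
    h2o = Fraction(y, 2)
    return f"C{x}H{y} + {o2} O2 -> {x} CO2 + {h2o} H2O"

print(complete_combustion(3, 8))   # propane: C3H8 + 5 O2 -> 3 CO2 + 4 H2O
print(complete_combustion(4, 10))  # butane: C4H10 + 13/2 O2 -> 4 CO2 + 5 H2O
```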
http://slideplayer.com/slide/6086408/
The advent of the 20th century saw a revolution in the art world in which artists moved away from representational art, which aimed to depict a visual reality, and towards abstraction, which gave them independence from the visual world. In the strict sense, abstract art bears no trace of anything recognizable in the natural world, but the genre is not limited to such works, as numerous artists practiced varying degrees of abstraction. The Fauvism of Henri Matisse and the Cubism of Georges Braque and Pablo Picasso used partial abstraction. These movements were an important precursor to the abstract movements which ultimately dominated the art world in the 20th century. These included Orphism, Suprematism, Neoplasticism, Optical art and, most prominently, Abstract Expressionism. Learn more about the development of abstract art by studying the 10 most famous abstract artists, the movements they were related to, their contributions and their greatest works. #10 Robert Delaunay. Nationality: French. Lifespan: 12 April 1885 – 25 October 1941. The initial works of Robert Delaunay were Neo-Impressionist, but with time he moved towards abstraction. His 1912 work Simultaneous Windows was his last semi-figurative work before he began experimenting with complete non-objectivity. His most important contribution to abstract art was co-founding the Orphism art movement, an offshoot of Cubism which focused on pure abstraction and bright colors. The movement aimed to dispense with recognizable subject matter and thus played a key role in the development of abstract art. Delaunay's abstract paintings were based on the optical characteristics of brilliant colors, so dynamic that they would function as the form. As the leader and most famous figure of Orphism, Delaunay is ranked among the most influential abstract artists. Abstract Masterpiece: Rhythm, Joy of Life (1930). Other Famous Abstract Works: Hommage to Blériot (1914); Simultaneous Contrasts: Sun and Moon (1913). #9 Kazimir Malevich. Nationality: Russian. Lifespan: February 23, 1878 – May 15, 1935. Geometric abstraction is a form of abstract art based on the use of geometric forms. Kazimir Malevich was the founder of the art movement known as Suprematism, which focused on basic geometric forms, such as circles, squares, lines, and rectangles, and the use of a limited range of colors. He is thus a pioneer of geometric abstract art. His 1915 Suprematist painting Black Square is one of the most famous and influential works in the history of abstract art. Malevich was also an art theoretician and wrote the book The World as Non-Objectivity, which outlined his Suprematist theories. He was a key figure in the development of total abstraction and in reducing a painting to its geometric essence. Abstract Masterpiece: Black Square (1915). Other Famous Abstract Works: White on White (1918); Suprematist Composition (1916). #8 Willem de Kooning. Nationality: Dutch-American. Lifespan: April 24, 1904 – March 19, 1997. Abstract Expressionism was a post-World War II art movement which was the first specifically American movement to gain international prominence and one of the most influential movements in abstract art. It incorporated a variety of styles and emphasized conveying strong emotional or expressive content through abstraction. Willem de Kooning was one of the most prominent Abstract Expressionists, who specialized in distorting figure painting to the level of abstraction and blending various styles to create impressive canvases. 
He also created sculptures late in his career. De Kooning's paintings have regularly sold for record prices. His Woman III was sold for $137.5 million in 2006, the second highest price at the time, and in 2015 his Interchange was sold for $300 million, which remains the highest price paid for a painting as of July 2017. Abstract Masterpiece: Woman I (1952). Other Famous Abstract Works: Interchange (1955); Woman III (1953). #7 Victor Vasarely. Nationality: Hungarian–French. Lifespan: April 9, 1906 – March 15, 1997. Op art, short for optical art, is a genre of abstract art in which the artist creates an optical illusion through precise manipulation of patterns, shapes and colors. It usually consists of non-representational geometric shapes, most often creating an illusion of movement. After initially working as a graphic designer and a poster artist, Victor Vasarely eventually became one of the founders and the most famous figure of the Op art movement, among the most influential movements in abstract art. Op art not only influenced the art world but also spread to other areas including architecture, computer-aided design, animation and fashion. Vasarely's painting Zebra (1937) is considered one of the earliest examples of Op art. He went on to create some of the movement's most renowned works in both painting and sculpture. Victor Vasarely is known as the "Father of Op Art". Abstract Masterpiece: Zebra (1937). Other Famous Abstract Works: Vega-Nor; Vonal Stri. #6 Alexander Calder. Nationality: American. Lifespan: July 22, 1898 – November 11, 1976. One of the most influential sculptors of the twentieth century and perhaps the most acclaimed abstract sculptor, Alexander Calder is famous for his invention of the mobile, an abstract sculpture that moves in response to touch or air currents by taking advantage of the principle of equilibrium. In addition to mobiles, Calder made static sculptures called stabiles, wire sculptures, toys, theatrical sets, paintings in oil and gouache, and even jewelry and numerous household objects. Calder also created monumental sculptures including .125 for JFK Airport in New York City in 1957, Spirale for UNESCO in Paris the following year, and his largest sculpture, El Sol Rojo, in 1968 outside the Aztec Stadium for the Mexico City Summer Olympic Games. Two months after his death in November 1976, Alexander Calder was awarded the Presidential Medal of Freedom, the highest civilian honor in the United States. Abstract Masterpiece: Flamingo (1974). Other Famous Abstract Works: Lobster Trap and Fish Tail (1939); Arc of Petals (1941). #5 Mark Rothko. Nationality: Russian-American. Lifespan: September 25, 1903 – February 25, 1970. Mark Rothko, born Markus Yakovlevich Rothkowitz, belonged to a Russian Jewish family which immigrated to the United States when he was a child. He moved through a number of styles in his artistic career, including Surrealism, before he developed his own signature style. Rothko is considered a pioneer of Color Field painting, a style within Abstract Expressionism in which color is the main subject itself. Though Rothko is regarded as one of the leading abstract artists, he insisted that he was not an abstractionist, as his primary focus was discovering the mysticism and esoteric aspects of colors and their combinations. Painting was a method of spiritual expression for Rothko, and many viewers have broken down in tears in front of his works. Despite his statement, the contribution of Rothko to Abstract Expressionism is monumental. 
Abstract Masterpiece: Orange, Red, Yellow (1961). Other Famous Abstract Works: Untitled (Black on Grey) (1970); No. 10 (1950). #4 Georgia O'Keeffe. Nationality: American. Lifespan: November 15, 1887 – March 6, 1986. American Modernism was an artistic and cultural movement which peaked between the two World Wars. It was marked by a deliberate departure from tradition and the use of innovative forms of expression. Georgia O'Keeffe became the leading figure in American Modernism by challenging the boundaries of artistic style with her paintings, which combined abstraction and representation. She is most famous for her dramatically large, sensual close-ups of flowers, which essentially made them into abstract works. Georgia O'Keeffe is not only the most famous female abstract artist but also one of the most influential figures of 20th century art. She was awarded the Presidential Medal of Freedom in 1977. Abstract Masterpiece: Black Iris III (1926). Other Famous Abstract Works: Red Canna (1924); Blue and Green Music (1921). #3 Piet Mondrian. Nationality: Dutch. Lifespan: March 7, 1872 – February 1, 1944. Piet Mondrian began as a conventional artist, and experimented with Luminism and Cubism, before becoming the most influential contributor to the De Stijl art movement, which advocated pure abstraction through a reduction to the essentials of form and color. He coined the term neoplasticism for his abstract art, in which he used only the straight line, the three primary colors, and the neutrals of black, white and gray. Mondrian is considered an important leader in the development of abstract art. His work inspired two influential movements: the German Bauhaus movement, which focused on simplified lines and color theory, and New York's Minimalism, which was based on geometric forms and a narrow color palette. Abstract Masterpiece: Broadway Boogie Woogie (1942–43). Other Famous Abstract Works: Composition with Red, Yellow, and Blue (1937–42); Composition II in Red, Blue, and Yellow (1930). #2 Jackson Pollock. Nationality: American. Lifespan: January 28, 1912 – August 11, 1956. Drip painting is a form of abstract art in which paint is dripped or poured onto the canvas, rather than being carefully applied. Jackson Pollock is the most famous practitioner of drip painting, to the extent that he was dubbed "Jack the Dripper" by TIME magazine. Pollock's technique of pouring and dripping paint popularized the term action painting, a method in which the physical act of painting itself is an essential aspect of the finished work. His most famous works include Blue Poles, which was purchased by the National Gallery of Australia in 1973 for A$1.3 million, a then world record for a contemporary American painting; and No. 5, 1948, which set the world record for the highest price paid for a painting when it was sold for $140 million. Jackson Pollock is not only the most famous Abstract Expressionist artist but also one of the leading figures of 20th century art. Abstract Masterpiece: Number 5, 1948. Other Famous Abstract Works: Number 11, 1952 (Blue Poles); One: Number 31, 1950. #1 Wassily Kandinsky. Nationality: Russian. Lifespan: December 16, 1866 – December 13, 1944. Initially a teacher of law and economics, Wassily Kandinsky gave up his promising career to pursue his interest in art. He rose to prominence in the 1910s to become one of the leading figures in modern art. Kandinsky is a pioneer of abstract art, and he painted some of the earliest works in the genre, including what is known as the First Abstract Watercolour. 
Music, being abstract in nature, was an inherent part of his art, and he named some of his spontaneous works "improvisations" and his more elaborate ones "compositions". Apart from being a painter, Kandinsky was also a prominent art theorist whose books had an enormous and profound influence on future artists. For his tremendous contribution in moving the art world away from representational traditions and towards abstraction, Wassily Kandinsky is considered by many to be the "Father of Abstract Art".
https://learnodo-newtonic.com/famous-abstract-artists
CROSS REFERENCE TO RELATED APPLICATION
This application claims the benefit under 35 USC 119(e) of U.S. provisional application No. 62/144,350, filed Apr. 8, 2015, the contents of which are herein incorporated by reference.
FIELD OF THE INVENTION
This invention relates to the field of precision timing, and more particularly to a low latency digital clock fault detector, for example for use in digital communications.
BACKGROUND OF THE INVENTION
Most integrated circuits (ICs) require one or more periodic toggling signals, known as clocks, to function. In the design of reliable, available, and serviceable (RAS) systems, clock fault detection is important for assessing system health and for triggering automatic corrective action, such as selecting a redundant clock source or transferring control to backup equipment.
Many circuits have been used for clock fault detection. One prior art example uses a delay line and flip-flop as shown in FIG. 1. This design, which comprises a delay line 10, multiplexer 12, pair of flip-flops 14a, 14b, inverter 16, and OR gate 18, has a low latency and does not require another clock. However, it suffers from several disadvantages. The delay line position selected by the multiplexer must be tuned for the clock frequency, which may not be known a priori. Variations in the delay line over process and temperature may require a calibration scheme. If a range of frequencies is required, the delay line requires a large number of taps. A multi-tap delay line primitive may not be available in all digital design libraries, and building a delay line from individual buffer or delay cells can make timing difficult to control.
SUMMARY OF THE INVENTION
Embodiments of the invention provide a digital circuit that continuously monitors activity on a clock using another known working reference clock, and reports when the monitored clock fails by asserting a status signal. The digital circuit may operate with a low latency, allowing any corrective action to be taken more quickly in the event of a fault.
According to the present invention there is provided a low latency digital clock fault detector, comprising an edge detector including a delay line for generating pulses on edges of an incoming clock signal, the width of said pulses being determined by the length of said delay line; and a watchdog timer comprising flip-flops in a pipeline configuration, said watchdog timer having a first input held at a static logic level, a second input receiving a reference clock, and a third reset input, said watchdog timer being responsive to said pulses to maintain a stable output in the presence of said pulses and generate a fault indication in the absence of said pulses.
A digital circuit in accordance with the invention offers various advantages. It can detect a clock failure with very low latency and low latency variation. One embodiment has an efficient hardware implementation, and is built entirely from standard digital logic primitives. The nominal frequencies of the monitored clock and reference clock may differ by a large amount, which is tunable by adjusting the circuit. Because the circuit triggers a fault based on the ratio between the monitored clock and reference clock, it can accept a wide range of input frequencies without any configuration. It also does not require a high frequency monitoring clock. 
Because of these advantages, the digital circuit is particularly well-suited for cross-monitoring in master clock redundancy applications with multiple clock sources of varying but equal nominal frequencies.
In accordance with another aspect of the invention there is provided a method of detecting faults in a clock signal, comprising generating pulses of predetermined width on edges of an incoming clock signal; and monitoring said pulses with a watchdog timer that maintains a stable output in the presence of said pulses and generates a fault indication in the absence of said pulses.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
A digital fault detector with a single alternate edge clocking scheme in accordance with one embodiment of the invention is shown in FIG. 2.
An input clock to be monitored, mclk 101, is first fed to an edge detector circuit 20 comprising a delay line 102 and an XOR gate 103 in a feed-forward configuration. An XNOR gate may be used in place of the XOR gate, if the resulting polarity is more convenient for the downstream logic.
The edge detector circuit 20 acts as a clock doubler. Every rising or falling edge on mclk 101 generates a pulse on the edge detector output 104. The width of this pulse is determined by the latency of the delay line 102, which should be constrained to be greater than the asynchronous reset time of the downstream flip-flops 107, 108, but less than the smallest of the expected times that mclk is high or low. Typically, a fixed structure can be found, e.g. a few buffers, that will satisfy both of these conditions over all operating conditions. If desired, however, the delay line can be made configurable with multiple taps and a multiplexer.
The edge detector output 104 is used as an asynchronous reset for a watchdog timer 22. A watchdog timer is an electronic timer that is used to detect and recover from malfunctions. During normal operation, the monitored circuit regularly restarts the watchdog timer to prevent it from timing out. If, due to a hardware fault or program error, the monitored circuit fails to restart the watchdog timer, the timer will elapse and generate a timeout signal. The timeout signal can be used to initiate corrective action or actions.
In this case the watchdog timer 22 comprises two or more alternating edge flip-flops 107, 108 in a pipeline configuration with a static logic 1 input and clocked by a reference clock signal rclk 105. As long as mclk is running, pulses are produced by the edge detector 20, the flip-flops 107, 108 with outputs f1, f0 will be repeatedly reset, and the output 109 of the watchdog timer 22 will remain at logic 0. If mclk fails, the pulses will stop, and a logic 1 will propagate to the watchdog timer output 109, reporting a fault to the synchronizer 24. This is the point of no return: once the logic level 1 has propagated to the synchronizer 24, the fault detector will output a logic value of 1 on the fault line even if the monitored clock mclk suddenly recovers at this point. The length of the pipeline can be adjusted based on the relative frequency between mclk and rclk and the tolerance for declaring a fault. If rclk is much faster than mclk, more than two flip-flops will be required in the chain. If desired, the pipeline length can be made configurable using a multiplexer.
The output 109 of the watchdog timer 22 is fed to a synchronizer 24 comprising two back-to-back alternating edge flip-flops 110, 111. The output of flip-flop 110 is shown as s1. 
The synchronizer 24 ensures a synchronous timing relationship to any downstream digital logic running on the same clock. It also acts as a metastability trap to reduce the probability that metastability effects will propagate into downstream logic, and additionally enforces a minimum pulse width of one clock period on its output. If desired, the synchronizer could be clocked by a different internal clock. If the downstream logic is asynchronous and uses a latch structure tolerant of a potentially metastable signal, the synchronizer could be foregone completely, further reducing fault detection latency.
A reset input 113 is provided to ensure the initial condition of the circuit does not report a fault. The reset 113 also serves to disqualify a fault in the case that rclk itself is known to have failed, as detected by an equivalent circuit, thus preventing a deadlock situation. The reset input 113 is combined with the edge detector output 104 using an OR gate 114 to asynchronously reset the flip-flops in the watchdog timer 22. The flip-flops 110, 111 in the synchronizer 24 use the reset input 113 directly.
An alternative embodiment shown in FIG. 3 provides a further enhancement to the circuit presented in FIG. 2. The single pipeline of alternating edge flip-flops from FIG. 2 in the watchdog timer 22 and synchronizer 24 has been replaced with two complementary pairs of alternating edge flip-flops 201, 204 and 205, 209, forming a dual alternating edge clocking scheme. The outputs of the two pipelines are combined with an OR gate. Alternatively, a single pipeline could be built from true dual edge clocked flip-flops, if those are available. Clocking on both edges serves to reduce the latency of the clock fault detector, at the cost of additional hardware complexity.
The operation of the fault detectors is illustrated in the timing diagrams of FIGS. 4a to 4e. FIG. 4a shows the external signals mclk and rclk and the edge signal det generated by the edge detector 20. It is assumed that the monitored clock fails at the point in time marked by the vertical line labelled failure.
In the case of single alternate edge clocking as shown in FIG. 2 and FIGS. 4b, 4c, when the monitored clock mclk stops running and is stuck low at the failure point, the reference clock rclk is aligned such that the de-assertion of the edge detection pulse det arrives within the reset removal time of the first flip-flop 107 in the watchdog timer. At this point, it is indeterminate whether the first flip-flop (output f1) will clock in the logic 1 or will remain in reset. The levels of the signals f1, f0, s1, s0 are shown in FIGS. 4b and 4c for the earliest and latest possible detection scenarios.
With dual alternating edge clocking as shown in FIG. 3 and FIGS. 4d, 4e, both pipelines sample on opposite edges, and thus the latest detection case is only one half clock cycle later than the earliest detection. In both cases, the greyed areas represent the period during which the fault detector can output a fault. The minimum latency is represented by the start of this period, and the maximum latency by the end of this period. Fault detection latency is measured as the time between the first missing clock edge and the time where the synchronized fault signal is asserted, and is shown for the earliest and latest possible points of detection.
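The earliest and latest detection points described above can be explored numerically with the behavioral sketch given earlier. Sweeping the failure instant across a range of phase offsets shows how detection latency (fault time minus first missing edge) varies with the alignment between mclk's failure and rclk; the specific numbers below are artifacts of the sketch's assumed parameters, not of the real circuit.

import math

# Sweep the failure instant across different phase offsets and record the
# latency from the first missing mclk edge to fault assertion. Illustrative
# only; this inherits every assumption of the sketch above.
mclk_period = 8.0
latencies = []
for offset in (0.0, 1.0, 2.5, 4.0, 5.5, 7.0, 8.5):
    fail_time = 100.0 + offset
    fault_at = run_detector(fail_time=fail_time)
    # First missing edge: the first multiple of half the mclk period
    # at or after the failure instant.
    half = mclk_period / 2.0
    first_missing = math.ceil(fail_time / half) * half
    latencies.append(fault_at - first_missing)
print(f"latency range: {min(latencies):.1f} to {max(latencies):.1f}")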
Compared with single alternating edge clocking as implemented in the embodiment of FIG. 2, dual alternating edge clocking as implemented in the embodiment of FIG. 3 has a lower average latency, lower maximum latency, and lower latency variation. Latency variation is particularly important in applications where the outage resulting from a clock failure needs to be precisely compensated for. The minimum latency, and thus the safety margin for declaring false alarms, remains the same for both schemes.
It will be understood that the flip-flops described herein are D-type flip-flops.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. For example, a processor may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. The functional blocks or modules illustrated herein may in practice be implemented in hardware or software running on a suitable processor. It will also be appreciated that the expression “circuit” covers both software and hardware implementations, for example making use of primitives.
BRIEF DESCRIPTION OF THE DRAWINGS
This invention will now be described in more detail, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 is a prior art digital fault detector based on a delay line and flip-flops;
FIG. 2 is a schematic diagram of a clock fault detector with single alternating edge clocking in accordance with one embodiment of the invention;
FIG. 3 is a schematic diagram of a clock fault detector with dual alternating edge clocking in accordance with another embodiment of the invention; and
FIGS. 4a to 4e are timing diagrams for the described clock fault detectors.
Employee Experience Survey Instructions
Welcome! Whether you’re an organization returning to the survey or brand new to the process, we’re excited to have you be a part of this journey! Here you’ll find step-by-step instructions to help you and your organization through the survey process from start to finish. These survey resources include videos, templates, and answers to some frequently asked questions for all phases of the survey process.
Information for each stage:
Before the Survey Launches (Before March 21)
- Registration instructions
- HRIS instructions
- Checklists for Survey Liaisons and CEOs
While the Survey is Open (April 25–May 12)
- How to check your participation rates
- Encouraging staff participation
After the Survey Closes (After May 12)
- How to thank staff
- How to view and interpret survey results
- Signing up for a consultation
- Sharing results with staff
Important Dates
- Registration Form and HRIS Due — Tue, March 21, 2023
- Passover — April 5–13, 2023
- Survey Opens — Tue, April 25, 2023
- Survey Closes — Fri, May 12, 2023
- Access to Your Results — June 2023
- Post-Survey Consultations — Summer and Fall 2023
FAQs
What types of questions are on the survey?
The survey consists mostly of Likert (rating scale) questions on a five-point scale ranging from strongly agree to strongly disagree. Participants can share verbatim comments for any question. Other questions include optional demographic questions and open-ended questions. The survey covers topics like employee engagement, leadership, collaboration, and communication. This year’s survey questions are still being finalized. Here are last year’s questions for reference.
Why did you add, delete, or change some questions?
Workplace culture and the overall work experience have changed a lot over the past 3 years, so we’ve made some adjustments to the questions included in the survey. Most questions (about 80%) remain the same year over year and will still be comparable to 2022 or to prior years.
Can we take the survey earlier or later than April 25–May 12, 2023?
The Employee Experience Survey is administered only between April 25 and May 12, 2023. We can’t administer it outside this timeline. If this timing isn’t right for your organization, you may find Pulse Surveys a useful alternative. You can administer a Pulse Survey at any time outside the Employee Experience Survey timeline, and the questions are customizable for your needs.
Is the survey confidential?
Leading Edge guarantees the confidentiality of all organizations and employees. We are committed to ensuring no individual organization’s data is ever made public. Any data that is shared will be in aggregate and will adhere to strict confidentiality thresholds. To ensure that staff feel comfortable being as honest as possible, all answers and comments that employees provide in the survey are 100% CONFIDENTIAL. At no point will your organization have access to individual responses or know who has responded to the survey. Leading Edge will protect this information through a set of safeguards and security protocols, and results will be reported to organizational leaders as part of the organization's overall data. Leaders will be able to view and group data in different ways, but at no point will organizations have access to responses associated with any individual.
What will Leading Edge do with the data overall?
The primary use of your data is to deliver a customized report to you about your own organization’s employee experience.
Secondarily, Leading Edge shares aggregate results with the broader community to highlight trends and patterns in our sector in an annual report. These reports include only the combined data from all of the organizations that have taken the survey. We will never share specific data from your organization with anyone or include your organization's name in our reports without explicit permission. We may ask your permission to share examples of successful action plans that you implemented in response to your survey data.
Our CEO / Survey Liaison changed. What should we do?
Please log in to Formsite to edit the contacts on your registration form. You can edit this form even if you’ve already submitted it.
What is the CEO Survey and should our CEO take it?
The CEO Survey is designed for the most senior operational professional at your organization. This survey asks questions that are more relevant to the work they do than the questions on the Employee Experience Survey. CEOs of organizations with 10 or more employees will receive the CEO Survey, and their employees will all receive the Employee Experience Survey. The surveys are administered at the same time even though the questions are different. CEOs should only take the CEO Survey.
If you have more questions that you don’t see on this page, please reach out to us at [email protected].
What Are the Expectations of the CEO and Survey Liaison?
The Survey Liaison is the primary contact person for Leading Edge throughout the survey process. We will be in touch with Liaisons often between now and June to make sure they stay on track in providing us and your staff with the information needed for a successful survey. We expect CEOs (CEO refers to the most senior professional) to be committed to a confidential, thoughtful survey experience, to have the intention of finding strengths and growth opportunities in their results, to share those results with staff, and to take action on results.
Expectations
Here is a breakdown of Survey Liaison and CEO expectations from start to finish in the survey process:
Survey Liaisons
Due dates & time commitment:
▢ Attend an HRIS Info Session (90 minutes)
▢ Register by Completing the Registration Form & HRIS: March 21 (1 to 5 hours, varies based on organization’s size and demographics)
▢ Share Details About the Survey with Staff: March 27 (30 minutes)
▢ Complete the Test Survey for Survey Liaisons: March 28 (10 minutes)
▢ Remind Staff to Participate in the Survey: April 20 (15 minutes)
▢ Thank Staff for Sharing Their Feedback: May 15 (15 minutes)
▢ Review Survey Results: June 20 (1 to 5 hours, depending on organization’s size)
Download Survey Liaison & CEO checklists as PDF
CEOs
Due dates & time commitment:
May 25, 2007 · Therapist ratings of the therapeutic relationship (TR), but not homework compliance (HC), predicted CBT outcome at posttreatment (n = 138) and at 1-year follow-up (n = 121) for anxious children (aged 9 to 13 years). You can use CBT self-help worksheets to discover underlying thoughts and thought patterns. The self-practice worksheets are essential because of the way CBT works: it is based on the theory that our behaviors result from thought processes, and that to change our feelings and behavior, we must first change our thinking patterns. Treatment adherence has posed a substantial challenge not only for patients but also for the health professions for many decades. CBT teaches a person how to slow down. Homework is a well-established yet extremely under-emphasized aspect of the rational-emotive/cognitive behavioral orientation, whose therapies include cognitive therapy, rational behavior therapy, dialectic behavior therapy, rational living therapy, and rational emotive behavior therapy. Homework is a central feature of cognitive-behavioral therapy (CBT), given its educational emphasis. Students asking about the importance of homework in CBT should understand that inaccurate thoughts can reinforce negative emotions and thought patterns; psychological homework, such as cognitive restructuring exercises, helps clients work more directly on their problems and accelerates their progress.
Reviewed by Michael Pitts. Automation and Utopia: Human Flourishing in a World without Work is crafted as a response to fears over an automated future in which humans are made obsolete by technological developments. Written by John Danaher, senior lecturer of law at the National University of Ireland, Galway, the text consists of two main sections, which cover automation and the possibility of a utopian future, respectively. After outlining the scope and purpose of his research, in the first chapter Danaher forecasts the obsolescence of humankind in an automated world. But this is not as catastrophic as it may sound since, for Danaher, “Obsolescence is the process or condition of being no longer useful or used; it is not a state of nonexistence or death” (2). In the rest of the automation section, Danaher responds to two propositions: that automation in the workplace is both possible and desirable, and that automation outside of the workplace is potentially dangerous and its threats must therefore be mitigated. After making his case for why automation should be conditionally embraced, in the second section Danaher turns to two possible, ‘improved’ societies with automation fundamental to their economies, the cyborg and virtual utopias. While the cyborg utopia enables humankind to remain valuable members of the economy, occupying the cognitive niche that has historically provided an initial evolutionary advantage to the species, Danaher posits that such a future will likely maintain the degradations of employment, enhance our dependency upon machines, and disrupt humanist values while, due to the technological advancements it requires, ensuring no worthy improvements to human wellbeing in the near future. Following up this analysis of the cyborg polity, Automation and Utopia concludes with a presentation of what Danaher views as the ideal, improved society, the virtual utopia. This improved society, in which humankind ventures into the virtual world to enhance its flourishing, is presented by Danaher as an ideal goal towards which humankind may aim since, as the author posits, it will ensure human agency, pluralism, stability, a myriad of alternative utopias, and a meaningful connection to the non-virtual, real world. Pivotal to Danaher’s assessment of automation, and a possibly utopian future, are his views on labor and the avenue he identifies as optimal for human flourishing, the virtual utopia. For the purposes of his argument, he adopts a definition of work which he acknowledges as unusual and likely controversial, since it excludes “most domestic work (cleaning, cooking, childcare)” as well as “things like subsistence farming or slavery” (29). Defining work as “any activity (physical, cognitive, emotional etc.) performed in exchange for an economic reward, or in the ultimate hope of receiving an economic reward,” Danaher builds the case that obsolescence is almost certain and could result in as low as 10% or as high as 40% of the future population remaining employed (28). Such a development is framed as a positive result since work, he emphasizes, has a negative effect upon employees and improving it in the current economic milieu is, according to him, a more difficult route to take than shifting towards a virtual utopia. 
Specifically, Danaher argues that improving work, which often involves fissuring, precarity, colonization, classic collective action, domination, and distributive injustice is unlikely in our current system since it “would require reform of the basic rules of capitalism, some suppression or ban of widely used technologies as well as reform of the legal and social norms that apply to work” (83). Though this dismissal of the possibility of improving working conditions is short-sighted and ignores the likelihood that labor organizing will prove necessary as technological advances continue, this weakness of the text stands on its periphery. More important to Danaher’s vision of the future is his adoption of an approach that is interestingly more radical than such efforts to protect workers: the introduction of a universal basic income and the normalization of technological unemployment in current economic systems. Danaher envisions this radically different distribution of economic power as a salient feature unique to the virtual utopia. Danaher rejects the cyborg utopia, believing it will threaten the prospect of universal basic income and technological unemployment and ensure the continuation of work and the injustices endemic to capitalistic systems. In considering the virtual utopia, Danaher’s audience must consider the ethics and consequences of a nation in which utopian games and escape become a salient feature of its culture. This ideal society is marked by its focus upon virtual worlds as the mechanism by which human flourishing may take place. By venturing into simulations that are shaped to satisfy the desires and needs of individual users, it avoids the problems of a single utopian ideal that must be enforced upon all citizens. It can therefore, as Danaher explains, “allow for the highest expressions of human agency, virtue, and talent… and the most stable and pluralistic understanding of the ideal society” (270). Yet as with the cyborg utopia, the virtual utopia is plagued with ethical complications. The question of what actions are permissible in such a simulated environment is closely related to the ethical considerations surrounding cyborgs and artificial intelligence. In very briefly confronting this topic, Danaher asserts that the same moral constraints that shape human interactions in daily life will impact those occupying the virtual world. He supports this argument by pointing out that some of the characters inhabiting the simulation will be operated by human players and that interactions with such players will have ethical dimensions. In addition, he asserts that other actions may be deemed intrinsically immoral even without a corresponding ‘real-life’ consequence. Danaher asserts that, though there will be some moral frameworks unique to the virtual utopia, there will be no major alteration to human ethics. The virtual utopia, he claims, is therefore a reasonable goal for the post-work society since it enables human flourishing and protects values such as individualism and humanism. Danaher is also keen to emphasize that “the distinction between the virtual and the real is fluid” (229). He rejects the “stereotypical” science fictional view of virtual reality, as something that is only produced within immersive technological simulations, like the Matrix or Star Trek’s Holodeck. On the other hand, he also rejects the “counterintuitive” view that everything humans experience is virtual reality in that our reality is constructed through language and culture. 
Instead, Danaher offers a middle position. Some things may be more virtual than others, but nothing is wholly virtual or wholly real. He sees virtual utopia as being filled with emotionally and morally meaningful interactions, but in the context of relatively inconsequential stakes (rather than survival, or struggle for hegemony). A Holodeck-style simulation is only one of many ways this could be accomplished. Automation and Utopia delves significantly into the topic of possible futures at the intersection of ethics, technology, and humanism. It is a valuable resource for scholars, students, and laypeople engaged with conversations surrounding the advancement of automation in the 21st-century, its impact upon economics and workers, and optimal approaches to accommodating such new technologies through the advent of a post-work society. The work continues discussions at the intersection of technology and labor, but necessitates broader considerations related to the virtual utopia Danaher proposes. Namely, it does not convincingly explain how virtual utopia will avoid the ethical pitfalls outlined in relation to the cyborg utopia. It also does not thoroughly discuss how such simulations may be safeguarded from economic exploitation at the hands of those owning or operating these systems, or address the potential for intersectional inequalities. Finally, Danaher does not comprehensively discuss how such escapism and the further minimization of human interaction in the natural world may impact climate and the environment. Though it is difficult to accurately predict, estimations of both the ecological and psychological effects of a society in which the main mechanism of human interaction is not within nature but instead within a virtual world are vital to identifying optimal utopian aims. Overall, Automation and Utopia productively dives into the topics of technological advancement and labor policy, proposes thought-provoking socioeconomic policies related to the challenges of automation, and necessitates further discussions concerning ‘the ideal society,’ its connection to technology, and the impact it may have upon human psychology and the environment.
Marginalisation is an issue which reaches across much of modern society: individuals and groups are intentionally or unintentionally excluded from discussions or participation for any number of reasons. In workplaces it is not uncommon for individuals to feel marginalised and excluded from discussions in their offices and social settings, whether they are co-located or part of a virtual or distributed team. Despite all of the efforts of an organisation to ensure an inclusive working environment, employees can still be marginalised due to rosters, vacations, or specific task allocations that separate them physically or intellectually from their colleagues. Within a virtual environment, where individuals and groups are working remotely from each other, the risks of marginalisation are even greater, simply due to the physical distance.
The impact of actual or perceived marginalisation on the effectiveness of a virtual team can be quite hard to gauge, but it can have a deep impact on the outcomes of the work being undertaken. At its simplest level, marginalisation can lead to groups going off in different directions with their work, all genuinely believing they are doing what is needed for the overall success of their project. In reality their efforts may be wasted, and in being misaligned through marginalisation, their misdirected efforts will cost the organisation both time and money to correct. At its most extreme, marginalised workers may start to actively undermine the endeavours of their employing organisation as a form of rebellion against the perception of being marginalised. They feel excluded from the main group on the project, and every little rumour and piece of news serves to feed their frustration. In these extreme situations the project can suffer severe loss of efficiency, costs may start to get out of control, and schedules are extremely hard to maintain.
Avoiding marginalisation in your teams
Actively working to treat all individuals and groups on your project equally, regardless of their geographic location, is absolutely key to avoiding the perception of marginalisation. While this sounds like an obvious statement, the reality of implementing it can be challenging, since the leadership of the project will be more visible to those they are co-located with, leading to the potential of communicating more openly and freely with their local personnel and on occasion forgetting those located remotely. To overcome this, managers need to actively consider how the things they say and do in one location will be heard and understood in the other locations, and to take positive steps to visit and spend time with the other locations on as frequent and fair a schedule as possible.
This requirement to travel then introduces other issues to projects: issues of travel cost and the time of the project leaders. Travel budgets are often restricted and, if not properly considered and estimated at the outset, can leave the leadership unable to make the number of trips needed to maintain relationships with their personnel. Likewise, the time available for most managers to spend travelling to, and spending time in, each of their project locations can be limited, and can also place stresses on their personal and working lives. These pressures must be managed carefully to avoid the managers themselves becoming victims of the pressures of their project; travel can be shared amongst executive leadership groups to allow a balanced workload.
However, the travel needs, both budgetary and time away from home base, need to be considered and appropriately budgeted through all stages of a project.
Share your experiences
Have you thoughts on the impact of marginalisation in a virtual team environment you would like to share? If so, we would love to hear from you.
How can we help?
Ulfire specialises in supporting organisations plan, establish and run high performing virtual teams. We combine extensive practical experience from decades of involvement in virtual teams with current, real world, academic research into the way members of virtual teams collaborate. Please contact us to discuss ways we can help your business.
This summer, the CHEOPS satellite breezed through thermal tests in France and vibration tests in Switzerland, demonstrating that it is ready to operate in the extreme cold of space and also fit to withstand the mechanical stresses of launch. Keeping the CHEOPS telescope and detector at a stable temperature will be vital for obtaining accurate measurements of the brightness of stars. Temperature variations of several degrees can cause small distortions in the telescope structure resulting in artificial changes in the measured star brightness. Even more challenging requirements apply to the detector: CHEOPS will measure the minute changes in the apparent brightness of a star caused by the transit of an exoplanet across the stellar disc, and its detector must be stable to within a hundredth of a degree to ensure sensitivity to such small changes. In orbit, the satellite will be warmed directly by the Sun, and it will also receive infrared light radiated by the Earth and sunlight reflected by the Earth's surface. In spite of these direct sources of heat, all the external surfaces on CHEOPS will be exposed to cold space and, if not heated, may easily reach temperatures as low as -200 degrees Celsius. As the satellite orbits the Earth and turns to observe different target stars, the illuminated areas will change all the time, causing temperature variations that, if not actively compensated, could degrade the accuracy of planetary transit measurements. To counteract these variations, the satellite is equipped with a control system that uses onboard heaters to keep the telescope tube at a temperature of -10 degrees Celsius. The CCD detector, whose noise performance improves at low temperatures, will be cooled to an even lower -40 degrees Celsius. The initial passive cooling of the detector is achieved by coupling it to dedicated radiators that are exposed to the cold of space; in addition, due to the extremes of heating and cooling experienced by the spacecraft, the detector assembly also makes use of high heat capacitance materials and is actively controlled by heater lines that limit temperature fluctuations to within 0.01 degrees Celsius. Thermal vacuum tests that characterised the optical performance of the science instrument under the assumption of adequate onboard thermal control, were described in CHEOPS journal #9. The latest thermal-vacuum tests, which took place between 20 July and 2 August at the Airbus Engineering Validation Test (EVT) facilities in Toulouse, France, aimed at ensuring that the onboard thermal control systems will do their job when subjected to realistic in-orbit conditions. These are the first thermal tests that have been performed on the integrated science instrument and spacecraft platform at realistic temperatures. To this aim, the satellite was placed in a vacuum chamber and surrounded by shrouds cooled to -165 degrees Celsius using gaseous nitrogen. The black shrouds created a very cold, space-like environment that absorbed heat from the satellite. The warming effect of the Sun was simulated by mounting flat heater plates around CHEOPS and using them to cause rapid changes in heating that reproduced, or purposely exceeded, the range of conditions expected during operations. The temperature of the spacecraft was monitored at more than a hundred different places to observe its response to changes in external heating. 
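The kind of closed-loop heater control described above can be pictured with a toy model. The following Python sketch is an illustration only, not CHEOPS flight software: it holds one lumped thermal node at the telescope tube's -10 degrees Celsius setpoint against a -165 degrees Celsius sink (mimicking the cold shrouds) using a proportional-integral heater law; the heat capacity, conductance, gains and starting temperature are all invented values.

# Minimal sketch of active heater control on one lumped thermal node.
# All parameter values are assumptions chosen for illustration, not
# CHEOPS design data.

def simulate(setpoint=-10.0, t_sink=-165.0, c=2000.0, g=1.5,
             kp=50.0, ki=0.5, dt=1.0, steps=3600):
    temp = -60.0                    # arbitrary initial temperature, deg C
    integral = 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        power = max(0.0, kp * error + ki * integral)  # heaters can only heat
        # Energy balance: heater input minus loss to the cold sink,
        # with the loss linearised as a conductance g in W/K.
        temp += (power - g * (temp - t_sink)) * dt / c
    return temp

print(f"node temperature after one hour: {simulate():.2f} deg C")  # ~ -10.00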
The test results have not only provided direct evidence that the thermal control systems perform correctly and that CHEOPS can operate in an extreme thermal environment, but have also returned a wealth of data that will be used to refine a 5000-node mathematical model of the spacecraft’s thermal behaviour. With this, it will be possible to accurately predict, by means of computer simulations, in-flight scenarios that are impractical to test directly. Right after the thermal-vacuum tests were completed in France, the satellite was shipped to Zurich, Switzerland to perform mechanical vibration tests. During launch on a three-stage Soyuz vehicle, CHEOPS will experience vigorous shaking: mechanical loads will be transmitted from the launch vehicle to the satellite at lift-off and during a series of subsequent thrust and separation events. Each of these events will cause vibrations at particular frequencies and the satellite must be able to withstand the entire spectrum of vibrations. On-ground vibration testing took place at RUAG Space, Zurich between 14 and 22 August using a high performance electromagnetic shaker to apply vertical and lateral sine-wave oscillations. The satellite has a one-off design and its construction has involved carefully considered choices about all details of manufacture, tolerances, accommodation of equipment, payloads, harnesses and other fittings. The purpose of this pre-flight vibration testing was to confirm that the satellite as built is robust to the rigours of launch. There is a trade-off involved during vibration testing. If the loads applied are too weak, the satellite will be under-tested and unexpected failures could occur at launch, but if the loads are too strong, over-testing could lead to unnecessary failures during the test. To find the optimum balance between under-testing and over-testing, the strength of vibration loads applied by the shaker at different frequencies has been fine tuned using mathematical simulations, to levels which meet the design specification and verification requirements of the Soyuz launcher in the particular case of CHEOPS. The fine-tuning was performed using so-called coupled loads analysis, taking into account the actual launch scenario, in which the satellite is secured to the launcher, and performing simulations based on the fully-assembled launcher. These loads caused mechanical oscillations at different frequencies in the entire launch vehicle, and the corresponding dynamic responses of the CHEOPS satellite were predicted using a structural model of the satellite. In particular, the responses at the satellite-to-launch-vehicle interface of the coupled system served as reference during satellite vibration testing and a so-called notching procedure was implemented to reduce the vibration strength in particular frequency ranges and prevent over-testing. The on-ground vibration testing has provided important confirmation that both the unique design and skilled workmanship of the CHEOPS satellite meet their mechanical requirements. After the successful tests, the mission can now move to its next phase with confidence. The satellite was shipped to ESA's technical centre in The Netherlands, where it arrived on 30 August, for acoustic noise and electromagnetic compatibility tests. Acoustic noise testing is complementary to the vibration tests performed in Zurich, exposing the satellite to vibrations up to and beyond 2000 Hz, whereas the mechanical vibration tests only reached frequencies as high as 100 Hz. 
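The notching procedure described above lends itself to a simple illustration. In the sketch below, a nominal sine-sweep input is reduced at any frequency where a predicted transfer function would push the interface response beyond its allowable; the frequencies, gains and limit are invented and unrelated to the actual CHEOPS coupled loads analysis.

# Illustrative notching of a sine-vibration test profile: wherever the
# predicted interface response would exceed its allowable, the input level
# commanded to the shaker is scaled down. All numbers are made up.

freqs_hz = [5, 10, 20, 40, 60, 80, 100]           # sweep frequencies
input_g  = [1.0] * 7                              # nominal commanded level, g
tf_gain  = [1.2, 1.5, 3.5, 8.0, 2.0, 1.4, 1.1]    # predicted response / input
limit_g  = 5.0                                    # allowable response, g

notched = []
for f, u, h in zip(freqs_hz, input_g, tf_gain):
    if u * h > limit_g:     # predicted response exceeds the allowable:
        u = limit_g / h     # notch the input so the response meets the limit
    notched.append((f, round(u, 3)))
print(notched)              # only the 40 Hz point is notched with these numbers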
When testing in The Netherlands is complete, CHEOPS will return to Airbus Defence and Space Spain in Madrid for the final preparations before being shipped to Europe’s spaceport in Kourou, French Guiana, for launch.
3D printing with high-performance carbon fiber
1 March 2017
Lawrence Livermore National Laboratory (LLNL) researchers have become the first to 3D-print aerospace-grade carbon fiber composites, opening the door to greater control and optimization of the lightweight, yet stronger than steel material. The research, published by the journal Nature Scientific Reports, represents a “significant advance” in the development of micro-extrusion 3D printing techniques for carbon fiber, the authors said.
[Figure: A carbon fiber composite ink extrudes from a customized direct ink writing (DIW) 3D printer, eventually building part of a rocket nozzle.]
Carbon fiber composites are typically fabricated in one of two ways—by physically winding the filaments around a mandrel, or weaving the fibers together like a wicker basket—resulting in finished products that are limited to either flat or cylindrical shapes, said Jim Lewicki, principal investigator and the paper’s lead author. Fabricators also tend to overcompensate with material due to performance concerns, making the parts heavier, costlier and more wasteful than necessary.
However, LLNL researchers reported printing several complex 3D structures through a modified Direct Ink Writing (DIW) 3D printing process. Lewicki and his team also developed and patented a new chemistry that can cure the material in seconds instead of hours, and used the Lab’s high performance computing capabilities to develop accurate models of the flow of carbon fiber filaments.
Computational modeling was performed on LLNL’s supercomputers by a team of engineers who needed to simulate thousands of carbon fibers as they emerged from the ink nozzle to find out how to best align them during the process.
We developed a numerical code to simulate a non-Newtonian liquid polymer resin with a dispersion of carbon fibers. With this code, we can simulate evolution of the fiber orientations in 3D under different printing conditions. We were able to find the optimal fiber length and optimal performance, but it’s still a work in progress. Ongoing efforts are related to achieving even better alignment of the fibers by applying magnetic forces to stabilize them.
—fluid analyst Yuliya Kanarska
The ability to 3D print offers new degrees of freedom for carbon fiber, researchers said, enabling them to have control over the parts’ mesostructure. The material also is conductive, allowing for directed thermal channeling within a structure. The resultant material, the researchers said, could be used to make high-performance airplane wings, satellite components that are insulated on one side and don't need to be rotated in space, or wearables that can draw heat from the body but don’t allow it in.
A big breakthrough for this technology is the development of custom carbon fiber-filled inks with thermoset matrix materials. For example, epoxy and cyanate ester are carefully designed for our printing process, yet also provide enhanced mechanical and thermal performance compared to thermoplastic counterparts that are found in some commercially available carbon fiber 3D printing technologies, such as nylon and ABS (a common thermoplastic). This advance will enable a broad range of applications in aerospace, transportation and defense.
—materials and advanced manufacturing researcher Eric Duoss
The direct ink writing process also makes it possible to print parts with all the carbon fibers going the same direction within the microstructures, allowing them to outperform similar materials created with other methods that leave fiber alignment random. Through this process, researchers said they’re able to use two-thirds less carbon fiber and get the same material properties from the finished part. The researchers will next turn to optimizing the process, figuring out the best places to lay down the carbon fiber to maximize performance. There have been discussions with commercial, aerospace and defense partners to move forward on future development of the technology. The Laboratory Directed Research and Development program funded the study.
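The article does not describe LLNL's numerical code, but the classical starting point for fiber-orientation modeling is Jeffery's equation for a rigid ellipsoidal particle in shear flow, and a one-fiber version of it conveys why alignment is achievable at all: in simple shear a fiber tumbles, but spends most of each orbit nearly aligned with the flow. The Python sketch below integrates that equation for one fiber in the shear plane; the shear rate, aspect ratio and initial angle are arbitrary assumptions.

import math

def jeffery_angle(gamma_dot=1.0, aspect=20.0, phi0=math.radians(80.0),
                  dt=0.01, t_end=60.0):
    """Integrate Jeffery's equation for the in-plane angle phi of a rigid
    fiber in simple shear; phi is measured from the flow direction."""
    phi, t, history = phi0, 0.0, []
    while t < t_end:
        dphi = gamma_dot / (aspect**2 + 1.0) * (
            math.cos(phi)**2 + aspect**2 * math.sin(phi)**2)
        phi += dphi * dt
        t += dt
        history.append((t, phi))
    return history

# The fiber flips quickly through 90 degrees but creeps when nearly
# aligned, which is why shear in the nozzle tends to align fibers.
for t, phi in jeffery_angle()[::1000]:
    print(f"t = {t:5.2f} s   phi = {math.degrees(phi) % 180.0:6.1f} deg")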
Sodium Bicarbonate in Cardiac Arrest Management
Written by Anand Swaminathan (REBEL EM)
Category: Resuscitation
Background: As with all medications in cardiac arrest (i.e. epinephrine, amiodarone), the benefits of sodium bicarbonate administration have been discussed and debated for decades. While it is clear that sodium bicarbonate can play a role in resuscitation of arrest due to hyperkalemia, its role in patients with acidemia resulting from or causing arrest is unclear. In theory, raising the pH may be beneficial, but the use of bicarbonate increases serum CO2, which may be deleterious as it creates a respiratory acidosis. Despite the absence of good evidence, sodium bicarbonate continues to be used in non-hyperkalemic cardiac arrest management.
Article: Ahn S, et al. Sodium bicarbonate on severe metabolic acidosis during prolonged cardiopulmonary resuscitation: a double-blind, randomized, placebo-controlled pilot study. J Thorac Dis 2018; 10(4): 2295-2302. [Epub Ahead of Print]
Clinical Question: In patients with prolonged, non-traumatic cardiac arrest resulting in acidosis, does the administration of sodium bicarbonate lead to a higher rate of sustained ROSC?
- Population: Patients who failed to achieve ROSC after 10 minutes of standard ACLS and had a femoral ABG demonstrating pH < 7.1 or bicarbonate < 10 mEq/L
- Intervention: Sodium bicarbonate (NaHCO3) 50 mEq/L
- Control: Normal saline (NS) 50 ml
- Outcomes:
  - Primary: Sustained ROSC (palpable pulse > 20 minutes) and change in pH (the abstract states sustained ROSC; the methods section states change in pH)
  - Secondary: Survival to hospital admission, good neurologic survival at 1 and 6 months (CPC 1-2)
- Design: Prospective, double-blind, randomized, placebo-controlled, single-center pilot trial
- Excluded: Patients with DNR orders, ROSC within 10 minutes, unavailable ABG at 10 minutes, no severe acidosis on the 10-minute ABG, or eCPR started
Primary Results:
- 157 patients presented with cardiac arrest during the enrollment period
- 50 patients were randomized to NaHCO3 or NS; 107 were excluded for the reasons noted above
- Sustained ROSC (> 20 minutes): 10%
- No patients survived at 6 months (this was a very sick subset of cardiac arrest patients)
- Baseline characteristics were similar between groups
Critical Results:
- Change in acidosis: statistically significant difference in both pH and HCO3 between groups
  - pH at 20 minutes: 6.99 (NaHCO3) vs 6.90 (NS), p = 0.038
  - HCO3: 21.0 (NaHCO3) vs 8.00 (NS), p = 0.007
- Sustained ROSC: no statistically significant difference; 4.0% (NaHCO3) vs 16% (NS), p = 0.349
- Survival to hospital admission: no statistically significant difference; 4.0% (NaHCO3) vs 16% (NS), p = 0.349
- Good neurologic outcome at 1 month: no statistically significant difference; 0.0% (NaHCO3) vs 4% (NS)
Strengths:
- Sound methodology to attempt to answer the question: randomized, double-blinded, placebo-controlled
- Adds information to a question with limited previous research
- Incorporated an intervention to balance out the accumulation of CO2 secondary to bicarbonate use (increasing ventilation to 20 breaths/minute after administration)
Limitations:
- Small, single-center study
- Unclear what the primary endpoint was (stated as sustained ROSC in the abstract but as change in acid-base status in the body of the study)
- Neither primary endpoint was patient centered. Would prefer to see good neurologic outcome as the primary outcome in subsequent studies
- Bicarbonate dose was fixed instead of weight based. This may have led to both under- and over-dosing of bicarbonate.
- A dose of 1-2 mEq/kg may have been more appropriate
- Blood samples for the 10-minute analysis may have been venous instead of arterial, which may depress the pH and bicarbonate values
- Hyperventilation may have benefited the group receiving NaHCO3 (by countering the respiratory acidosis produced by bicarbonate) but may have harmed the group receiving NS
- Quality of CPR and time to defibrillation were not included, both of which can be confounding factors
- ROSC definition was a palpable pulse. Would argue this is not the gold standard: TTE, TEE, A-line, and EtCO2 are much more sensitive, and some profound shock patients may have erroneously been included in the no-ROSC group
- Although not noted as statistically significant, more patients in the placebo arm had a shockable rhythm than in the bicarb arm (8.0% - 2 pts vs 0% - 0 pts)
Other Issues:
- All patients’ ventilatory rate was increased from 10 to 20 breaths per minute for 2 minutes after administration of NaHCO3 or NS to counteract the increased CO2 produced by NaHCO3
Author’s Conclusions: “The use of sodium bicarbonate improved acid-base status, but did not improve the rate of ROSC and good neurologic survival. We could not draw a conclusion, but our pilot data could be used to design a larger trial to verify the efficacy of sodium bicarbonate.”
Our Conclusion: We agree with the authors’ conclusions. While the use of NaHCO3 improved the surrogate endpoint of acid-base status, there was no patient-centered improvement seen in this study.
Potential to Impact Current Practice: This small pilot study should not change clinical practice. Indiscriminate use of NaHCO3 in cardiac arrest should not be performed. However, providers should continue to use their judgement as to which patients in arrest may benefit from NaHCO3.
Bottom Line: The use of NaHCO3 does not appear to improve clinically meaningful outcomes. A larger study should be undertaken to further evaluate this clinical question.
Post Peer Reviewed By: Salim R. Rezaie, MD (Twitter: @srrezaie)
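As a quick check on the arithmetic, the reported ROSC comparison can be reproduced. The trial randomized 50 patients; assuming 25 per arm (an assumption, since the exact arm sizes are not quoted above), 4.0% and 16% correspond to 1/25 and 4/25, and a two-sided Fisher's exact test on that table returns the paper's p = 0.349:

# Quick arithmetic check on the reported ROSC comparison (assumed 25/arm).
from scipy.stats import fisher_exact

table = [[1, 24],    # NaHCO3 arm: sustained ROSC yes / no
         [4, 21]]    # NS (placebo) arm
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")   # p = 0.349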
When I wrote a reply (Social Liberty and Economics) to Bill Anderson's column 'Capitalism and the State,' my hope was that I might be able to show Bill how he has gone wrong in some of his conclusions. Unfortunately, with his reply to me, 'Social Economics and Liberty,' he dashed those hopes and held fast to his fables about laissez-faire. Now Bill, I'm really worried about you, because if you persist in believing in these fables, you will waste your obvious talents on a futile crusade that can only lead to a wasted life spent in pursuit of irrational and contra-humane concepts of human organization.
Economic Science: Economic science, like all other sciences, has progressed a little in the past 230 years or so. So to start off with, please Bill, no more Adam Smith quotes. Both Smith and Jefferson lived before the Industrial Revolution and the era of international free trade during the latter half of the 19th Century. Adam Smith is not the first and last word on free markets. In fact, Adam Smith said that he originally wanted to dedicate The Wealth of Nations to his mentor, the French Physiocrat Francois Quesnay, but Quesnay died before the book was finished. The understanding of the economic laws of free market exchange and the property-based social order did not spring fully formed from the brow of Adam Smith like Athena from the brow of Zeus, but has been discovered and rediscovered since recorded history began. Insightful discoveries have been made about the nature of economics throughout human history. For example, the first recorded writings of the Babylonians are records of economic transactions, and the Confucians 2,400 years ago observed which economic laws are most conducive to a prosperous society. The medieval Scholastics of the Salamanca School developed an early understanding of marginal utility theory and applied it to economic exchange, banking and currency.
I noticed Bill didn't attempt to rebut my arguments about the true nature of class (caste) warfare and the benefits of corporations and the division of labor, and I suspect I know why. Please Bill, take the time to read Mises and Rothbard. Their books are big and fat and juicy, but well worth the investment of your time. It will prevent you from squandering your energy in a fruitless search for the property-less society. Don't limit yourself to that economic nonsense that they teach you in university. Anyway, reading Bill's reply was quite discouraging. Not only did he suggest the superiority of the collective factory, but he also advocated the abolition of property itself (including money, I presume). What's next, Bill, the abolition of language, too? Why not abolish all forms of communication!
The State: As to the origins of the state, or governmental monopoly, it is assuredly not based in the creation of 'private' property, as Bill seems to believe. Since property is an extension of the mind and body, it is natural and inevitable, and purposeful labor will create property and wealth. A government arises out of plunder. Foreign or domestic elements will pillage the peaceful producers for their own gain. Eventually, they will realize that instead of raiding them every year, they can conquer them and exact tribute perpetually. They do not establish a monopoly to defend their already existing property, which they used to conquer the producers. Instead, just as I said, they establish the monopoly to tax, to establish a permanent source of annual income. To exploit the weak. Property isn't theft. Taxation is theft.
It's obvious to me what Thomas Jefferson meant in the quote you included ('Whenever there are in any country uncultivated lands and unemployed poor, it is clear that the laws of property have been so far extended as to violate natural right.'). The state has prevented free, that is, unowned, lands from being homesteaded by the labor of the poor. How could it do this? By decreeing that these lands belong to the state, or to 'the people.' It is the state that has increased poverty in Jefferson's quote, which, I'm quite sure, refers to the denser urban populations of Europe, in contrast to the vast lands of America. Social caste, not property, defines the state, since even penniless nobles in the past didn't cease to be nobles. The alternative to the caste system of inherited social rank is the property-based society, the natural hierarchy of property. Feudal societies are not based on property, but rather on monopoly and military rank -- on control or authority, not self-ownership. The feudal system is based on hereditary governorship of provinces (divisions) of the state, not on the ownership of legitimately acquired (via exchange of previously legitimately acquired) property. Bill believes '[o]nly a property-less society can be stateless.' But how can you argue against the existence of property when, to do so, you must demonstrate ownership of your own body?
Laissez-Faire: Bill also seems confused about what ownership is. Ownership is not monopoly. Monopoly is an imitation of ownership. Legitimate, true ownership, because it is an extension of the mind and body, can only be over inanimate things, the product of the mixing of labor and unowned objects. Monopoly, however, is illegitimate precisely because it claims not ownership over its facilities and products, but ownership of its customers. Whether it's a gas or water utility or the state's legal and protection monopolies, without having to compete for customers, they exert a control over their customers that simply is not possible under freedom. No 'wealthy capitalists,' as Bill describes them, 'own large segments of the economy.' No one owns large segments of the economy. Factories and land are worthless unless they continue to produce income. Should they become unable to produce goods for some reason, and cannot be sold for their components, their value drops to near zero. The economy is not owning property, but the production and exchange of property.
I wonder if Bill would allow me to behave as I wished in his home without him 'trying to tell me that I have to do what he says or leave 'his' building?' Is the fact that the state says I should respect your property really the only reason to do so? If the state didn't attempt to justify its existence by prosecuting assaults against civilizational decency, and instead promoted and abetted social destruction, like it did in the Soviet Union during the period of War Communism, would that stop most people from choosing to respect the lives and property of others, whether in their own home or places of business? Bill asks why wouldn't people use their employer's 'machines and goods in any factory for their own ends . . . in the absence of such a police force?' Could it be because, unlike Bill apparently, their time preference rate is significantly lower and they take the future into account before they act?
Such self-indulgent behavior would leave them unemployable -- and not just unemployable, but a danger to any potential customers they might be able to attract using the property purloined from their employer. Companies do not need 'reactionary' hierarchies that 'act as an iron fist' because most mature adults understand that it is in their own self-interest to be responsible, respectful of others, enterprising and honest. Unlike the state, the laissez-faire society is a naturally 'self-policing' one because high time preference behavioral choices have immediate, negative sanctions. There is no coercion or violence involved. There is no coercion involved in cooperating by taking others into consideration before you act, just as you aren't coerced by nature to continue breathing or eating. Just as Bill agreed with me that liberty is all of one tapestry -- that there are not separate social freedoms and separate economic freedoms -- I hope he'll agree that economics is a similar single tapestry, with no separate schemes of social economics and political economics. Economic science is universal and true, and what we call the state or government is a perversion of economic law, the same perversion that any attempt to realize economics without property entails. The state, rather than being an instrument for the creation and enforcement of property, is itself the negation of property.
Mercantilism: Regarding today's economy, yes, today's economy is what should be called mercantilist, i.e., the alliance of the state and established producers (big business). Ironically, it was this system that Marx dubbed capitalism, unaware that he did not understand the true free society of laissez-faire individualism. Today's economy is in no sense 'free markets.' It's clear from his writing that Bill mistakes mercantilism for laissez-faire, the same as Marx did when he coined the term capitalism -- a word which, like Queer and other words used for denigration, has been adopted by its intended targets. When we anarcho-capitalists refer to capitalism, we mean absolute free trade, not mercantilism. Mercantilism is a form of oligarchy, but oligarchy is an inherent feature of syndicalism as well; indeed it is syndicalism's very purpose! When Bill complains that "it is no more or less difficult to move to the middle of nowhere, invent one's own currency, protect one's own property, and start one's own society, than it is to remove ourselves from the existing capitalist order in our society today," he is forgetting that the features of society require more than one person. In laissez-faire, you must cooperate with others to achieve your ends. One cannot 'invent one's own currency' if there are no others willing to accept it in exchange for goods and services. Absent the state monopoly, businesses would be free to offer their services, including property protection and competing currencies, services which have existed in the past free from state control.
Hierarchy: After denouncing hierarchy, Bill suggests we need to unite, presumably in some sort of organization, to work towards the common purpose of abolishing the state. But how would this be done without hierarchy or strategic authority and planning? Without a system of costing and expenditure control? How would you organize presumably thousands or even millions of people without the division of labor? Without establishing a hierarchy of specialized organizations? Hierarchy is inevitable.
It is inevitable because of the limitations of time and space on the human person. Presumably, Bill desires the perfectly static egalitarian society, which he probably believes is realized in the many student organizations he belongs to on campus, and falsely believes that it is natural. But under a regime of equality there can be, obviously, no social mobility, and if individual progress cannot be measured against the gains or decline or stasis of others, how can you know your standard of living is increasing? The quality of life can go down as well as up. And under Bill's imagined production system, it will surely decline greatly.
Syndicalism: Bill is a member of several political action organizations which, unlike productive businesses, produce little or nothing of value to anyone, except perhaps to their members -- which of course is their purpose, because unlike profit-based enterprises, which are created to serve customers, these organizations exist to propagandize the views of their membership, with little to demonstrate in the way of profit or loss. And since Bill is so scornful of the role of the enterprising individual and his hierarchical organization, I'd like to ask him to reflect on who originated the idea to form these groups he's a member of. Surely several individuals didn't all suggest the idea at once in the same instant. Instead one person suggested it and others agreed. What is truly disconcerting is that Bill, while denouncing mercantilism -- which he mistakenly assumes to be laissez-faire -- as a regime of oligarchy and oppression, fails to see that this is the same regime that he advocates in his desire for economic syndicalism to replace both competitive corporations and the state monopoly. To illustrate this, I'll quote from Bill's favorite economist Adam Smith: under mercantilism, Smith wrote in The Wealth of Nations, 'the interest of the consumer is almost constantly sacrificed to that of the producer.' And under Bill's preferred organization of production, who are the producers? The workers who then 'own' the firm or whatever they work in. In such an organization, where members cannot be fired or hired and labor is not specialized, the interests of the customer and competition between producers are surely the last things to be considered by the worker-owners, who will no doubt feel the need to cartelize production to prevent economic losses to competing workers' collectives. Under syndicalism, the workers form an oligarchy, since the purpose of expropriating the owners is so the workers may enjoy the full income of the firm. Ironically, by attempting to avoid the tyranny of monopoly, Bill would impose on labor the evil of monopsony! And under syndicalism, unlike genuine free markets, it will be the producers vs. the consumers.
I fear that Bill doesn't have a clear vision of how these syndicalist workers' collectives would actually operate day-to-day. How would these democratic production collectives actually conduct business without having a name? Without being able to market over large areas? Forcing this upon society would lead to social regression by reversing the division of labor, and cause standards of living to decline to a more primitive level. Creating a society so fearful of hierarchy would produce a rampant suspicion and distrust of those with obvious talents for organization and planning, and enthrone fear of others as its guiding principle, should anyone be placed out of necessity in the position of making decisions for the entire group.
Perhaps Bill merely misunderstands the word corporation, from corpus, or body, i.e., a body of men; but owning a corporation does not mean owning its members, who cannot be owned, but rather its name, tools and housings. Corporations: Bill refuses to understand the nature and purpose of the corporation, which is to act as a gatherer and organizer of knowledge and resources by an entrepreneur to produce a good or service. Essentially, a corporation is a little daily market, in which a buyer of labor services and supplies (the company) purchases every day the services and goods produced by its suppliers of labor: its employees. The company is the customer of its employees, who sell their services to the company every day. And because a company is this little daily marketplace for goods and services, it can calculate profit and loss. Being able to calculate profit and loss is absolutely vital to any enterprise, indeed any endeavor, including Bill's syndicalism. For what if the workers' firm loses money? What then? Does Bill assume that these syndicates will never lose money, or whatever he proposes to replace money with? And without prices for things, which money provides as a standard of comparison between goods and services, how will these firms know what needs to be produced? Will they produce their wares without regard for consumer demand? Or will they force consumers to accept their goods? Without free exchange, the alternative is coercion, violence and oppression; yet these are the very charges Bill levels against the system of laissez-faire. Unfortunately, Bill is unfamiliar with economic science and the calculation problem of socialism, which also exists in his syndicalism. And if the worker-owners can't sell their shares of their company, how is this better than public ownership, where taxpayers are told they own the Grand Canyon, the Statue of Liberty and the White House, yet can't sell their ownership in them? What Bill doesn't understand is that while the goods and services (the commodities) sold by the company are the product of its workers, the product of the entrepreneur is the company itself. The workers produce commodities that compete with other commodities on store shelves; the entrepreneur's product, the company, competes against other companies for "shelf space" in the minds of potential customers, for a place as a source of that particular good or service. To operate a successful enterprise through time requires a specialization of talent on different functions within the overall enterprise, such as strategic planning, marketing, production and so on. This requires the division of labor, which creates the hierarchy: once tasks are organized, a hierarchy is immediately formed as the tasks are assigned and managed. Without this specialization on different responsibilities within the organization, without authority over the organization and its production, and without the hierarchy that the limitations of time and space impose, mutually beneficial, large-scale cooperation would not be possible. Unless, maybe, Bill has found some way to counteract the physical limitations of time and space on the human body? Conclusion: Bill, if you decide to reply again, please describe to us how a property-less economy would actually function without organizations, money, managerial hierarchy, the division of labor, and profit and loss.
This is the least you can do, since you want others to follow you into this bold vision of the future of human social organization. If you sincerely wish to mobilize the masses to rise up against the state, don't you think you need to convince them of how their daily needs for food, clothing and shelter, as well as the modern standards of living they demand, such as electricity, refrigeration, electronics, air conditioning, etc., will be provided for?
http://strike-the-root.com/content/libertarian-economics-and-social-democracy?mini=calendar%2F2022-09
What is Labor Market? The labor market or the job market is a widely tracked market that functions through the supply and demand dynamics of people seeking employment (workers) and organizations/people rendering employment (employers).
- Microeconomics looks at the labor market at the individual (firm and worker) level of demand and supply. The supply of labor increases as wages increase, up to the point where the marginal utility of each additional hour of wages starts decreasing. Once that happens, people forego additional work for leisure activities and supply declines.
- Demand in microeconomics is determined by the marginal cost and marginal revenue of the product. If the marginal revenue from each additional unit of product is less than its marginal cost, the demand for labor will decline.
Components of Labor Market in Macroeconomics
In macroeconomics, the labor market is a function of the following components.
#1 – Labor Force: The part of the working-age population that is employed or actively looking for employment. The labor force is a function of population growth, net immigration, new entrants, and the number of retirees from the labor force.
#2 – Participation Rate: The size of the labor force as a percentage of the size of the adult civilian non-institutional population.
#3 – Non-Labor Force: The difference between the size of the adult civilian non-institutional population and the size of the labor force.
#4 – Unemployment Level: The difference between the labor force and the number of workers currently employed.
#5 – Unemployment Rate: The number of unemployed workers as a percentage of the labor force.
#6 – Employment Rate: The number of employed workers as a percentage of the labor force.
Example of Labor Market
Let's discuss an example of the labor market. An economy has a total civilian non-institutional population of 100,000, of which 80,000 people are of working age. Of the working-age population, 75,000 are employed or actively looking for employment (forming the labor force), while 5,000 are not part of the labor force.
Out of the labor force of 75,000 people, 4,000 are unemployed, giving an unemployment rate of about 5.3% (4,000 / 75,000).
Types of Unemployment in Macroeconomics
The following are the types of unemployment in macroeconomics.
#1 – Frictional Unemployment – Unemployment due to the time taken for people to find jobs, whether they have resigned and are looking for a new job, are switching from one job to another, or are entering the workforce for the first time.
#2 – Structural Unemployment – Unemployment due to a mismatch between the skills required by firms and those offered by workers in the job market. If the skills offered are not in demand and the ones in demand cannot be supplied, structural unemployment results.
#3 – Natural Rate of Unemployment – The sum of frictional and structural unemployment; it is the rate that prevails when the economy is in equilibrium.
#4 – Cyclical Unemployment – Any rate of unemployment higher than the natural rate is attributed to cyclical causes. It generally happens when the economy is not in good shape and the demand for workers is low because the demand for final goods and services is also reduced. The natural rate of unemployment in the United States is around 5%. Unemployment touched a high of 10% during the financial crisis of 2009 and came back to a more normal 4.9% in 2016.
Advantages of Labor Market
Some of the advantages of labor market analysis are as follows:
- The labor market and its analysis are useful in forming broader economic policies for the benefit of a country's citizens. A wide range of government decisions is taken based on the shape of labor markets.
- The labor market is an important gauge of structural changes happening in the economy or an industry. It helps policymakers draft and implement new policies if they are aware of the structural trends.
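To make the example's arithmetic concrete, here is a minimal Python sketch (illustrative only, not taken from any statistics library) that derives the indicators defined above from the figures in the example:

```python
# Labor-market indicators from the worked example above.
# Counts are people; rates are returned as percentages.

def participation_rate(labor_force, working_age_population):
    """Labor force as a percentage of the adult civilian non-institutional population."""
    return 100 * labor_force / working_age_population

def unemployment_rate(unemployed, labor_force):
    """Unemployed workers as a percentage of the labor force."""
    return 100 * unemployed / labor_force

def employment_rate(employed, labor_force):
    """Employed workers as a percentage of the labor force."""
    return 100 * employed / labor_force

total_population = 100_000        # civilian non-institutional population
working_age_population = 80_000   # of working age
labor_force = 75_000              # employed or actively seeking work
unemployed = 4_000
employed = labor_force - unemployed                      # 71,000
non_labor_force = working_age_population - labor_force   # 5,000

print(f"Participation rate: {participation_rate(labor_force, working_age_population):.1f}%")  # 93.8%
print(f"Unemployment rate: {unemployment_rate(unemployed, labor_force):.1f}%")                # 5.3%
print(f"Employment rate: {employment_rate(employed, labor_force):.1f}%")                      # 94.7%
```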
- Labor market analysis is helpful in determining labor productivity, i.e., whether the same number of workers is producing a higher value of goods and services.
- Labor markets also help determine the prevailing average wage rates, which are useful in making many economic decisions, including inflation control by central banks.
Limitations of Labor Market Analysis
Some of the limitations of job market analysis are as follows:
- Labor market analysis does not factor in the emotional and psychological factors that go into the employment or unemployment status of individuals.
- The analysis is mainly applicable in capitalist countries, where the job market is fairly developed. It is not applicable in a market dominated by a few employers or where normal economic activity is distorted for extraordinary reasons.
- Labor markets also do not factor in the role of unpaid labor, such as unpaid interns, who are employed and contribute to economic activity but are not paid.
Important Points
- Labor markets change based on the level of economic activity (recessions and booms) and structural changes in the economy (technological changes, changes of habits, etc.).
- A high rate of immigration can distort equilibrium in the market, resulting in high unemployment rates.
Conclusion
- The labor market is an integral part of any economy, because it is the efforts of workers that bring about economic activity and growth. Analyzing the labor market is difficult, and different theories approach it differently; however, the most followed and effective is the macroeconomic theory discussed earlier in the article.
- Analysts and economists study the variables in macroeconomic theory to assess the health of the labor market and determine the steps needed to bring the economy back to equilibrium if there has been a shift.
- The labor market is a must-study for a student of economics who wants to understand the nuances of the economy and business. It is also interesting to study how the market has changed over a long period of time.
https://www.wallstreetmojo.com/labor-market/
The human brain is an amazing piece of work. Every time you utter a sound, or hear one, dozens of things happen subconsciously to take the sound and reduce it to one of several distinct sounds that we use in our language. The problem is that these distinct sounds are different in different languages. When you encounter a new language and hear a sound you're not used to, you automatically try to fit it into one of your previous categories of sounds. This can cause interesting problems. Let's illustrate this with a (slightly hypothetical) analogy. There is one group of people from the Land of Men, and another from the Land of Women. In the Land of Men there are only a few colours: red, blue, brown, yellow, pink, green, and a few more. In the Land of Women, however, there are many more: chartreuse, magenta, terracotta, viridian, lavender rose, etc. Whole books could be written about the colours in the Land of Women, and indeed, some have. When the men visit the Land of Women, they have no end of trouble. You see, their road signs are colour-coded. The women have no problem with this. Their stop signs are rust-coloured and their yield signs are painted in auburn. Now the men, they look at both of these colours and see brown. So as far as they can tell, all stop signs are brown in the Land of Women; however, sometimes women stop at the stop signs and sometimes they drive right through. Obviously the women must be terrible drivers. Likewise, the women notice the men have an annoying habit of always stopping at yield signs. Similarly, speakers of different languages compartmentalize the sounds they hear in words into different categories. For instance, in English the words 'toe' and 'so' are distinguished by their initial consonants: 'toe' begins with the sound /t/ while 'so' begins with /s/. However, many speakers of the language Tok Pisin do not differentiate between these sounds, and they may be interchanged without changing the meaning of words (e.g. [tupu] or [supu] for the word tupu, meaning 'soup'). Thus knowing how languages classify sounds is at least as important as knowing what sounds they use in the first place. We can speak of a language's phonology as how it carves up the acoustic space into meaningful units. This is the area of study practiced by phonologists.
Phonemes
The basic unit of study of phonology is the phoneme, which may be defined as a set of phones which functions as one unit in a language and provides contrast between different words. In other words, a phoneme is a category that speakers of a language put certain sounds into. For instance, returning to the Tok Pisin example above, the sounds [s] and [t] would both belong to the phoneme /t/. (In the IPA, phonemes are conventionally enclosed in forward slashes //.) As another example, try pronouncing the English words keys and schools carefully, paying close attention to the variety of [k] in each. You should find that in the first there is a noticeable puff of air (aspiration), while in the second it is absent. These words may be written more precisely phonetically as [kʰiz] and [skulz]. However, since aspiration never changes the meaning of a word, both of these sounds belong to the phoneme /k/, and so the phonemic representations of these words are /kiz/ and /skulz/. It should be evident why it is appropriate to refer to the phoneme as a level of abstraction away from the phone.
We have removed a layer of information which, while interesting in itself, does not play a role in many aspects of a language. The phonemic inventory of a language is the collection of phonemes in that language. We looked at English's in the last chapter.
Allophony
Two phones are called allophones if they belong to the same phoneme. For instance, in Tok Pisin [t] and [s] are allophones of /t/, and in English [k] and [kʰ] are allophones of /k/. Allophones are often conditioned by their environment, meaning that one can figure out which allophone is used based on context. For example, in most varieties of American English, the English phoneme /t/ is realized as a tap [ɾ] between vowels in normal speech when not preceding a stressed vowel, for example in the word "butter". In a case like this we can say that the plosive [t] and tap [ɾ] allophones of the phoneme /t/ are in complementary distribution, as every environment selects for either one or the other, and the allophones themselves may be referred to as complementary allophones. Similarly [k] and [kʰ] are in complementary distribution, as [k] mainly occurs in the sequence /sk/, while [kʰ] occurs elsewhere. By contrast, allophones may sometimes co-occur in the same environment, in which case they are in free variation. For example, the word-final /t/ phoneme of the English word cat may be realized either with an audible release, or with the tongue held in the closure without being released. These phones, notated as [t] and [t̚] in the IPA, are free variants, as either is allowed to occur in the same position. Similarly [s] and [t] are free variants for some speakers of Tok Pisin.
Minimal pairs
An important question which may have occurred to you already is: how can we tell what is a phoneme? One of the most robust tools for examining phonemes is the minimal pair. A minimal pair is a pair of words which differ in only one segment. For example, the English words do /du/, too /tu/, you /ju/, moo /mu/ all form minimal pairs with each other. In a minimal pair one can be sure that the difference between the words is phonemic in nature, because the segments in question are surrounded by the same environment and thus cannot be allophones of each other. In other words, they are in contrastive distribution. This is not a foolproof tool. In some cases it may by chance be impossible to find a minimal pair for two phonemes even though they clearly contrast. In many cases it is possible to find near-minimal pairs, where the words are so similar that it is unlikely that any environment is conditioning an allophone. Finally, this also requires some common sense, since phonemes may be in complementary distribution without being likely allophones. For instance, the English phonemes /h/ and /ŋ/ (both occurring in the word hung /hʌŋ/) can never occur in the same environment, as /h/ is always syllable-initial and /ŋ/ always syllable-final. However, few would suggest that these phonemes are allophones: English speakers never confuse them, they are auditorily quite different, and substituting one for the other in a word would render it unintelligible. Unfortunately there is no hard-and-fast consensus on precisely how to be sure sounds are allophones or not, and in many languages there is vigorous debate.
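Since minimal pairs are the workhorse of phonemic analysis, it may help to see the definition operationalized. The following Python sketch (using the English transcriptions from above; the word list and function name are illustrative) finds all pairs in a set of phonemic transcriptions that differ in exactly one segment:

```python
from itertools import combinations

# Phonemic transcriptions, one segment per list element
# (the English examples from above: do, too, you, moo).
words = {
    "do":  ["d", "u"],
    "too": ["t", "u"],
    "you": ["j", "u"],
    "moo": ["m", "u"],
}

def is_minimal_pair(a, b):
    """True if transcriptions a and b have equal length and differ in exactly one segment."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

for (w1, t1), (w2, t2) in combinations(words.items(), 2):
    if is_minimal_pair(t1, t2):
        # The differing segments are in contrastive distribution.
        x, y = next((x, y) for x, y in zip(t1, t2) if x != y)
        print(f"{w1} / {w2}: contrast /{x}/ vs. /{y}/")
```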
Phonological Rules
Phonotactics
Phonotactics are the rules that govern how phonemes can be arranged. Look at the following lists of made-up words:
- Pfilg
- Dchbin
- Riaubg
- Streelling
- Mard
- Droib
The first three are 'unpronounceable' because they violate English's phonotactic constraints: 'pf' and 'dchb' aren't allowed at the start of a syllable, while 'bg' isn't allowed at the end. The next three are nonsensical words, but they do not violate phonotactics, so they have an 'English-like' feel. Lewis Carroll was particularly skilled in the art of creating such words. Some of his creations were immortalised in his poem Jabberwocky. Here are a few lines from his famed work:
'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
"Beware the Jabberwock, my son!
Note that different languages have different phonotactics. The Czech Republic has cities like Brno and Plzeň, while the Mandarin for Amsterdam is Amusitedan. Czech phonotactics allow for really complicated consonant clusters, while Mandarin allows for none.
Coarticulation Effects
Morphophonology
Morphophonology (or morphophonemics) looks at how morphology (the structure of words) interacts with phonology. In morphophonology one may talk about underlying or morpho-phonemic representations of words, which is a level of abstraction beneath the phonemic level. To see how this follows from the definition of morphophonology, it is necessary to look at an example. Compare the Biloxi words:
- de 'he goes'
- da 'don't go'
- ande 'he is'
- anda 'be!'
- ide 'it falls'
- ide 'fall!'
- da 'he gathers'
- da 'gather!'
Some also use this approach to deal with cases of neutralization and underspecification. Compare the Turkish words:
- et 'meat'
- eti 'his meat'
- et 'to do'
- edi 'he does'
Similar patterns in other words in Turkish show that while final stops are always devoiced, some will always voice when followed by a vowel added by suffixing, while the others always stay voiceless. Phonemically both ets must be represented as /et/, because phonemes are defined as the smallest units that may make words contrast (be distinguishable), so if we said the word for 'to do' was phonemically /ed/ then the two words would have to contrast! Still, we would like to say that on a more abstract level the word for 'to do' ends in a different segment, which does not surface (is not realized) in some positions. The level of abstraction above the phoneme is known as an underlying or morpho-phonemic representation, and as is conventional we will indicate it here with pipes ||. Underlyingly, these Turkish words may be represented as |et|, |eti|, |ed|, and |edi|, and in the same way other Turkish words with this type of voicing alternation underlyingly end in a voiced stop, which surfaces as a voiceless phoneme when word-final. The parallelism between the morpho-phonemic layer and the phonemic layer should be clear. Just as phonemes surface as phones conditioned by their environment, underlying segments surface as phonemes. The important difference is that the surfacing of morpho-phonemic segments as phonemes occurs after morphological processes (e.g. adding endings onto words) take place. In a sense, morphophonology is morphologically informed, while plain phonology isn't.
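To see the two-step mapping concretely, here is a minimal Python sketch of the Turkish alternation just described: morphology first attaches a suffix to the underlying form, then a phonological rule devoices a word-final stop. The rule set is deliberately simplified to the |d|/|t| case discussed above and is only an illustration of the layering, not a full analysis of Turkish.

```python
# Underlying (morpho-phonemic) stems; 'do' ends in the |d| that alternates.
STEMS = {"meat": "et", "do": "ed"}

DEVOICE = {"b": "p", "d": "t", "g": "k"}  # simplified final-devoicing rule

def surface(stem, suffix=""):
    """Apply morphology (suffixation) first, then devoice a word-final stop."""
    word = stem + suffix                  # morphological step
    if word and word[-1] in DEVOICE:      # phonological step, word-final only
        word = word[:-1] + DEVOICE[word[-1]]
    return word

print(surface(STEMS["meat"]))        # et   (underlying |et|)
print(surface(STEMS["meat"], "i"))   # eti  ('his meat')
print(surface(STEMS["do"]))          # et   (underlying |ed|, devoiced word-finally)
print(surface(STEMS["do"], "i"))     # edi  ('he does')
```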
Issues
In some theoretical frameworks of speech (such as phonetics and phonology for applied linguistics and language teaching, or speech therapy), it is convenient to break up a language's sounds into categorical sound types called 'phonemes'. The construct of the phoneme, however, is largely a phonological concern in that it is supposed to model and refer to a transcendental entity that superstructurally and/or psychologically sits over the phonetic realizations and common variations of a sound in a language. For example, if the English phoneme /l/ is posited to subsist, it might be said to do so because the /l/ of 'light' creates a clear contrast with a phonetically similar-sounding word, such as 'right' or 'write' (both of which have a distinct /r/ at the beginning instead of a distinct /l/). Thus, 'light' and 'write' are a 'minimal pair' illustrating that, in English at least, phonemic /l/ and phonemic /r/ are distinct sound categories, and that such a distinction holds for realized speech. Such a model has the profound weakness of circular logic: phonemes are used to delimit the semantic realm of language (lexical or higher-level meaning), but semantic means (minimal pairs of words, such as 'light' vs. 'right' or 'pay' vs. 'bay') are then used to define the phonological realm. Moreover, if phonemes and minimal pairs were such a precise tool, why would they result in such widely varying counts of languages' sound inventories (anywhere from 38 to 50 phonemes for English, for example)? Also, most words (regardless of homophones like 'right' and 'write', or minimal pairs like 'right' and 'light') differentiate meaning through much more information than a contrast between two sounds. The phoneme is really a structuralist and/or psycholinguistic category belonging to phonology that is supposed to subsist ideally over common variations (called 'allophones') but be realized in such ways as the so-called 'clear' [l] at the beginning of a word like 'like' but also as the so-called 'dark' [l] at the end of a word like 'feel'. Such concerns are largely outside the realm of phonetics, because structuralist and/or psycholinguistic categories are about cognitive and mentalist aspects of language processing and acquisition. In other words, the phoneme may (or may not) be a reality of phonology; it is in no way an actual physical part of realized speech in the vocal tract. Realized speech is highly co-articulated, displays movement, and spreads aspects of sounds over entire syllables and words. It is convenient to think of speech as a succession of segments (which may or may not coincide closely with phonemes, ideal segments) in order to capture it for discussion in written discourse, but actual phonetic analysis of speech confounds such a model. It should be pointed out, however, that if we wish to set down a representation of dynamic, complex speech in static writing, constructs like phonemes are very convenient fictions for indicating what we are trying to set down (alternative units for capturing language in written form include the syllable and the word).
Workbook section
Exercise 1: Kalaallisut
Kalaallisut, or Greenlandic, is an Eskimo-Aleut language spoken by most of the population of Greenland, and has more speakers than all other Eskimo-Aleut languages combined. While Kalaallisut is currently written using five vowel letters, it is analyzed as having only three underlying vowel phonemes.
From the following words, deduce Kalaallisut's phonemic vowel inventory and what conditions the allophonic vowels:
- assaat - forearm
- assoqquppaa - goes windward of it
- assoruuppoq - pulls himself together
- ilisimannippoq - has knowledge of something
- isuma - mind
- kikkut - which, whom, whose (pl.)
- mulequt - leaf
- nukarlersaat - the youngest of them
- nuliariipput - they are married
- orsuut - blubber
- paamaarpoq - is slow
- paaq - soot
- qinnilinnik piiaat - screwdriver
- sakiak - rib
- terlippoq - is safe
- uagut - we
- utoqqaq - old
- uffarvik - bathtub
- ullortuvoq - the day is long
- versi - verse
(Words taken from a Greenlandic-English dictionary.)
Notes: Other conventions, such as double pipes || || or double slashes // //, may also be seen.
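As a computational starting point for the exercise (without giving the answer away), the Python sketch below tabulates, for each vowel letter in the word list, the characters that immediately follow it; inspecting such a table is one way to spot which vowel qualities are conditioned by their environment. The tokenization is naive (it treats every letter as a segment) and is only meant as scaffolding.

```python
from collections import defaultdict

WORDS = ["assaat", "assoqquppaa", "assoruuppoq", "ilisimannippoq", "isuma",
         "kikkut", "mulequt", "nukarlersaat", "nuliariipput", "orsuut",
         "paamaarpoq", "paaq", "qinnilinnik", "piiaat", "sakiak",
         "terlippoq", "uagut", "utoqqaq", "uffarvik", "ullortuvoq", "versi"]

VOWELS = set("aeiou")

# Map each vowel letter to the set of characters that follow it ('#' = word boundary).
following = defaultdict(set)
for word in WORDS:
    for i, ch in enumerate(word):
        if ch in VOWELS:
            following[ch].add(word[i + 1] if i + 1 < len(word) else "#")

for vowel in sorted(following):
    print(vowel, "->", " ".join(sorted(following[vowel])))
```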
https://en.wikibooks.org/wiki/Linguistics/Phonology
Virtual reality (VR) uses technology to create a completely simulated environment that a user can experience and interact with. The hardware for virtual reality typically includes a computer capable of real-time scene simulation; wearable devices (e.g., haptic gloves) that sense and respond to the motions of the user; a display for visual output; devices for audio feedback; and trackers for body, head, and eye movement. Virtual reality optics include the cameras that capture raw data for scene simulation; the fiber optics used in gloves and clothing to send and receive data; head-mounted displays (HMDs) that generate 3D perception; immersive and semi-immersive projection displays; and sensors that track the motion of the user and their eyes. Currently, the virtual reality optics of most interest are HMDs, also known as near-eye displays.
https://www.synopsys.com/glossary/what-is-virtual-reality-optics.html
Every life experience, from our birth to our death, can be reduced to electrical stimulation of our brains by sensory organs providing us with information about the world around us. "Reality" is our interpretation of these electrical signals, which means that our brains essentially construct our own reality. Whatever you feel, hear, see, taste, or smell is an interpretation of the world around you that exists solely in your own brain. In general, even if we understand this concept, we work under the assumption that our interpretations are pretty close to the external world. Actually, this is not true at all. In certain crucial ways, the brain "sees" things that do not actually reflect the information being presented to our senses. We each live in our own reality bubble, constructed of both how we perceive using our senses and how our brains interpret these perceptions. This is exemplified by the concept of color. Color in itself is not a property of the world around us; rather, it is a category created by our perceptions. To experience the world with meaning, the brain must filter the world through our own lenses. This is what makes virtual reality so intriguing for the future of communication in a variety of fields. Now, our method of communicating our perceptions is words, and words have often proven ineffective for relaying our intentions and interpretations. With virtual reality, there is the potential for us to literally show each other the way we see. Virtual reality could allow us to reveal a world without our filter, which could endow mankind with a new method of communication, a sort of telepathy, bridging the gap that exists due to our own unique interpretations of the world. With virtual reality, there is less ambiguity about what we mean than when we speak our intentions, resulting in a far more complete understanding, as all parties hold the same information. Understandably, excitement about these possibilities extends across a variety of fields. In this blog, we will look into the history of virtual reality, how it works, and its various applications. Though the concept of virtual reality has been around since the 1950s, most people were not aware of it until the 1990s.1 However, the beginnings of this revolutionary concept long predate the technology itself. If you think of virtual reality as starting from the idea of creating the illusion of being somewhere other than where we actually are, it can be traced back to the panoramic paintings of the early 19th century.2 These murals were designed to fill the entire field of vision of the viewer to make the paintings come to life, creating the illusion of really being there. Clearly, the desire to see things differently from our reality has been present for centuries. In the 1930s, Stanley G. Weinbaum predicted virtual reality in his science fiction short story "Pygmalion's Spectacles."3 The story centers on a virtual reality system that uses goggles to play back a holographic recording of experiences involving all of the senses. In 1956, the first step towards virtual reality came into existence with the invention of the Sensorama.4 The Sensorama was invented by cinematographer Morton Heilig, who produced short films for the machine that immersed the viewer in the experience using a 3D display, vibrating seats, and smell generators.
In the 1960s, Heilig followed the Sensorama with the invention of the Telesphere Mask, the first head-mounted display, which featured stereoscopic 3D imagery and stereo sound. In 1961, Philco Corporation engineers created the Headsight, a head-mounted display as we know them today.5 This technology used a separate video screen for each eye as well as a magnetic motion tracking system linked to a closed-circuit camera. It was designed to let military personnel view dangerous situations from a distance. As the user moved their head, the camera would move so they could look around the environment naturally. This was the first step towards the head-mounted displays we know today, though it was not integrated with a computer. That would come later, in 1968, when Ivan Sutherland and his student Bob Sproull created the first computer-connected virtual reality head-mounted display, called the Sword of Damocles.6 This heavy device hung from the ceiling, as no user could comfortably support the weight of the machine, and required the user to be strapped in. In 1969, computer artist Myron Krueger developed a series of responsive "artificial reality" experiences.7 Projects GLOWFLOW, METAPLAY, and PSYCHIC SPACE ultimately led to VIDEOPLACE technology, which allowed people to communicate through this responsive virtual environment. In 1991, virtual reality became publicly available through a series of arcade games, though it was still not available in homes. In these games, a player would wear VR goggles, which provided immersive stereoscopic 3D images, and some units even allowed for multi-player gaming. In 1992, the sci-fi movie "The Lawnmower Man" introduced the concept of virtual reality to the general public, with Pierce Brosnan playing a scientist who uses virtual reality to turn a man with an intellectual disability into a genius.10 Interest in virtual reality surged, and in 1993, Sega announced that it would be releasing a VR headset for the Sega Genesis console, though the technology failed to develop and it was never actually released. In 1995, Nintendo also released a 3D gaming console, though it flopped because it was difficult to use, and it was discontinued shortly after release. In 1999, the concept of virtual reality became mainstream with the film "The Matrix," in which some characters live entirely in virtually created worlds; though previous films had touched on the concept, it was "The Matrix" that had a major impact. In the 21st century, virtual reality technology has seen rapid development. As computer technology has evolved, prices have come down, making virtual reality more accessible. With the rise of smartphones have come the HD displays and graphics capabilities necessary for lightweight, usable virtual reality devices. Today, technology such as camera sensors, motion controllers, and facial recognition is part of everyday devices. Companies like Samsung and Google have started offering virtual reality through their smartphones, and video game companies like PlayStation offer VR headsets for their games. The rising prevalence of virtual reality headsets has made this technology widely known. Given the strides VR technology has made in the last decade, the future of virtual reality offers fascinating possibilities. For the sake of simplicity, we will explain how virtual reality works through head-mounted displays, as this is the most widely known virtual reality technology.
In most headsets, video is sent from a computer to the headset using an HDMI cable.11 Headsets use either two feeds to one display or one LCD display per eye. Additionally, lenses are placed between the pixels and the eyes, and can sometimes be adjusted to match the distance between the user's eyes. These lenses focus the picture for each eye and create a stereoscopic 3D image using the principle Wheatstone demonstrated back in the 19th century. VR head-mounted displays also immerse the user in the experience by increasing the field of view, meaning the width of the image.12 A 360-degree display is unnecessary and too expensive, so most headsets use a field of view of around 100 to 110 degrees. For the picture to be effective, the frame rate must be a minimum of 60 frames per second, though most advanced headsets go beyond this, upwards of 100 frames per second. Another crucial aspect of VR technology is head tracking.13 Head tracking means that the picture in front of you shifts as you move your head. The system used for head tracking is called 6DoF (six degrees of freedom): it plots your head's position on the X, Y, and Z axes and tracks its rotation about them, capturing all head movements. Depending on the specific headset, the hardware used for this may include a gyroscope, magnetometer, and accelerometer. Headphones are also used in VR headsets to increase immersion. In general, either binaural or 3D audio gives the user a sense of depth of sound, so that a sound can seem to come from the side, from behind, or from a distance. Motion tracking technology is still being perfected in these VR headsets. Some systems use motion sensors to track body movements; the Oculus Touch, for example, provides wireless controllers that allow you to use your hands to perform actions in a game. Finally, eye tracking is the latest component to be added to certain VR headsets. In these, an infrared sensor monitors the user's eye movements so that the program knows where you are looking in your virtual environment. This allows in-game characters to react to where your eyes are, and it also makes the depth of field more realistic. Further development of this technology is also expected to reduce motion sickness, as it will make the experience feel more realistic to your brain.
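As a rough illustration of what a 6DoF head-tracking loop does with sensor data, here is a simplified Python sketch that integrates gyroscope readings (angular velocity) into an orientation and a linear velocity estimate into a position. Real headsets fuse several sensors with far more sophisticated filtering; the data structure and update step here are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    # Three rotational degrees of freedom (radians)...
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    # ...and three translational ones (metres): 6DoF in total.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def integrate(pose, gyro, velocity, dt):
    """Advance the pose by one sensor sample.

    gyro     -- angular velocity (yaw, pitch, roll rates) in rad/s, e.g. from a gyroscope
    velocity -- linear velocity (vx, vy, vz) in m/s, e.g. fused from accelerometer data
    dt       -- time since the last sample, in seconds
    """
    return HeadPose(
        yaw=pose.yaw + gyro[0] * dt,
        pitch=pose.pitch + gyro[1] * dt,
        roll=pose.roll + gyro[2] * dt,
        x=pose.x + velocity[0] * dt,
        y=pose.y + velocity[1] * dt,
        z=pose.z + velocity[2] * dt,
    )

# At 100 frames per second, dt is 0.01 s per update.
pose = HeadPose()
pose = integrate(pose, gyro=(0.5, 0.0, 0.0), velocity=(0.0, 0.0, 0.1), dt=0.01)
print(pose)  # the renderer would redraw the scene from this new viewpoint
```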
With a greater understanding of this technology, you can see how it can be useful in countless ways across a variety of fields. Virtual reality has already provided a lot of value to the military, one of the earliest motivations for the technology, with more possibilities on the horizon. Currently, virtual reality is being used to train soldiers for war.14 It is not hard to understand why the military leapt on this technology: it allows a user to experience a dangerous environment without any actual danger to them. This makes military training not only safer but also more cost-effective in the long run, as real or physically simulated situations are quite expensive and can cause damage to costly equipment.15 Combat simulators are a common military application of VR, using headsets to give soldiers the illusion of being at war.16 This not only prepares them for the experience of war, it gives them a space in which they can practice using military technology, with the ability to start over if they make a mistake. It also allows them to practice with each other within a virtual world, enhancing the communication of a unit.17 These virtual reality headsets also allow soldiers to prepare to make important decisions in stressful situations.18 Given the demographics of army recruits in training (young adult men), this method of training is highly effective, as this group has grown up playing video games and finds the learning method appealing.19 Not only does virtual reality have applications for training soldiers, it may also be a helpful tool for helping them heal after combat; specifically, it may help treat PTSD.20 The idea is that virtual reality may allow soldiers to be exposed to potential triggers in a safe environment, letting them process their symptoms and learn to cope with new situations. In the future, the military will likely take advantage of further developments in VR technology by enhancing the realism of its simulators. More humanitarian and peacekeeping training will likely be done through VR, and facial recognition technology may be incorporated to assess a person's emotional state, which could further enhance communication, both between soldiers and in interactions with people in foreign countries. Regardless of how the new technology is applied, it is certain that the military will stay at the cutting edge of the latest VR developments. Presently, the entertainment industry is next in line after the military to benefit from further development of virtual reality technology. Most obviously, the world of gaming has seen impressive (and not so impressive) advances with VR headsets. Just a couple of years ago, virtual reality gaming seemed unlikely to actually come to fruition. Today, the three most prominent VR game systems are the Oculus Rift, PlayStation VR, and the HTC Vive.21 Each features games that allow users to immerse themselves in an environment, whether it is a boxing ring, a truck, or Gotham. The future of VR in gaming will likely center on the development of better eye tracking and motion detection within virtual reality. With these developments, video games will be more immersive than ever. Today, mobile phone companies are competing to create the most compelling VR device. Google recently released the Daydream View, a VR headset designed to be more comfortable and technologically advanced than its predecessor, Google Cardboard.22 Samsung has also released a comparable device called the Gear VR.23 Both devices allow the user to virtually visit anywhere in the world, use a series of apps, and, as can be expected, play immersive games. As virtual reality technology becomes more prevalent, affordable, and usable, more such devices are certain to reach the market. The future of virtual reality is beyond anyone's wildest imagination at the moment, but suffice it to say, the technology will only get more realistic from here. The potential applications are enormous in the military, the private sector, and the world of psychology, and other areas are set to benefit as well, in ways we cannot yet anticipate. With time, virtual reality may be commonly available in everyone's living room. Regardless of its specific future applications, virtual reality is set to change the world.
If you want to learn more about the fascinating technology behind VR or its applications, see the links below for further reading.
https://www.meraglim.com/blog/risk-assessment-software/the-past-present-and-jaw-dropping-potential-future-of-virtual-reality/
Considerations when Using RTI Models with Culturally and Linguistically Diverse Students. Janette Klingner, University of Colorado at Boulder, National Center for Culturally Responsive Educational Systems. Response to Intervention Models • In the newly reauthorized IDEA, eligibility and identification criteria for LD have changed [614(b)(6)(A)-(B)]. When determining whether a child has a specific learning disability: • The LEA is not required to consider a severe discrepancy between achievement and intellectual ability. • The LEA may use a process that determines if a child responds to scientific, research-based intervention as part of the evaluation. Response to Intervention Models • Some critical issues we will discuss: • What should "research-based interventions" at the first and second tiers look like for culturally and linguistically diverse students? • What counts as research? We need to find out not only "what works," but what works with whom, by whom, and in what contexts. • What should the RTI model look like for culturally and linguistically diverse students? Response to Intervention: A Three-tiered Model [Pyramid diagram: research-based instruction in the general education classroom at the base; intensive assistance, as part of the general education support system, in the middle; special education at the top.] 1st Tier • Research-based instruction at the first tier is for all students and consists of explicit instruction in: • phonological awareness, • the alphabetic principle (letter-sound correspondence), • fluency with connected texts, • vocabulary development, and • comprehension. 2nd Tier • The second tier is only for those students who do not reach expected benchmarks as measured by a progress-monitoring assessment instrument such as DIBELS, the Dynamic Indicators of Basic Early Literacy Skills. • Students receive additional intensive support in small groups or individually. • This support is provided within general education. • Students may receive this additional support in their classrooms or in a different setting. 3rd Tier • Students who continue to struggle are then provided with a third tier or level of assistance that is more intensive. It is this third tier that many would consider to be special education. Critical Issues • The RTI model presumes that if a child does not make adequate progress with intensive research-based instruction, he or she must have an internal deficit of some kind. • How do we ensure that the child has in fact received culturally responsive, appropriate, quality instruction? • As with earlier identification criteria, this model must be based on students having received an adequate "opportunity to learn." What Do We Mean by "Research-based"? The RTI model is based on the principle that instructional practices or interventions at each level should be based on scientific research evidence about "what works." However, it is essential to find out what works with whom, by whom, and in what contexts; one size does not fit all. Reflection & Discussion • What does it mean when we say a practice is "research-based"? What assumptions do we make? • How do we account for language and culture when designing interventions, conducting research, and generalizing findings? • What kinds of questions do we need to ask as researchers and/or "consumers" of research? What Counts as Research? • We promote a broader view of what counts as research and what sorts of empirical evidence are relevant to complex issues that involve culture, language, social interaction, institutions, and cognition (Gee, 2001).
• This is particularly important as we move to RTI models. We value qualitative and mixed-methods approaches able to answer questions about complex phenomena, approaches that help us: understand essential contextual variables that contribute to the effectiveness of an approach, increase our awareness of implementation challenges, and provide information about the circumstances under which and with whom a practice is most likely to be successful. What Counts as Research? • Much can be learned by observing in schools and classrooms where culturally and linguistically diverse students excel as readers. High-achieving first grade classrooms included: a positive, cooperative classroom environment, with much reinforcement of students; excellent classroom management; explicit instruction in word-level, comprehension, and writing skills; and frequent experiences with high-quality literature, with students engaged in a great deal of actual reading (Pressley, Allington, et al., 2001; Pressley, Wharton-McDonald et al., 2001). The most effective 1st grade teachers: made sure students were involved in tasks matched to their competency level, and accelerated demands on students as their competencies improved; carefully monitored students and provided scaffolded support; encouraged students to self-regulate; and made strong connections across the curriculum and with students' lives and experiences. Research-based Interventions: What Works With Whom, By Whom, and In What Contexts? • These issues of population validity and ecological validity are essential if research results are to be generalized, yet they frequently seem to be ignored. • Experimental research studies tell us what works best with the majority of students in a research sample, not all students. With Whom? • When deciding if a practice is appropriate for implementation as part of an RTI model, it should have been validated with students like those with whom it will be applied. • Although the National Reading Panel report "did not address issues relevant to second language learning" (2000, p. 3), the report's conclusions are commonly cited as support for Reading First initiatives for all students. • Research reports should include information about participants' language proficiency, ethnicity, and life experiences (e.g., socio-economic status, specific family background, immigration status). • Data should be disaggregated to show how interventions might differentially affect students from diverse backgrounds. With Whom? • When research studies do not include culturally and linguistically diverse student populations, or fail to disaggregate data based on important variables, what does this say regarding the researchers' assumptions about what matters, who counts, and what works? • English language learners are often omitted from participant samples because of their limited English proficiency. • Yet language dominance and proficiency are important research variables and can affect treatment outcomes. • Leaving students out of studies limits the external validity and applicability of such studies, especially for teachers who have ELLs in their classes. By Whom? • On-going analyses of general education classrooms should be an essential component of RTI models. • School personnel should first consider the possibility that students are not receiving adequate instruction before it is assumed they are not responding because they have deficits of some kind. By Whom?
• We must observe in classrooms and note: • the quality of instruction, • the relationship between the teacher and students, • how culturally and linguistically diverse students are supported, and • how the teacher promotes interest and motivation. • What do we conclude about students' opportunities to learn? By Whom? • Is the teacher… • skilled in effective intervention and assessment procedures for culturally and linguistically diverse students? • knowledgeable about the importance of culture in learning? • knowledgeable about second language acquisition, bilingual education, and English as a second language (ESL) teaching methods? • Does the teacher… • have the attributes of culturally responsive teachers? • build positive, supportive relationships with students? • work well with students' families and the community? • help most culturally diverse students succeed to high levels? • collaborate well with other professionals? In What Contexts? • It is essential to examine school contexts when implementing RTI models. • A student can be considered at risk at one time and not at another, in one class but not in another, and in one school but not in another (Richardson & Colfer, 1990). • Are there culturally diverse children in some schools who respond favorably to an intervention and comparable culturally diverse children in another school who do not respond as well? In What Contexts? • Variations in program implementation and effectiveness across schools and classrooms are common (see the First Grade Studies for a classic example, Bond & Dykstra, 1967). • What is occurring when this happens? • Is it the program, the teachers' implementation, or the school context? • What is it about the system that facilitates or impedes learning? • Schools are subject to larger societal influences that should not be ignored. In What Contexts? • To conclude that failure resides within students when they do not progress with a certain intervention, and then to move them to the second or third tier of an RTI model, or decide they belong in special education, without considering other factors, is problematic. Revised RTI Model [Pyramid diagram of the revised model: culturally responsive instruction in the general education classroom at the base; intensive assistance, as part of the general education support system; referral to a Child Study Team or Teacher Assistance Team; special education at the top.] 1st Tier • The foundation of the first tier should be culturally responsive, quality instruction with on-going progress monitoring within the general education classroom. • We see this first tier as including two essential components: • (a) research-based interventions, and • (b) instruction by knowledgeable, skilled teachers who have developed culturally responsive attributes. Culturally Responsive RTI Model • In their teacher education programs, as well as through ongoing professional development, teachers should become familiar with: • instructional strategies linked to academic growth for culturally and linguistically diverse students, • the language acquisition process and the unique needs of ELLs, and • assessment procedures for monitoring progress, particularly in language and literacy. • Teachers need to know if their interventions are effective and how to adjust instruction for students who do not seem to be responding. Culturally Responsive Literacy Instruction • What does it mean to provide culturally responsive literacy instruction? • All practice is culturally responsive, but responsive to which culture(s)? • Culture is involved in all learning.
• Culture is not a static set of characteristics located within individuals, but is fluid and complex. Culturally Responsive Literacy Instruction: Includes explicit instruction in phonological awareness, the alphabetic code, fluency, vocabulary development, and comprehension strategies. Includes frequent opportunities to practice reading with a variety of rich materials in meaningful contexts. Emphasizes cultural relevance and builds on students' prior knowledge, interests, motivation, and home language. But culturally responsive instruction goes beyond these basic components. In conceptualizing culturally responsive literacy instruction, we draw upon Wiley's (1996) framework for working with diverse students and families: • accommodation, • incorporation, and • adaptation. Accommodation requires teachers and others to have a better understanding of the communicative styles and literacy practices among their students and to account for these in their instruction. • "Literacy learning begins in the home, not the school … instruction should build on the foundation for literacy learning established in the home" (Au, 1993, p. 35). • Several qualitative studies have shown that, even in conditions of substantial poverty, homes can be rich in print and family members engage in literacy activities of many kinds on a daily basis. Incorporation requires studying community practices that have not been valued previously and incorporating them into the curriculum. • "We must not assume that we can only teach the families how to do school, but that we can learn valuable lessons by coming to know the families, and by taking the time to establish the social relationships necessary to create personal links between households and classrooms" (Moll, 1999, p. xiii). • "Teachers and parents need to understand the way each defines, values, and uses literacy as part of cultural practices; such mutual understanding offers the potential for schooling to be adjusted to meet the needs of families" (Cairney, 1997, p. 70). Adaptation involves the expectation that children and adults must acculturate or learn the norms of those who control the schools, institutions, and workplace. • Culturally and linguistically diverse parents want to give their children linguistic, social, and cultural capital to deal in the marketplace of schools, but are unsure how to go about doing this. • "When schools fail to provide parents with factual, empowering information and strategies for supporting their child's learning, parents are even more likely to feel ambivalence as educators [of their own children]" (Clark, 1988, p. 95). Wiley's framework can be used as a backdrop for helping us think about culturally responsive literacy instruction and RTI models. It is not enough to implement isolated evidence-based interventions. Instructional methods do not work or fail as decontextualized practices, but only in relation to the socio-cultural contexts in which they are implemented. Reflection & Discussion: 1st Tier • What should the first tier look like for culturally and linguistically diverse students? • Who should be responsible for making sure students are receiving opportunities to learn at the first tier? • What can you do in your role to make sure Tier 1 includes culturally responsive instruction? When students have not made adequate progress despite being taught with appropriate, culturally responsive methods, a second tier of intervention is warranted.
2nd Tier: This tier is characterized as providing a level of intensive support that supplements the core curriculum and is based on student needs as identified through progress monitoring. 2nd Tier Reflection & Discussion • What should Tier 2 look like for culturally and linguistically diverse students? • Should Tier 2 interventions be individualized or the same for ALL learners at the Tier 2 level? • Who should provide Tier 2 interventions, and with what preparation? • Where should interventions take place? • What funds should be used to provide these services? 3rd Tier: This phase starts with a referral to a Teacher Assistance Team or a Child Study Team. This step should overlap with the second tier (i.e., the provision of intensive support should not stop for a referral to begin). 3rd Tier Reflection & Discussion • 1. What aspects of the traditional referral process should be kept? What needs to be changed? • 2. Who should be on the TAT, CST, or other team, and for what purpose? What should be the role of the: • classroom teacher? parent? • special education teacher? psychologist? • English language acquisition specialist? • 3. How should "response to intervention" data be used? • 4. What further assessments should be done at this level? • 5. What additional data should be collected? 3rd Tier • The make-up of the team should be diverse and include members with expertise in culturally responsive instruction and, if appropriate, expertise in English language acquisition and bilingual education. Data-based Decision-Making: 3rd Tier • Teams should determine how to alter the support a student has been receiving and develop specific instructional objectives based on student performance and other data. • An important role for the team should be observing the student in her classroom as well as in other settings. 4th Tier: In the model we propose, this tier would be special education. The hallmark of instruction at this level is that it is tailored to the individual needs of the student, and is even more intensive than at previous tiers. RTI Models Represent a New Beginning • We are encouraged by the potential of RTI models to improve educational opportunities for culturally and linguistically diverse students. • RTI models represent a new beginning and a novel way of conceptualizing how we support student learning: along a continuum rather than categorically. Need for Ongoing Dialogue about Critical Issues • At the same time, we are concerned that if we do not engage in dialogue about critical issues, RTI models will simply be old wine in a new bottle, in other words, just another deficit-based approach to sorting children. • It is our responsibility to make sure this does NOT happen. Closing thoughts… • What would an effective RTI model for culturally and linguistically diverse students look like? • How will we know when we have succeeded?
https://www.slideserve.com/fran/considerations-when-using-rti-models-with-culturally-and-linguistically-diverse-students
A method of controlling a power plant, a power plant controller, and a power plant are provided. A grid operator or balancing authority may determine the base amount of active power that power producing facilities such as fossil plants, hydro plants or wind power plants may be allowed to produce at any specific time, i.e. the amount of power required by customers supplied by the grid, and issue appropriate references to any power plant that feeds into the grid. Such power plants are generally characterized by their ability to regulate the real or active power within some dynamic range depending on various factors such as environmental conditions, the number of power production units up and running, the type of technology used by the power plant, etc. The active power response rate and the reactive power response rate of a power plant also depend on the plant type. The proportions of active power and reactive power that are required from the power plants may vary according to the load on the grid and the grid voltage. Conventional power producing plants - in which power is not generated from "renewable" energy sources - can increase or decrease their power output as necessary and can respond to the momentary demand situation. In the case of power production plants that produce or generate electricity using renewable energy sources such as wind energy or solar energy, the amount of power that can be output depends to some extent on the environmental situation, for example the strength of the wind in the case of a wind power plant; or the time of day and extent of cloud cover in the case of a photovoltaic power plant. Reactive power is injected into or extracted from the power network in order to control the grid voltage. Conventional power networks, comprising only power production plants that feed into the electricity grid, generally comprise some means of absorbing and generating reactive power, and the amount of reactive power that is absorbed or released is governed by the grid operator. Reactive power is the imaginary component of the power vector. Reactive power is absorbed or generated (released) by different components or elements of a power production facility, such as shunt capacitors, shunt reactors, etc., and these must be controlled precisely, since the flow of reactive power in the power production facility influences the voltage levels at the point of connection between the power production facility and the electricity grid. Renewable power plants that use state-of-the-art power electronics are capable of supplying reactive power control directly from an inverter, using reactors or capacitors only as supplemental components whenever the grid needs additional reactive capability. The amount of power generated by a power production facility is regulated or managed by a plant operator, which ensures that the power fed from a power plant into the grid fulfils the grid requirements at all times. A conventional power plant controller can be designed to operate with a local operator-controlled voltage or reactive power reference (also referred to in the following as an "MVAr reference"), or it can be configured to operate with a remote controlled voltage or MVAr reference issued by the grid operator or transmission operator.
As indicated above, the ability of a renewable power plant to respond to grid demands - whether for active or reactive power - is limited by its dependency on the current or momentary environmental conditions. It is therefore an object to provide an improved way of operating a power plant. This object is achieved by the method of controlling a power plant, by the plant controller, and by the power plant of the claims. The method of controlling a power plant - connected at a point of connection to an electricity grid and comprising at least one power production facility connected to at least one power storage facility - comprises receiving or obtaining a reactive power grid demand and/or voltage grid demand of the electricity grid; obtaining operating data for the power plant facilities; computing reactive power and/or voltage references for the power plant facilities on the basis of the grid demands and the operating data; and operating the power plant facilities on the basis of the references such that a net reactive power and/or a net voltage at the point of connection satisfies the grid demands in order to fulfil the relevant grid requirements. The power storage facilities are connected to the power production facilities, which is to be understood to mean that energy output by a power production facility can be transferred in some manner to one or more of the power storage facilities. Energy transfer can take place over the electricity grid, or a power storage facility may be directly fed by a power production facility, effectively bypassing the electricity grid. An advantage of the method is that not only power production facilities but also power storage facilities can be collectively controlled, so that the potential of each facility to contribute to a reactive power and/or voltage requirement can be fully exploited. Since renewable power plants such as wind parks can only regulate their output within the constraints of the energy available in the wind, how far in advance a plant operator can realise the available regulation range depends on the accuracy of a forecast that takes into account, for example, how many turbines might be stopped for service, or how many turbines might have lower production due to wear on a component awaiting exchange. The method can overcome this disadvantage, since the integration of power storage - even only to a limited extent - into a wind power plant or a solar power plant will significantly increase the regulation capability of such plants and further increase the power plant's ability to offer ancillary services such as voltage support or low voltage ride-through to the grid operators. The plant controller for controlling a power plant - connected to an electricity grid at a point of connection and having at least one power production facility connected to at least one power storage facility - comprises a grid monitoring unit for determining a reactive power grid demand and/or a voltage grid demand of the electricity grid; a facility monitoring unit for obtaining operating data of the power plant facilities; a reference computation unit for computing reactive power and/or voltage references for the power plant facilities on the basis of the grid demands and the operating data; and a distribution unit for distributing the references to the power plant facilities such that the facilities operate to fulfil grid requirements at the point of connection.
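As an illustration of this four-unit structure, the following Python sketch wires the units together in a single control step. It is a minimal reading of the description above, not an implementation from the patent; the unit interfaces and data shapes are assumptions made for the example.

```python
from typing import Callable, Dict

class PlantController:
    """Minimal skeleton of the plant controller described above: a grid
    monitoring unit, a facility monitoring unit, a reference computation
    unit, and a distribution unit, here modelled as injected callables."""

    def __init__(self,
                 read_grid_demand: Callable[[], dict],
                 poll_facilities: Callable[[], dict],
                 compute_references: Callable[[dict, dict], Dict[str, float]],
                 send_references: Callable[[Dict[str, float]], None]) -> None:
        self.read_grid_demand = read_grid_demand        # grid monitoring unit
        self.poll_facilities = poll_facilities          # facility monitoring unit
        self.compute_references = compute_references    # reference computation unit
        self.send_references = send_references          # distribution unit

    def control_step(self) -> None:
        demand = self.read_grid_demand()   # reactive power and/or voltage demand
        data = self.poll_facilities()      # momentary operating data per facility
        refs = self.compute_references(demand, data)
        self.send_references(refs)         # facilities act so the PCC meets demand
```

In a real controller, each callable would wrap communication with the grid operator, the facility controllers (e.g. "park pilots"), and measurement equipment at the point of connection.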
An advantage of the plant controller is that control of the power storage facilities and control of the power production facilities can be combined in such a way that each of these facilities can contribute, according to its capability, to the reactive power and the voltage at the grid connection point or "point of common connection", usually abbreviated as PCC. This also allows the power storage facilities and power production facilities to be used to their optimum capacity. By computing "customized" references for each of the power facilities, these can always be operated to ensure that the power plant as a whole provides the required levels of reactive power and/or voltage at the point of common connection, as specified by the grid demands given or communicated to the plant controller. The power plant comprises a number of power production facilities connected to a number of power storage facilities; the power plant is realised for connection to an electricity grid at a point of connection and comprises a plant controller for controlling the power plant using the method. An advantage of the power plant is that it can control any number of power production facilities and any number of power storage facilities, while these power facilities can be of any type or nature. The different or varying production capabilities of power production facilities can be optimally combined with the different or varying storage capabilities of power storage facilities at any one instant, while always satisfying reactive power and/or voltage requirements of the grid at the power plant's point of connection. By incorporating power storage facilities into the power plant, a plant controller can offer better voltage or reactive power support than prior-art power plants that do not incorporate power storage facilities. Another advantage of the power plant is that the power production facilities and the power storage facilities need not be located geographically close to each other. Particularly advantageous embodiments and features are given by the dependent claims, as revealed in the following description. Features of different claim categories may be combined as appropriate to give further embodiments not described herein. The arrangement of interconnected power production facilities and power storage facilities is to be understood to collectively comprise a "power plant". As indicated above, for geographically separate power facilities, energy transfer between a power production facility and a power storage facility of the power plant can take place over the grid, so that the power plant can be regarded as a "virtual" plant. Even if the facilities are geographically remote from each other, they can be collectively controlled by the plant controller. Therefore, in the following, the terms "power plant", "virtual power plant", "collective power plant", "combined power plant" and "aggregate power plant" can have the same meaning and may be used interchangeably. Connection points of the facilities of the power plant to the grid can be collectively regarded as a "single" point of connection, which can also be referred to as a "virtual", "combined" or "aggregate" point of connection. Preferably, the facilities of the power plant are connected to the plant controller over a suitable communications network.
Such a connection allows controllers of geographically separate facility arrangements to exchange data, for example storage capacities of storage facilities, which data can be used in determining the references or setpoints for the power facilities, as will be explained below. A power production facility is typically used to generate power that is fed into the electricity grid for consumption. Some of the generated power can be transferred - as required - to a power storage facility. Therefore, in the following, the term "power facility" can refer to a power production facility or a power storage facility. A "facility" can comprise a single unit or it may comprise a number of units. For example, a power production facility such as a wind park can comprise any number of power generating units, in this case wind turbines. Equally, a power storage facility can comprise a plurality of storage units, for example a thermal energy storage plant comprising a bank of thermal energy storage units controlled by a common controller. The power plant controller can directly manage a power facility by generating references for a controller of the power facility, for example a "park pilot" in the case of a wind park. In this case, the power plant controller preferably communicates with the park pilot, which in turn distributes the references to the individual units of the wind park. A power storage facility of the power plant can be "reversible" or "non-reversible". A reversible power storage facility can output energy in the same form as it was input into the storage facility. An example of a reversible power storage facility might be a battery, which can be charged using an electrical current, and which outputs an electrical current when discharged. Another example might be a hydro-electric facility, for which electrical energy is used to pump water into a reservoir, and which outputs electricity again when the stored water is used to drive a turbine. A non-reversible power storage facility outputs energy in a different form. An example of a non-reversible power storage facility might be a synthetic natural gas facility, for which electrical energy is used to synthesize gas. The gas can be supplied directly to consumers without undergoing any further conversion. In a power plant, a power production facility can comprise any of the group of power production facilities comprising a wind power plant; a tidal power plant; a solar power plant; any other type of power plant based on renewable energy sources; or any other type of power production facility that is capable of producing energy that can be fed into an electricity grid and/or converted for storage in a power storage plant of the power plant. Similarly, in a power plant, a power storage facility can comprise any of the group of power storage facilities comprising a thermal storage facility; a battery storage facility; a flywheel storage facility; a compressed air storage facility; a synthetic natural gas storage facility; or any other type of power storage facility that is capable of converting electrical energy and storing it in a form from which it can later be retrieved. Preferably, a power storage facility is a reversible power storage facility that can provide some level of reactive power, whenever required, to the point of common connection.
The grid demand or grid reference is given to the reference computation unit and is used as a basis from which to compute or calculate U (voltage) or Q (reactive power) references to be sent to a storage facility or unit, a power production facility or unit, a sub-controller, etc. The reference computation unit preferably generates references that are tailored to the capabilities of the power facilities. Therefore, in a particularly preferred embodiment, the step of obtaining operating data of a power plant facility comprises obtaining up-to-date or momentary data related to the power and/or reactive power and/or voltage of that power plant facility. For example, a power production facility can report a momentary power output, a momentary reactive power output, and a momentary voltage level. These values can reflect the performance of the power production facility as it operates according to the reference that was most recently received and that is still applicable. In the event that the power production facility could "increase" its output, it can also report a potential power output, a potential reactive power output, and a potential voltage level. For example, a power production facility might report a momentary reactive power output of +50 MVAr, and a potential reactive power output range of +120 MVAr to -40 MVAr, signalling that it is capable of increasing its export by 70 MVAr or reducing its import/export by 90 MVAr. To use this data optimally or most efficiently, the reference computation unit also preferably determines the aggregate momentary situation at the PCC. Therefore, in a further preferred embodiment, the step of obtaining operating data of the power plant facilities comprises measuring net power and/or net voltage and/or net reactive power and/or net power factor at the point of common connection of the electricity grid. Knowing the value of a net or aggregate variable, and knowing the individual contributions and potential of each power facility, the reference computation unit can compute individual references for each power facility to obtain a desired aggregate value, while exploiting the potential of each power facility. By spreading the grid demand over a plurality of power facilities, unfavourable situations can be avoided in which a power facility would be compelled to operate at its limit, or in which a grid requirement would not be fulfilled. The reactive power of a power plant supports the local grid voltage. A grid demand might be to provide a certain amount of reactive power or to absorb a certain amount of reactive power, depending on the grid situation. In the event of a grid contingency such as a voltage drop or a fault on the grid side, it may be necessary for the power plant to provide additional reactive power for grid voltage support. Therefore, in a preferred embodiment, the method comprises the step of measuring net voltage at the real or virtual point of common connection, computing voltage references for the power plant facilities, and operating the power plant facilities to obtain a desired reactive power at the point of common connection. Each power facility can then adjust its power output to provide the necessary proportion of reactive power; a sketch of such a headroom-based allocation follows below.
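The following is a minimal sketch of how individual references might be computed so that the aggregate value at the PCC meets the demand, spreading the error in proportion to each facility's reported headroom. The data structure and the proportional policy are assumptions made for illustration; the patent does not fix a particular allocation rule. The example reuses the figures quoted above (+50 MVAr momentary output, a potential range of +120 MVAr to -40 MVAr).

```python
from dataclasses import dataclass

@dataclass
class FacilityStatus:
    name: str
    q_now: float   # momentary reactive power output, MVAr (export positive)
    q_min: float   # lower limit of the potential reactive power range, MVAr
    q_max: float   # upper limit of the potential reactive power range, MVAr

def compute_q_references(q_demand_pcc: float, q_meas_pcc: float,
                         facilities: list) -> dict:
    """Spread the reactive power error at the PCC over all facilities in
    proportion to their headroom in the needed direction, so that no single
    facility is pushed to its limit. Internal losses are ignored."""
    q_error = q_demand_pcc - q_meas_pcc   # positive: more export is needed
    if q_error >= 0:
        headroom = {f.name: f.q_max - f.q_now for f in facilities}
    else:
        headroom = {f.name: f.q_now - f.q_min for f in facilities}
    total = sum(headroom.values())
    if total <= 0:   # no capability left in the requested direction
        return {f.name: f.q_now for f in facilities}
    return {f.name: f.q_now + q_error * headroom[f.name] / total
            for f in facilities}

plant = [FacilityStatus("wind_park", q_now=50.0, q_min=-40.0, q_max=120.0),
         FacilityStatus("battery",   q_now=0.0,  q_min=-30.0, q_max=30.0)]
# Demand of 80 MVAr at the PCC against 50 MVAr measured: the 30 MVAr error
# is split 70:30 between the wind park (70 MVAr headroom) and the battery
# (30 MVAr headroom), giving references of 71 MVAr and 9 MVAr.
print(compute_q_references(q_demand_pcc=80.0, q_meas_pcc=50.0, facilities=plant))
```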
Another grid demand might be to maintain a certain power factor or to adjust the power plant output to a certain power factor. A grid operator may request such an adjustment to the power factor reference during certain times of the week, for example at peak consumption times, whenever a large transfer of load is expected. Therefore, in a preferred embodiment, the method also comprises the step of measuring a net power factor at the point of common connection, computing power factor references for the power plant facilities, and operating the power plant facilities to obtain a desired power factor at the point of common connection (while at the same time satisfying the underlying reactive power/voltage demand). Each power facility can then adjust its reactive power output to provide the necessary power factor. Another grid demand might be to provide a certain voltage at the point of common connection. The voltage at the PCC may vary in dependence on several factors, for example the momentary grid demand. Therefore, in a preferred embodiment, the method comprises the step of measuring net voltage at the point of common connection, computing voltage references for the power plant facilities, and operating the facilities to obtain a desired voltage at the point of common connection. Each facility can then distribute its voltage reference over any sub-units, such as individual turbines of a power plant or individual storage units of a storage facility, in order to provide the necessary aggregate or net voltage. As indicated above, a power facility can produce and absorb reactive power. Usually, a conventional power production facility is said to export or import reactive power, depending on whether the power production facility is outputting reactive power into the grid, or absorbing reactive power from the grid. The ability of a wind power plant to import or export reactive power is limited by the network design, the actual terminal voltage at each unit, the number of active or operational units, and the current capability of any converters in the power plant. The ability of a storage facility such as a battery storage system to deliver reactive power does not depend on the level of battery charge; instead, its reactive capability may depend on the combined active and reactive current capability of an inverter of the storage facility. Therefore, a power plant will have a collective upper limit and a collective lower limit to the amount of reactive power that can be transferred. In a power plant, each power facility will have its own upper and lower limits. Therefore, a preferred embodiment of the control method comprises the step of monitoring a reactive power transfer - i.e. a reactive power import or a reactive power export - within the power plant relative to predefined upper and/or lower limits, and performing suitable remedial measures should the reactive power tend to move beyond one of those limits. For example, as long as the reactive power transfer lies within the predefined upper and/or lower limits, the power plant facilities can be operated to obtain the desired voltage at the point of common connection. During this "voltage control" of the power facilities, the reactive power of each facility may vary. Should the situation arise in which the reactive power being transferred approaches an upper or lower limit, the power plant facilities are preferably operated to ensure that the reactive power of each facility and/or the overall power plant stays within the predefined limits. In other words, when a limit is approached, the priority shifts to delivering additional reactive power support from units that have not yet reached their limits. Preferably, the reactive power references are computed so that the desired voltage is also met, so that both grid code requirements are fulfilled. A sketch of this remedial redistribution follows below.
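As a rough illustration of this remedial measure, the sketch below clips each facility's reference to its limits and reassigns the clipped remainder to facilities that still have margin. Only the export direction is shown; the import direction is symmetric. The allocation logic is an assumption made for the example, not a rule taken from the claims.

```python
def enforce_limits(refs: dict, limits: dict) -> tuple:
    """Clip each facility's reactive power reference to its (q_min, q_max)
    band and push any excess export demand onto facilities that still have
    upward margin, preserving the aggregate reference at the PCC whenever
    the plant as a whole has enough capability."""
    adjusted, spill = {}, 0.0
    for name, ref in refs.items():
        lo, hi = limits[name]
        adjusted[name] = min(max(ref, lo), hi)
        spill += ref - adjusted[name]       # demand the facility cannot take
    for name in adjusted:
        if spill <= 1e-9:
            break
        room = limits[name][1] - adjusted[name]
        take = min(spill, room)
        adjusted[name] += take
        spill -= take
    return adjusted, spill   # nonzero spill: the plant-level limit is reached

refs = {"wind_park": 130.0, "battery": 5.0}
limits = {"wind_park": (-40.0, 120.0), "battery": (-30.0, 30.0)}
# The wind park is clipped to 120 MVAr; the 10 MVAr remainder moves to the
# battery, which has 25 MVAr of upward margin, giving it a 15 MVAr reference.
print(enforce_limits(refs, limits))
```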
When power is transferred over a transmission line, some transmission line losses inevitably occur. Therefore, in a further preferred embodiment, a reactive power or voltage reference for a power plant facility is computed on the basis of a physical or geographical location of the power facility in relation to the utility grid. By taking the physical plant location into account, for example by directing power facilities closer to the PCC to provide more MVAr than facilities located further away, transmission losses can be kept to a favourable minimum. The step of computing reactive power and/or voltage references for the power plant facilities on the basis of the grid demands and the operating data amounts to determining a "facility contribution plan" for the operation of the power plant facilities. Such a facility contribution plan takes into account the grid demand that must be satisfied as well as the potential or capacity of each of the power facilities to make a contribution to satisfying a net reactive power demand or a net voltage demand. The active power and reactive power grid requirements can fluctuate according to demand. For example, in the case of a fault such as a low-voltage fault, the power plant must be able to deliver a certain amount of active and reactive power to "ride through" the fault. Since certain types of power facility can provide active power as well as reactive power, in a particularly preferred embodiment, the power plant controller comprises a reference distributor unit for distributing active power references and reactive power references between the power facilities according to an active or "real" power component and a reactive power component of a grid requirement in response to a fault. If the power storage facilities are not constrained by charging or discharging active power, their full current capability can be used for delivering or absorbing reactive power. The central controller may prioritize reactive power support under some grid conditions at the expense of active power, if low voltage ride-through is more important than ramping requirements. Generally, a power production facility is realised and controlled to avoid transients in the case of a sudden increase or decrease in the grid voltage. Therefore, in a further preferred embodiment, the step of controlling the power facilities is implemented according to a combined voltage droop characteristic of the power facilities at the point of common connection and/or according to a combined reactive power characteristic of the power facilities. In this way, the power facilities can collectively operate on a desired droop curve. Other objects and features will become apparent from the following detailed descriptions considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention.
Fig. 1 shows a power plant according to an embodiment;
Fig. 2 shows a power plant controller according to an embodiment;
Fig. 3 shows a prior art arrangement of power plants connected to an electricity grid.
In the diagrams, like numbers refer to like objects throughout.
Objects in the diagrams are not necessarily drawn to scale. Fig. 1 shows a power plant 1 according to an embodiment. In this simple exemplary embodiment, the power plant 1 comprises a first arrangement 10 of power facilities with a first plant controller 100, and a second arrangement 11 of power facilities with a second plant controller 110, connected to an electricity grid 3 at a virtual or combined point of connection 300 (indicated by the broken line). The plant controllers 100, 110 are connected by means of a suitable communications network 12, so that the power facilities of the power plant 1 can be controlled collectively as a "virtual" power plant. A grid operator 4 or grid controller 4 (not necessarily part of the power plant 1) issues required grid values Qref, Uref that should be fulfilled at the grid. In this embodiment, the first arrangement 10 of power facilities comprises power production facilities 101, 102 and power storage facilities 103 that are interconnected, i.e. power can be transferred between these facilities independently of the grid 3. Here, the second arrangement 11 of power facilities comprises one or more power storage facilities 111 that can be charged using power from the grid 3, and that can release power to the grid 3 as required. Since the plant controllers 100, 110 are interconnected, they can be controlled collectively to transfer power between the first and second power facility arrangements 10, 11 over the grid. This diagram only shows a very simple arrangement, and it will be noted that the power plant 1 can comprise any number of power facility arrangements, and that each power facility arrangement can comprise any number of interconnected power production facilities and power storage facilities. In response to the grid demand values Qref, Uref issued by the grid controller 4, each plant controller 100, 110 evaluates the momentary situation regarding its power facilities. To this end, each plant controller 100, 110 is supplied with the necessary information about each facility, as will be explained with the aid of Fig. 2. Using the data supplied, each plant controller 100, 110 computes reactive power and voltage references ref_101, ref_102, ref_103, ref_111 for each of the facilities 101, 102, 103, 111. The references ensure that each facility contributes sufficiently to the overall grid demand Qref, Uref so that the power plant 1 as a whole supplies the desired amount of power to the grid 3 at the aggregate point of common connection 300. By controlling the power plant 1 using this method, the net reactive power and/or the net voltage at the combined point of common connection reach the desired values, ensuring that the power plant 1 as a whole can fulfil the relevant grid code requirements. Fig. 2 shows a functional block diagram of a power plant controller 100, 110 in a power plant 1 according to an embodiment. A power plant controller 100, 110 can control a plurality of power facilities 101, 102, ..., 111. Here, the power facilities comprise a first wind park 101 with any number of wind turbines WT, a second power production facility 102, and a power storage facility 111 such as a battery or hydroelectric station. Of course, the power plant 1 can comprise any combination of power production and power storage facilities, and is not limited to the types mentioned here.
Furthermore, since the plant controllers 100, 110 are connected by a communications network and can share information, all data relevant to the operation of the power plant 1 is effectively available to each controller 100, 110, so that the functionality of more than one controller 100, 110 is shown here collectively as a single virtual controller. The combined point of connection for the power plant 1 is not shown in this diagram, but it is to be understood that the facilities 101, 102, 111 of the power plant 1 are connected to the grid in the same way or in a similar way as shown in Fig. 1, namely at a virtual or combined PCC. The plant controller 100, 110 is supplied with various types of information: The grid demand Qref, Uref is supplied by a grid operator (not shown). This is the desired output of the plant 1, i.e. the output that is to be measured at a point of connection to the grid in order for the power plant 1 to fulfil the grid requirements. The momentary measured values Qmeas, Umeas at the point of connection are also provided. The difference Qdiff, Udiff between the desired and actual values is computed in a control unit 13. A data collection unit 16 collects relevant momentary performance data 160 from each of the facilities 101, 102, ..., 111, such as current active power production and terminal voltage, combined with knowledge about the capabilities or capacities of the facilities. A computation module 15 receives the momentary performance data 160 and the difference values Qdiff, Udiff. With this information, the computation module 15 determines an optimal facility contribution plan 151 for the transfer of reactive power between the power facilities 101, 102, ..., 111 in order to maintain the plant's output to the grid at the given operation point as specified by the grid demand. The facility contribution plan 151 is used by a distribution unit 14 to compute references ref_101, ref_102, ..., ref_111 for the individual contribution to reactive power and/or voltage of each of the power facilities 101, 102, ..., 111. In the case of a power production facility such as a wind park 101, a park pilot can distribute or divide the reference ref_101 among the turbines WT of the wind park by computing individual references for the turbines WT, making use of knowledge specific to that wind park 101, for example the placement of each turbine WT in the wind park, the number of active wind turbines, or the number of wind turbines available for reactive power regulation, etc. Similarly, a power storage facility could comprise a plurality of storage devices such as batteries, and a facility controller could convert the input reference ref_111 to individual references for each of the storage devices. By computing the references ref_101, ref_102, ..., ref_111 in consideration of all relevant input information in this manner, the grid requirements can be met at a combined point of connection 300 for the power plant 1, while at the same time an optimal operation of each facility 101, 102, ..., 111 is achieved, in consideration of important factors such as the combined voltage level as well as the voltage level or terminal voltage of individual facilities, or the transmission loss inside the virtual plant, etc. Fig. 3 shows a prior art arrangement of a power network 8 comprising power plants 80, 81 connected to an electricity grid 3. Here, the power plants 80, 81 are connected to the grid 3 at independent points of connection 30, 31.
A grid controller 300 (not necessarily part of the power network 8) monitors the situation on the grid 3 and issues, among others, values for a required grid voltage Uref and/or a required grid reactive power Qref. Each power plant 80, 81 comprises a plant controller 800, 810 that converts the supplied grid demand values Uref, Qref into suitable references 802, 812 that control the output of a power production facility 801, 811. The power plants 80, 81 feed power into the grid 3 according to the extent that they are capable of fulfilling the grid demands Uref, Qref. However, the ability of a power plant 80, 81 to supply or absorb reactive power may be limited, so that such a power plant 80, 81 effectively acting on its own may often fail to fulfil a grid demand Uref, Qref. Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention. For the sake of clarity, it is to be understood that the use of "a" or "an" throughout this application does not exclude a plurality, and "comprising" does not exclude other steps or elements. The mention of a "unit" or a "module" does not preclude the use of more than one unit or module.
Personalized or recommender systems are a particular type of information filtering application. User profiles, representing the information needs and preferences of users, can be inferred from log or clickthrough data, or from the ratings that users provide on information items through their interactions with a system. Such user profiles have been used, for example in iGoogle, to provide personalized recommendations to the users. A user model is a representation of this profile, which can be obtained implicitly through the application of web usage mining techniques. Our work aims to develop Web usage mining tasks to model an intranet or local Web site recommender system. We will focus on the users’ activity on a university Web site, to customize the contents and structure the presentation of a Web site according to the preferences derived from the user’s activity. The customization is based on an individual’s user profile as well as a profile representing the collective interest of the entire user community, in this case all users accessing the Web site. The outcome will be personalized recommendations and presentation of a Web site with respect to the user’s needs.
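As a toy illustration of combining an individual profile with a community-wide profile, one might rank candidate pages by a weighted blend of the two interest distributions. The weighting scheme, data shapes, and page names below are invented for the example and are not taken from the paper.

```python
def recommend(individual: dict, community: dict,
              alpha: float = 0.7, k: int = 3) -> list:
    """Rank pages by a weighted blend of an individual's interest profile and
    the community's interest profile, each given as a page -> normalized
    visit-frequency map mined from usage logs."""
    pages = set(individual) | set(community)
    score = {p: alpha * individual.get(p, 0.0) + (1 - alpha) * community.get(p, 0.0)
             for p in pages}
    return sorted(pages, key=score.get, reverse=True)[:k]

user = {"/courses/ai": 0.6, "/library": 0.3, "/news": 0.1}
everyone = {"/news": 0.5, "/timetable": 0.3, "/library": 0.2}
print(recommend(user, everyone))   # ['/courses/ai', '/library', '/news']
```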
https://www.scienceopen.com/hosted-document?doi=10.14236/ewic/FDIA2009.4
What are some disadvantages of Microsoft OneNote? While Microsoft OneNote has many benefits, there are also some potential disadvantages to consider. Here are a few to keep in mind: - Learning curve: OneNote is a powerful tool with many features, and it can take some time to learn how to use it effectively. Some users may find it overwhelming or confusing at first. - Organization: While OneNote provides many organizational tools, it can be difficult to keep track of notes and information as the amount of data in a notebook grows. Some users may find it challenging to find specific notes or information they need. - Syncing issues: OneNote relies heavily on syncing across devices and platforms, and sometimes this syncing can be slow or incomplete. This can lead to issues where notes and data are not up-to-date on all devices. - Limited formatting options: While OneNote offers some formatting options, they are not as extensive as other tools, such as Microsoft Word. Some users may find the limited formatting options to be frustrating. - Compatibility issues: OneNote is a Microsoft product, and it may not be fully compatible with non-Microsoft systems or software. This can make it difficult to collaborate with users who are not using Microsoft products. Overall, while OneNote is a powerful and versatile tool, it may not be the best choice for everyone. Users should carefully consider their needs and preferences before deciding if OneNote is the right tool for them.
https://miamifrp.com/what-are-some-disadvantages-of-microsoft-onenote/
The number of honey bee colonies fell by nearly 12% last winter, an international study involving the University of Strathclyde indicates. Beekeepers in 29 countries reported that, out of nearly 400,000 colonies they managed, 11.9% had failed to survive the winter. Cases of colonies perishing after problems occurred with their queen were higher than expected. The UK and Spain were worst affected, compared with the previous year, when other areas of Europe were hardest hit. The preliminary findings were made through a study by honey bee research association COLOSS, based in the Institute of Bee Health at the University of Bern. Dr Alison Gray, of Strathclyde's Department of Mathematics & Statistics, a partner in the study, said: "These loss rates vary considerably between countries. In this year's survey the highest losses were found in Ireland and Northern Ireland, followed by Wales and Spain. "The pattern of loss rates differs from last year, when higher mortality and loss rates were found in central Europe and countries to the east. This year the higher loss rates tend to be in the west and northern countries, although Spain had high rates of loss in both years. "All the loss rates quoted here include losses due to unresolvable queen problems after winter, as well as colonies that died over winter for various reasons. Losses due to queen problems were unexpectedly high in some countries and this will be a matter of further investigation. "The crucial role of honey bees in crop pollination means that maintaining colony numbers is of great importance to agriculture, the economy and food security. Honey bees also pollinate many flowering plants and trees important for other wildlife, and so have a vital role in maintaining the natural environment and biodiversity. "Our research with COLOSS studies the levels of colony losses and potential drivers of colony decline, including management practices, pests and diseases and environmental factors." The study found that the spring and early summer months of 2015, from March to July, were cold in Norway, Scotland, Sweden, Denmark and Ireland, with mean temperatures ranging from 12.8°C to 14.4°C. This may have had negative effects on colony development, resulting in both relatively high numbers of dead colonies and unresolvable queen problems after winter.
Hematopoietic stem cells (HSCs) are unique in their capacity to give rise to all mature cells of the immune system. For years, HSC transplantation has been used for treatment of genetic and neoplastic diseases of the hematopoietic and immune systems. The sourcing of HSCs from human umbilical cord blood has salient advantages over isolation from mobilized peripheral blood. However, poor sample yield has prompted development of methodologies to expand HSCs ex vivo. Cytokines, trophic factors, and small molecules have been variously used to promote survival and proliferation of HSCs in culture, whilst strategies to lower the concentration of inhibitors in the culture media have recently been applied to promote HSC expansion. In this paper, we outline strategies to expand HSCs in vitro, and to improve engraftment and reconstitution of human immune systems in immunocompromised mice. To the extent that these “humanized” mice are representative of the endogenous human immune system, they will be invaluable tools for both basic science and translational medicine.

1. Introduction

Hematopoietic stem cells (HSCs) were the first class of stem cells used for cell-based therapy in humans. Specifically, both autologous and allogeneic HSC transplantation (HSCT) have been practiced for decades to treat a variety of hematologic malignancies and congenital and autoimmune disorders [1, 2]. Fewer than 30% of patients requiring allogeneic HSCT have a histocompatible sibling, and it is exceedingly rare for patients to have an identical twin donor. Infectious complications and acute or chronic graft-versus-host disease (GVHD) remain the major obstacles affecting patient outcome after allogeneic HSCT. Strikingly, GVHD occurs in approximately 20% to 50% of patients who receive stem cells from a human-leukocyte-antigen- (HLA-) identical sibling donor. Chances increase to 50–80% for those who receive stem cells from an HLA-mismatched sibling or even from an HLA-identical unrelated donor, while chronic GVHD occurs in less than 50% of long-term survivors. Interestingly though, patients with acute GVHD have lower incidence of leukemia relapse, presumably owing to concurrent graft-versus-leukemia response. This beneficial effect may not be limited to patients with leukemia, because something similar was observed in lymphoma patients that received allogeneic bone marrow transplantation. Most commonly, HSCs are obtained by apheresis of adult peripheral blood after mobilization of bone marrow HSCs by granulocyte-colony stimulating factor (G-CSF) injections. As an alternative, HSCs can be isolated from fresh or banked umbilical cord blood (CB), which is highly enriched in HSCs compared to peripheral blood. Benefits of cord blood for transplantation include availability of banked samples, absence of risk to the donor, and low risk of transmitting infectious diseases. A specific advantage of cord blood over bone marrow-derived HSCs is reduced incidence of graft failure and acute or chronic GVHD, especially when cryopreserved CB is used for transplantation [9–11]. Despite having higher HSC concentration than peripheral blood, CB samples are insufficient to provide enough CD34+ cells for successful transplantations in adults and therefore require “pooling” samples from multiple donors. In addition to poor yield, shortage of HLA-matched cord blood samples has stimulated the development of methodologies to allow ex vivo HSC expansion.
In principle, these methodologies would maintain self-renewal and inhibit differentiation during the course of expansion. The vast majority of strategies to expand HSCs in vitro have focused on regulation of stem cell renewal and survival of HSCs mediated by intrinsic factors (transcription factors and signaling molecules) and environmental cues (cytokines, chemokines, and adhesion molecules). An alternative application for HSCs has been to generate mice bearing human immune systems—so-called “humanized” mice. This experimental model was developed to address difficulties associated with studying human immune-related diseases in mice (this has been reviewed in [13–16]). Although a fully functional human immune system has not yet been achieved in the mouse, several strategies have been implemented with variable success. In this review, we consider various methodologies for maintaining HSCs for the purpose of reconstituting mice with human immune systems.

2. Mouse Models of Hematopoietic Stem Cell Engraftment

The development of chimeric mice bearing human immune system components provides a valuable tool to study human immune responses using small animals. In terms of disease biology, humanized mice can be used to study infection with human-specific pathogens, human autoimmune diseases, and human-specific immune responses in many contexts. These unique models can be created by engraftment of immunodeficient mice with human CD34+ HSCs. A crucial step towards the creation of immunodeficient mice that efficiently accept xenografts was the crossing of nonobese diabetic (NOD) and severe combined immunodeficient (SCID) mouse strains. These NOD-SCID mice display T, B, and NK cell immunodeficiency, in addition to being deficient for macrophages and protein complement. These compound immune deficient mice enable increased chimerism upon HSC transplantation compared to SCID mice. However, these animals have poor human T and B cell maturation, which has limited their use in immunology research. Targeting of cytokine receptors with IL-2Rβ monoclonal antibody prior to transplantation of human HSCs has allowed for even greater engraftment efficiency and human T cell development in the NOD-SCID mouse thymus. Concurrently, new strains of mice deficient for the common cytokine receptor γ-chain (Il2rγ) have been generated. These include NOD/LtSz-SCID (NSG; Il2rγ is completely null), NOD/Shi-SCID Il2rγ−/− (NOG; the Il2rγ chain lacks the intracytoplasmic domain) [19–22], and BALB/c Rag2−/− Il2rγ−/− mice (BRG) [23, 24]. These important immunocompromised mouse strains have become the most common vehicles for reconstitution of the human hematolymphoid system. Engraftment of CD34+ HSCs into these mice leads, under the right conditions, to differentiation and maintenance (for >6 months) of B and T lymphocytes, NK cells, dendritic cells, monocytes, erythrocytes, and platelets [20, 23, 24]. Myriad conditions contribute to the reconstitution success of engrafted immunodeficient mice. For example, the source of the HSCs plays an important role, with CD34+ HSCs derived from fetal liver or CB providing improved immune reconstitution compared to G-CSF-mobilized adult peripheral blood cells. The age of the recipient immune compromised mouse is also critical, with neonatal recipient mice exhibiting enhanced engraftment compared to adults. A third key factor is the genetic background of the recipient mouse strain.
For example, NSG and BRG mice are equivalent in terms of generation of human B cells, dendritic cells, and platelets, whereas NSG mice are superior in supporting human T cell development [25, 26]. This salient difference in T cell development is based on a polymorphism in the gene encoding the signal-regulatory protein alpha (SIRPα) receptor. SIRPα is a receptor expressed mainly in macrophages, granulocytes, and dendritic cells, but its ligand, CD47, is almost ubiquitously expressed. SIRPα binds to CD47 and generates an inhibitory signal to macrophages, which prevents phagocytosis of CD47-expressing cells. Mouse SIRPα interacts weakly with human CD47, with the upshot being phagocytosis and therefore rejection of transplanted human cells. However, NOD mice have a polymorphic allele of SIRPα that binds with high affinity to human CD47, preventing human cells from macrophage-mediated phagocytosis and leading to graft tolerance. Although the presence of human cells can be detected in chimeric mice for 12 months, all hematopoietic subsets begin to decline around 6 months after transplantation [28, 29]. This effect is probably due to the inability of mouse cytokines to react with human receptors, leading to survival signal and trophic factor deprivation in transplanted human cells. One strategy to overcome this is supplementation with human cytokines; the concept is to create a more favorable immunologic environment for human cells within the mouse host. Another approach to transiently increasing hematopoietic cell lineages in humanized mice has been to inject recombinant proteins including interleukin (IL)-15, IL-7, or B-cell activating factor, or to perform hydrodynamic injection of a plasmid DNA mixture including IL-15 + Flt-3L and Flt-3L + granulocyte-macrophage CSF (GM-CSF) + IL-4. Human IL-7 has also been expressed in BRG mice by in vivo lentiviral gene delivery, and this led to stable but supraphysiological levels resulting in increased abundance of T cells. Transgenic mice have also been used to stably increase expression of human cytokines. For example, forced expression of stem cell factor (SCF), GM-CSF, and IL-3 on the NOD-SCID mouse background (NS-SGM3) produced robust human hematopoietic reconstitution in blood, spleen, bone marrow, and liver and significantly increased myeloid cell numbers [35, 36]. Similarly, transgenic NSG mice expressing membrane-bound SCF exhibited a high degree of human CD45+ cell chimerism in irradiated and nonirradiated recipient pups. A more radical strategy has been to engineer a knock-in mouse in which the genes encoding mouse cytokines have been replaced by their human counterparts. Though laborious, this strategy has major advantages, including stable expression of physiological levels of cytokines and localization to the right organ(s). Thus far, three such mice have been reported, including one that expresses human thrombopoietin (TPO), another expressing human CSF-1, and an animal that expresses both human IL-3 and GM-CSF. The TPO knock-in mice demonstrated improved engraftment of human CD34+ hematopoietic and progenitor cells, especially in bone marrow, and long-term maintenance of chimerism for over 6 months. Interestingly though, generation of the myelomonocytic lineage was particularly favored in these mice compared with lymphoid lineages.
Mice expressing human CSF-1 displayed increased frequency and more efficient differentiation of human fetal liver-derived HSCs into monocytes/macrophages in various organs and increased functional properties, such as migration, phagocytosis and activation, and response to LPS. Transplanted IL-3/GM-CSF knock-in mice had no significant improvement in human hematopoietic engraftment, although enhanced reconstitution with human alveolar macrophages was reported. This specific effect on the lung macrophage subset makes IL-3/GM-CSF knock-in mice a unique model to study the involvement of the immune system in human lung pathologies. Yet, iatrogenic events have been reported in some of these mouse strains. For example, the TPO knock-in strain developed thrombocytopenia, despite the fact that human TPO can support murine thrombopoiesis. This effect is likely due to levels of human TPO expressed in this mouse that were ~10-fold lower than the endogenous murine TPO. In knock-in mice expressing IL-3/GM-CSF, nonengrafted mice developed pulmonary alveolar proteinosis caused by the absence of mouse GM-CSF. Figure 1 summarizes various approaches to the design of humanized mice.

3. Human HLA Transgenic Mice

Humanized mice generated by transplantation of human HSCs have demonstrated long-term reconstitution and some degree of maturation of human T cells, evidenced by the presence of CD4/CD8 single-positive T cells in the spleen and peripheral blood [13, 15, 42], CD8+ T cells with effective cytotoxic activity against infection with Epstein-Barr virus (EBV), and development of human B cells that produce antigen-specific IgM upon immunization with exogenous antigens [43, 44]. However, the extent of T and B cell maturation seems incomplete, leading to generation of cells that are not fully functional. For example, CD4+ T cells from humanized NOG mice responded poorly (compared with normal human T cells from healthy donors) to in vitro antigenic stimulation with anti-CD3 and anti-CD28 antibodies, and immunization of humanized mice with exogenous antigens was only able to induce a restricted immunoglobulin (Ig) G response [44–46]. HLA molecules are required for development of human T cells, and interactions between human B and T cells are essential to activate the molecular machinery responsible for B cell antibody class switching. Thus, impaired human B and T cell function in humanized mice has been attributed to the absence of HLA in the mouse thymus. In support of this notion, mice transplanted with human fetal liver and thymus under the kidney capsule and injected with HSCs (the bone marrow, liver, thymus (BLT) mouse model) had significantly improved human T and B cell function [49, 50]. Human HLA-DR (MHC class II equivalent) appears to play a more important role in T and B cell development and in T cell positive selection than HLA-A2 (MHC class I analog) in humanized mice. For example, while HLA-A2 transgenic mice elicit a slight improvement in human T cell reconstitution and function of T and B cells, HLA-DR4 or HLA-DR5 transgenic mice had significantly increased human cell reconstitution and better immune responses, including Ig class switching and elevated human IgG responses [51–54]. Furthermore, transgenic expression of human HLA-A2 in humanized NSG mice resulted in improved HLA-A2-restricted CD8+ T cell responses to both EBV and dengue virus infection [55–57].
It is noteworthy that in EBV-infected humanized HLA-A2 transgenic mice, T cell responses against lytic EBV antigens predominated over latent antigens, similar to what is observed in human EBV carriers.

4. Ex Vivo Expansion of Hematopoietic Stem Cells

It has become increasingly clear that developing better humanized mouse models will rely, at least in part, on increasing the quality of the HSC input material. Over the past few decades, the study of hematopoietic development and dynamic HSC interactions within the niche has shed light on signaling molecules that play roles in HSC self-renewal and lineage commitment. Rooted in this work, select cytokines and growth factors have been used to maintain and expand HSCs in culture, either alone or in combination. Factors that have been used to promote expansion of human HSCs include Flt3 ligand, SCF [60–62], TPO [61, 63], IL-3 [64–66], IL-6 [65, 67], IL-11 [68, 69], and angiopoietin. Although combinations of these cytokines and growth factors have been shown to promote in vitro proliferation of HSCs, the durability of this effect during short-term culture is limited, as is the ability to maintain HSCs in an undifferentiated state. One explanation for this is the high sensitivity of HSCs to their microenvironment. The role of cytokines, including IL-3 and IL-6, in the expansion of HSCs is somewhat controversial. On the one hand, there are reports of stimulatory effects on HSC ex vivo expansion and long-term repopulating capacity [64, 71, 72], whereas others have described an inhibitory effect [73, 74]. One strategy to address this has been to supply these factors by coculturing HSCs with stromal cells, although this technique has met with limited success, prompting investigation into other possible factors. Amongst these factors, pleiotrophin, the Notch receptor ligand Delta-1, angiopoietin-like protein 5 (Angptl5), hedgehog (Hh), p38 mitogen-activated protein kinase (MAPK) inhibition, prostaglandin E2 (PGE2), and StemRegenin (SR1) have been shown to stimulate human HSC proliferation and are further discussed below. The neurite outgrowth factor pleiotrophin has been shown to promote in vitro expansion of both mouse and human HSCs. This effect was observed on mouse bone marrow HSCs, where the protein caused a marked increase in numbers of long-term repopulating HSCs. Furthermore, treatment of human CB CD34+CD38−Lin− cells with pleiotrophin in serum-free media containing TPO, SCF, and Flt3 ligands induced modest ex vivo expansion, but the fraction of CD34+CD38−Lin− cells was significantly higher, indicating a selective effect on differentiation rather than proliferation. Additionally, studies of transplantation using limiting dilution showed increased short- and long-term repopulating capacity of pleiotrophin-treated human CB CD34+CD38−Lin− cells. It has been proposed that the effect of pleiotrophin could be mediated by activation of phosphoinositide 3-kinase (PI3K)/AKT signaling in HSCs. This effect is most likely mediated by activation of Notch signaling, as antagonism of PI3K or Notch signaling pathways inhibits pleiotrophin-mediated expansion of HSCs in culture. In fact, it is now widely appreciated that Notch signaling plays important roles in the regulation of proliferation and cell fate determination of HSCs. HSCs express Notch receptors, which bind to the transmembrane ligands Jagged-1, Jagged-2, and Delta.
Activation of these receptors on murine hematopoietic precursors by the immobilized extracellular domain of Delta1 fused to the Fc domain of human IgG1 [Delta1(ext-IgG)] resulted in marked proliferation of progenitors capable of short-term lymphoid and myeloid repopulation [67, 78]. Similarly, human HSCs cultured in serum-free conditions supplemented with SCF, Flt3-L, TPO, IL-3, and IL-6 were also responsive to activation by Notch ligands, and CD34+ cells underwent ~100-fold expansion in the presence of Delta1(ext-IgG) and exhibited enhanced repopulating ability in an immunodeficient mouse model [79, 80]. However, the effect on Notch signaling appears to be dependent on ligand abundance, since low-density Delta-1 enhances proliferation of CD34+ cells, whereas higher amounts induce apoptosis of CD34+ precursors. One of the most pronounced amplifications of HSCs was achieved by supplementation with angiopoietin-like proteins (Angptls). These proteins were identified in mouse fetal liver CD3+ cells and may play a role in vivo in stimulating mouse fetal liver growth. In particular, Angptl2 and Angptl3 induced up to 30-fold expansion of long-term cultured mouse HSCs as determined by reconstitution analysis. The same group found that another Angptl protein, Angptl5, acted more specifically on human HSCs and promoted ~20-fold net expansion of repopulating human CB HSCs when used in serum-free culture media containing SCF, TPO, fibroblast growth factor-1, and insulin-like growth factor binding protein 2. Although it appears that Angptls activate a distinct signaling pathway from other growth factors, the mechanism of action remains unclear. The hedgehog (Hh) signaling pathway has been implicated in primitive and definitive hematopoiesis. In this regard, Bhardwaj and coworkers reported 60 to 80% increased proliferation of human CD34+CD38−Lin− cells at 7 or 12 days in culture following addition of exogenous Hh in the presence of SCF, G-CSF, Flt3 ligand, IL-3, and IL-6. Under these conditions, the cells retained their capacity to engraft into immunocompromised NOD-SCID mice. The Hh pathway may also play a role in acute regeneration by inducing cell cycling and expansion of HSCs. There is, however, controversial evidence regarding the role of Hh signaling in hematopoiesis. For example, studies targeting gain or loss of function reported no apparent effect on adult hematopoiesis [86, 87]. This discrepancy may be related to differences in experimental systems (e.g., human, mouse, or zebrafish), approaches (e.g., transgenic models, ES cells, or in vitro culture systems), genetic approaches to removing the Hh activator, smoothened (Smo), in Smo conditional knockout mice (i.e., use of different promoters to induce recombination), source of HSCs (e.g., fetal liver versus adult HSCs), and associated changes in developmental schedules. Regarding regulation of HSC self-renewal, p38 MAPK has been identified as a key intrinsic negative factor. Activation of p38 MAPK has been associated with induction of HSC senescence under different physiological and pathological conditions [88, 89]. In line with these observations, selective inhibition of p38 MAPK activity with the synthetic agent SB203580 promoted ex vivo expansion of mouse bone marrow and human CB HSCs [90, 91]. Human umbilical CB CD133+ cells expanded in the presence of the drug by about threefold versus vehicle and displayed better engraftment into NOD-SCID mice following transplantation.
Improved self-renewal of HSCs following p38 MAPK inhibition has generally been attributed to inhibition of glycogen synthase kinase 3beta (GSK3β) and activation of the Wnt signaling pathway, as evidenced by upregulation of the downstream target gene HOXB4.

Recent investigations seeking to identify modulators of HSC proliferation and homeostasis have revealed new targets. For example, prostaglandin E2 (PGE2) was identified in zebrafish by high-throughput screening of bioactive compounds regulating HSC expansion. Receptors for PGE2 were found in mouse and human HSCs, and short-term ex vivo exposure of HSCs to PGE2 enhanced their homing to bone marrow via the chemokine receptor CXCR4 when transplanted into lethally irradiated hosts. These PGE2-treated cells also demonstrated increased proliferation, resulting in a 4-fold increase in long-term repopulating cell and competitive repopulating unit frequency, and enhanced survival associated with increased expression of Survivin. Although its mechanism of action is not known, PGE2 has been shown to interact with the Wnt pathway by stabilizing β-catenin, highlighting the pivotal and beneficial role of Wnt signaling in HSC biology.

Another high-throughput screen, in this case of a drug library, revealed a purine derivative named StemRegenin 1 (SR1) that was able to promote in vitro expansion of mobilized human peripheral blood CD34+ cells in serum-free media containing TPO, IL-6, Flt3 ligand, and SCF. The proliferative effect of SR1 on CD34+ cells did not occur in the absence of cytokines; it was reversible, and SR1 had an anti-proliferative effect at high concentrations, indicating that it enhanced cytokine-mediated signals within a defined dose range. Interestingly, there were species-specific differences in SR1 bioactivity: the compound did not expand murine HSCs but potently affected human, monkey, and dog bone marrow-derived CD34+ cells. Furthermore, umbilical CB-derived CD34+ cells cultured for 3 weeks in the presence of SR1 showed a striking 17-fold expansion that improved early and long-term in vivo repopulation capacity in immunocompromised mice, and these cells retained multilineage potential. The mechanism by which SR1 induces proliferation of HSCs is direct binding to and antagonism of the aryl hydrocarbon receptor.

Perhaps the most important among intrinsic factors are transcription factors, such as the members of the homeobox (HOX) gene family, which have emerged as important regulators of hematopoietic cell proliferation and differentiation. In particular, HOXB4 and HOXA4 are potent ex vivo inducers of HSC expansion, as revealed by overexpression of these transcription factors in murine HSC culture experiments [97–99]. Human HSCs also appear to be responsive to HOX transcription factors. When cultured on stromal cells genetically engineered to secrete HOXB4, human long-term culture-initiating cells and NOD-SCID mouse repopulating cells expanded by more than 20- and 2.5-fold, respectively. Likewise, HOXB4-overexpressing HSCs also displayed increased proliferation in culture. In another report, Aurvray and coworkers showed that, in cocultures of HOXC4-producing stromal cells with human CD34+ HSCs, the HOXC4 homeoprotein expanded immature HSCs by 3 to 6 times in in vitro cloning assays and significantly improved in vivo engraftment in immunocompromised mice.
Comparative transcriptome analyses of CD34+ cells subjected to HOXB4 or HOXC4 revealed that both homeoproteins regulated the same set of genes, indicating similar downstream effectors.

Wnt signaling is another pathway involved in the development and function of HSCs. Forced expression of β-catenin enhanced ex vivo proliferation of murine HSCs by increasing HOXB4 and Notch-1 expression, and Wnt-3a induced self-renewal of mouse HSCs. There is, however, conflicting evidence from experiments based on constitutive activation of canonical Wnt signaling or β-catenin [105, 106]. Furthermore, no impairment of HSC function (e.g., self-renewal or reconstitution) was observed upon inactivation of β- and γ-catenin [107, 108]. In experiments where the Wnt signaling pathway was modified by administration of GSK3β inhibitors or overexpression of Wnt5a, long-term repopulation was increased in mice transplanted with murine or human HSCs, but in vitro proliferation remained unchanged [109, 110]. This controversy might be explained by differences in experimental systems or by functional redundancy.

5. Inhibitory Signals and Control of Hematopoietic Stem Cell Expansion

Most of the strategies designed to expand HSCs in vitro have focused on identifying molecules that promote self-renewal of the stem cell population. Comparatively less attention has been directed toward inhibitory signals generated by the differentiated progeny and accumulated during the course of culture. Many of these factors, including transforming growth factor-β (TGF-β), tumor necrosis factor-alpha [111–113], and chemokines such as CCL2, CCL3, CCL4, and CXCL10 [114–116], have been reported to negatively impact the expansion of human hematopoietic stem and progenitor cells. Most of these inhibitory cytokines and chemokines are produced by monocytes and interact in an antagonistic manner with stimulatory factors of megakaryocytic origin (epidermal growth factor, platelet-derived growth factor subunit B, vascular endothelial growth factor, and serotonin) to modulate progenitor expansion. In addition, highly purified bone marrow-derived CD34+ cells also secrete detectable amounts of growth factors, cytokines, and chemokines, which could affect their proliferation in an autocrine or paracrine manner by exerting either stimulatory (kit ligand, Flt3 ligand, and thrombopoietin) or inhibitory influences (TGF-β1, TGF-β2, and platelet factor 4). Various strategies for maintaining HSCs in culture are outlined in Figure 2.

Recently, Csaszar and colleagues described an integrated computational and experimental strategy to enable reduction of inhibitory signals in HSC culture media. Based on the effect that feedback signaling from differentiated cells has on stem and progenitor cell expansion, the authors developed a fed-batch media dilution system, consisting of an input stream that results in a continuous increase in culture volume and, consequently, dilution of inhibitory signals. When compared to other media change protocols, including a full media change every four days or every day, a half media change twice a day, or continuous perfusion, the fed-batch system achieved the most effective enhancement in stem and progenitor expansion. Specifically, it yielded an 11-fold increase in HSCs from human cord blood after 12 days of culture, and these cells demonstrated self-renewing, multilineage repopulating ability.
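The dilution arithmetic behind the fed-batch idea is simple enough to sketch. The toy model below is our own illustration, not Csaszar and colleagues' published model: it assumes a generic inhibitor secreted at a constant rate by the differentiated progeny, and compares the peak inhibitor concentration reached under continuous media addition with that under a full media change every four days. All rates, volumes, and units are arbitrary illustrative values.

```python
# Toy comparison (illustrative values only): fed-batch dilution of a secreted
# inhibitor versus periodic full media changes at constant volume.

def simulate(days=12.0, dt=0.01, feed_rate=0.5, v0=1.0,
             production=1.0, change_every=None):
    """Return the peak inhibitor concentration over the culture period."""
    steps = round(days / dt)
    volume, amount, peak = v0, 0.0, 0.0
    for i in range(1, steps + 1):
        amount += production * dt          # inhibitor secreted by progeny
        if change_every is None:
            volume += feed_rate * dt       # fed-batch: growing volume dilutes
        elif i % round(change_every / dt) == 0:
            amount = 0.0                   # full media change discards inhibitor
        peak = max(peak, amount / volume)
    return peak

print(f"fed-batch peak inhibitor level:     {simulate():.2f}")          # ~1.71
print(f"4-day media change peak level:      {simulate(change_every=4.0):.2f}")  # ~3.99
```

In this toy setting the continuously growing volume caps the inhibitor level well below what accumulates between batch exchanges, which is the qualitative effect the fed-batch system exploits.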
Several new factors and molecules have been utilized that, when combined with the most commonly used cocktail of cytokines, have been successful to varying degrees at improving in vitro HSC expansion. At present, however, combining these new molecules to optimize culture conditions has not yet been reported; thus, whether they will have additive or even synergistic effects remains to be determined. Considering the potential for in vitro expansion and long-term engraftment in immunocompromised mice, the most effective factors for expansion of human HSCs to date are SR1 and Angptl5. They have shown a similar ability to induce proliferation in culture while preventing differentiation, and to support long-term engraftment in immunocompromised mice.

6. Concluding Remarks

Efficient ex vivo expansion of HSCs remains an elusive goal. The limited capacity of HSCs to self-renew in culture and their propensity to differentiate despite the addition of cytokines and trophic factors represent significant hurdles that limit the expansion and long-term engraftment potential of these cells. Nonetheless, significant advances have been made, for example the identification of new molecules that promote HSC proliferation and maintenance in culture and the development of strategies designed to reduce the levels of inhibitors in the media. With the creation of more sophisticated immunodeficient mice that exhibit improved reconstitution of the human hematolymphoid system, the application of HSCs to study human immune diseases in small animals has moved forward at a rapid pace. It deserves mention, however, that these models have limitations, such as the relatively short-term maintenance of chimerism and the poor reconstitution and function of T, B, and natural killer lymphocytes. The exogenous administration or endogenous expression of human growth factors or cytokines in these mice has improved both maintenance and reconstitution and holds promise for the optimization of humanized mice.

Acknowledgment

This work was made possible by a grant from the California Institute for Regenerative Medicine to T. Town (RM1-01735).
https://www.hindawi.com/journals/bmri/2013/740892/
On UKHRB we’ve considered a number of the potential human rights implications of the Covid-19 pandemic and the measures put in place to combat it (Alethea Redfern’s round-up is the best place to start; there have been a number of posts since, and there will be a podcast on the subject next week on Law Pod UK). It was only a matter of time before some of these issues started to come before the European Court of Human Rights and, on Wednesday, a case involving the UK Government concerning the impact of Covid-19 on conditions of detention in prison was communicated: Hafeez v the United Kingdom (application no. 14198/20). Communication of a case takes place where an issue is considered to require further examination and the respondent state is invited to submit written observations on the admissibility and merits of the case. It is also an indication that the Court does not consider the case, on its face, inadmissible.

The applicant in Hafeez is a sixty-year-old man with a number of health conditions, including diabetes and asthma. He was arrested pursuant to a request by the US Government for his extradition on drugs charges. He challenges the decision to extradite him, arguing that his pre-conviction and post-conviction detention conditions in the US would be inhuman and degrading, and that there is a real risk that he would be sentenced to life imprisonment without the possibility of parole. What makes this case of particular interest is that, to assist in its decision on the case, the Court has asked the following question:

Having particular regard to the ongoing Covid-19 pandemic, if the applicant were to be extradited would there be a real risk of a breach of Article 3 of the Convention on account of the conditions of detention he would face on arrival?

Considerations for the Court

As is well known, in the context of deportation or extradition, issues will arise when an individual is at a real risk of suffering treatment in violation of article 3 (Soering v the United Kingdom, 7 July 1989, Series A no. 161). In such circumstances article 3 may impose an obligation not to expel the person in question to the receiving country.

The applicability of article 3 to detention conditions was not always a prominent feature of ECHR jurisprudence. However, from the early 2000s, the Court began to find that prison authorities were under a positive obligation to provide appropriate conditions of detention. In Kudla v Poland (no. 30210/96, ECHR 2000-XI), the Court indicated a general expectation that a detainee is held in conditions which are compatible with respect for his human dignity, that the manner and method of the execution of the measure do not subject him to distress or hardship of an intensity exceeding the unavoidable level of suffering inherent in detention and that, given the practical demands of imprisonment, his health and well-being are adequately secured by, among other things, providing him with the requisite medical assistance.

These general principles have since guided the Court’s jurisprudence on conditions of detention, including in the context of extradition. For example, in Babar Ahmad and Others v United Kingdom (no. 24027/07, ECHR 2012), the Court examined the detention regime of a maximum security prison in the US, finding that the conditions of detention, including the possibility of solitary confinement and a lack of opportunity to exercise, did not violate article 3.
Given that the applicant in Hafeez suffers from asthma and diabetes, the obligation on the prison authorities to provide the requisite medical assistance will be particularly important. While article 3 does not guarantee a minimum level of medical treatment (N v United Kingdom [GC], no. 26565/05, ECHR 2008), the ability of the US authorities to cater for his health difficulties in light of the Covid-19 pandemic must be a relevant consideration. An indication of the Court’s likely approach can be found in cases such as Catalin Eugen Micu v Romania (no. 55104/13, ECHR 2016), where there was discussion of transmissible diseases as a public health concern in prisons, and Khokhlich v Ukraine (no. 41707/98, ECHR 29 April 2003 (available in French)), where the Court condemned the applicant’s detention in a cell with ten others while he had hepatitis B:

la Cour ne peut que déplorer le fait qu’une personne atteinte d’une maladie grave et extrêmement infectieuse a été détenue dans une cellule de 24 m2 en compagnie de dix autres condamnés.

[That is, the Court could only deplore the fact that a person suffering from a serious and extremely infectious disease was detained in a 24 m2 cell together with ten other convicts.]

The Court will also take note of the standards of the European Committee for the Prevention of Torture (the CPT), a monitoring body whose work to prevent ill-treatment of persons deprived of their liberty has contributed significantly to improved detention conditions across Europe. The CPT has recently issued a statement of principles relating to the treatment of persons deprived of their liberty during the Covid-19 pandemic and observed that:

special attention will be required to the specific needs of detained persons with particular regard to vulnerable groups and/or at-risk groups, such as older persons and persons with pre-existing medical conditions. This includes, inter alia, screening for COVID-19 and pathways to intensive care as required.

The views of this non-judicial body are, of course, not determinative, but the Court has long relied on the CPT’s standards in its article 3 jurisprudence and, at the very least, the statement suggests the type of measures that may be necessary to prevent detainees from suffering inhuman or degrading treatment during the current pandemic.

The Court’s decision in Hafeez will have implications beyond extradition cases: the assessment of whether the minimum level of severity has been met for the purposes of article 3 is the same regardless of whether the context is domestic or extra-territorial (Babar Ahmad §172). Thus, what the Court says about the impact of Covid-19 on detention conditions will likely have ramifications for the prison systems in all 47 member states of the Council of Europe. All eyes will be on the Strasbourg court as it begins to grapple with the consequences of the current pandemic.
https://ukhumanrightsblog.com/2020/04/18/european-court-of-human-rights-to-consider-impact-of-covid-19/
Publisher: Cambridge, UK; New York: Cambridge University Press, 2016. 668 p.
Reviewer: John T. Parry | March 2019

David Sadoff’s new book “aims to present a novel and robust framework for the operational and legal analysis of recovering fugitives abroad” (4). He easily achieves that goal. Bringing International Fugitives to Justice is the most significant book on extradition since Cherif Bassiouni published International Extradition and World Public Order in 1974 (the book that eventually became the magisterial International Extradition: United States Law and Practice). Sadoff’s book is welcome, not just for its general and significant contribution to scholarship on extradition, but also because of its timely intervention into critical debates about responses to international and transnational crime and its relevance to pressing concerns about the impacts of globalization on criminal networks and international cooperation.

The Executive Director of the non-partisan Center for Ethics and the Rule of Law at the University of Pennsylvania Law School, Sadoff “adopts an agnostic posture” (6) about the policy issues raised by the choice between extradition and the various alternatives to it. His focus, instead, is on the position of states that seek the return of fugitives and the international standards and shared practices that govern those efforts. Notably, Sadoff is not concerned with differences among the internal law of various states with respect to the return of fugitives—a key distinction between his book and Bassiouni’s, which has extensive material on internal U.S. law. His emphasis on extradition as an international or transnational process means that he avoids getting bogged down in the political and legal issues that plague extradition litigation within some countries, and he is able to think in a more structural way about the issues that surround international fugitives. Sadoff writes clearly and authoritatively, drawing on deep research and experience.

The first chapter of Bringing International Fugitives to Justice, appropriately, seeks to define critical terms. Sadoff brings precision to traditional but ambiguous language in the field—such as the word “extradition” itself (43-49). He also introduces and defines new terms such as “international fugitive from justice” (30-34), and “host state” and “pursuing state” instead of requested and requesting state (34-37). This careful chapter provides the foundation for a uniformity of tone, thoroughness of coverage, and coherence of argument that is impressive in a book of this scale and scope. By contrast, this review can only provide a rough overview of Sadoff’s analysis and achievement.

The second chapter of Bringing International Fugitives to Justice—on subject matter jurisdiction—is simultaneously introductory and substantive. Sadoff provides a valuable overview of extraterritorial jurisdiction, especially universal jurisdiction and its complications (78-95). He also covers the conflicts that arise when countries have concurrent criminal jurisdiction over the actions of a fugitive, and he includes an extremely useful discussion of passive personality jurisdiction, including its embrace by the United States for terrorism-related crimes (98-99).
The remainder of Bringing International Fugitives to Justice is devoted to four broad topics: extradition itself, impediments to extradition, “fallback alternatives to extradition,” and “full-scale alternatives to extradition.”

On the first topic, Sadoff provides a deep discussion of extradition, first as a concept (its history, purposes, and nature) and then as a legal process and a topic of bilateral and multilateral agreements among states. Sadoff ends this part of the book with an in-depth exploration of impediments to extradition, which includes such familiar and important topics as the requirement of dual criminality, statutes of limitation, specialty, double jeopardy, and the reluctance of many host states to surrender their own citizens. Sadoff also takes care in this section to note the role that political tensions and agendas can play in extradition decisions. The final pages of the extradition section revolve around the relationship between human rights and extradition (291-319). Traditionally, human rights concerns played little if any express role in extradition proceedings. Indeed, the United States continues, notoriously, to espouse a “rule of non-inquiry” that prevents domestic courts from asking about the legal processes, prison conditions, or other treatment that a fugitive will face upon extradition. As Sadoff details, however, many other countries have adopted a more progressive approach. European courts, for example, will ask whether it is likely that persons facing extradition outside of Europe will be subjected to cruel, inhuman, or degrading treatment, or whether they will be deprived of fundamental trial rights by the pursuing state. Sadoff covers all of these topics with thoughtfulness, and he also touches on several other aspects of the extradition-human rights relationship.

The second core theme of Bringing International Fugitives to Justice is “remedial or collateral means to secure extradition” (325). Sadoff explains that this relatively short section of the book addresses the strategies a state can pursue to secure the extradition of a fugitive when “the host state has denied a request for extradition, it has signaled a reluctance to grant extradition, or an apparent impediment to extradition already exists” (327). Sadoff covers appeal of an extradition denial, modification of the extradition request, bilateral negotiations between the pursuing state and the host state, and seeking the intervention or assistance of a third state (327-44).

For the discussion of fallback alternatives to extradition, Sadoff focuses on “second-order preferences”—by which he means efforts by a pursuing state to ensure that the fugitive is brought to justice somewhere at some time, even if not by immediate extradition to the pursuing state. Thus, he catalogs efforts that pursuing states can take to increase the likelihood of the fugitive’s eventual capture, such as sealed indictments, revoking passports, or freezing assets (348-53). Pursuing states can also seek assistance from other states in locating or detaining the fugitive (353-70). And, finally, the pursuing state can press the host state to prosecute the fugitive itself (and here Sadoff has a very helpful discussion of aut dedere aut judicare), promote the transfer of the fugitive to a third state for prosecution, or support prosecution before an international criminal court (371-87).
The last major section of Bringing International Fugitives to Justice is a careful examination of the alternatives to extradition: immigration law, informal law-enforcement cooperation, and unilateral measures.

Immigration law is most useful, of course, when fugitives reside in or are attempting to enter a country of which they are not a citizen. Although a state may have a relatively unconstrained ability to deny entry to an alien, its removal processes likely will have more cumbersome procedural steps and requirements. As Sadoff notes, a state’s immigration law may specify the country to which the alien is to be returned, which might not be the country that is requesting extradition (401-02). In addition, human rights concerns play a similar role in immigration and extradition proceedings (413-41). In the United States, for example, it is not at all clear that immigration removal in general would provide an easier path than extradition (which has a famously truncated process). That said, immigration proved to be a successful alternative to extradition in the cases of Desmond Mackin and Joseph Doherty, IRA members who managed to elude extradition from the United States but were ultimately deported to the United Kingdom in 1982 and 1992, respectively.

The other two alternatives to extradition—informal cooperation and unilateral measures—are more controversial because they typically involve deception or force. Cooperation, for example, often includes misleading a third party or even other parts of the host country’s government. Thus, Sadoff describes the case of Abdullah Öcalan, who was a fugitive from terrorism charges brought by Turkey arising out of his leadership of the PKK (477-80). Öcalan came to Kenya in the company of Greek diplomats. Kenyan officials objected to Öcalan’s presence in their country but informed the Greek ambassador that Öcalan would be allowed to leave the country under escort. Instead of escorting Öcalan to a civilian flight, however, Kenyan officials delivered him to Turkish law enforcement, who arrested him and flew him back to Turkey. Despite the deception practiced on the Greek government, Turkish courts and the European Court of Human Rights upheld Turkey’s authority to put Öcalan on trial.

Unilateral measures, of course, are even more controversial because they take place outside of the “normal” bilateral or multilateral structures for the return of fugitives. Sadoff points out, however, that this category includes negotiating directly with fugitives for their return to the pursuing country (485-88). The other types of unilateral action—often known by the more familiar term “rendition”—are “lure and capture operations,” “seizure and delivery operations,” and “interception operations” (482). Lure and capture involves the use of false pretenses designed to convince the fugitive to leave the host state for another country (not necessarily the pursuing country) where he or she will be arrested. This form of unilateral activity, while deceptive, is generally thought not to violate international law, although some courts and the International Penal Law Association take the position that this kind of deception is illegal (501-08).
Sadoff describes the typical seizure operation as “a clandestine operation in which the fugitive is physically taken off the street, from a railroad platform, or from a home, office, or hotel, whether in broad daylight or under cover of darkness, and escorted by foot across a national border, or brought to an automobile, sea-going vessel, or aircraft waiting to whisk him away from host State territory, sometimes following initial questioning in a safe house or other secure location” (494). He cites the seizures of Adolf Eichmann by Israeli agents in Argentina and of Humberto Alvarez-Machain by U.S. agents in Mexico as the best-known examples of seizure and delivery (494). Sadoff provides an extended and balanced assessment of the legal issues raised by these operations, although in the end he expresses appropriately grave doubts about their legality (508-34, 541-50).

An interception operation takes place when the military forces of one country force an airplane or other form of transportation to deviate from its intended route and instead proceed to a third-party state or to the pursuing state itself (498). A famous example of this tactic took place when U.S. military aircraft intercepted an Egyptian government aircraft over international waters and forced it to land at a NATO airbase in Sicily. The plane was carrying four members of the Palestine Liberation Front who had hijacked the cruise ship Achille Lauro. These individuals ultimately remained in Italian custody and were convicted by an Italian court (499-500). Because these operations usually—but not always—take place outside the territory of any state, they do not raise as many international legal issues as seizure operations, but Sadoff once again provides a fair assessment of the issues (535-50).

Bringing International Fugitives to Justice ends its discussion of the alternatives to extradition by considering their consequences in subsequent judicial proceedings as well as more broadly. To take one example, the traditional view is that a court with criminal jurisdiction over the fugitive will not “inquire into the alleged irregularity by which a fugitive was delivered to the pursuing state” (555). Sadoff examines this rule and its justification, and he also tracks the challenges to it that have begun to gain traction in several countries.

Sadoff concludes Bringing International Fugitives to Justice with a series of “general observations” about the pursuit of international fugitives (593-96). He also puts forward a series of recommendations designed to enhance the use of the legal processes associated with extradition and diminish the attractiveness of alternative methods, particularly those that essentially take the form of self-help (596-604). The reader can easily add the conclusion that Bringing International Fugitives to Justice will be a critical reference and guide for anyone interested in these issues.

John T. Parry, Associate Dean of Faculty & Edward Brunet Professor of Law, Lewis & Clark Law School.
https://clcjbooks.rutgers.edu/books/bringing-international-fugitives-to-justice-extradition-and-its-alternatives/
Surfactants are versatile chemicals which, at a sufficiently high concentration, will arrange themselves into organized molecular assemblies known as micelles; this paper examines a method for determining this concentration.

Example: determine the pH of 0.10 M HCN(aq). For HCN, Ka = 4.9 × 10⁻¹⁰. (1) Identify all major species in solution: this is an aqueous solution of a weak acid, so the major species are HCN and H2O. (2) Identify all potential H+ transfer reactions that could contribute to the [H+].

An aqueous solution is a solution in which the solvent is water. It is usually shown in chemical equations by appending (aq) to the relevant chemical formula. For example, a solution of table salt, or sodium chloride (NaCl), in water would be represented as Na+(aq) + Cl−(aq).

To calculate concentration in ppm, first determine the mass of solute (in grams) and the mass of the total solution (in grams). Next, divide the mass of solute by the mass of solution, then multiply by 1,000,000.

Department of Chemistry, CHEM 230: when the reaction between Fe3+ and SCN− (thiocyanate) ions in an aqueous solution comes to equilibrium, the solution contains reactants and the product, FeSCN2+. The chemical equation for this reaction is Fe3+(aq) + SCN−(aq) ⇌ FeSCN2+(aq). The absorbance can be used to determine the equilibrium concentration of FeSCN2+, knowing the…

The concentration of ions in solution depends on the mole ratio between the dissolved substance and the cations and anions it forms in solution. So, if you have a compound that dissociates into cations and anions, the minimum concentration of each of those two products will be equal to the concentration of the original compound. Here's how that works: NaCl(aq) → Na+(aq) + Cl−(aq).

To calculate the pH of an aqueous solution you need to know the concentration of the hydronium ion in moles per liter. The pH is then calculated using the expression pH = −log[H3O+].

Determining the Mass Percent Composition in an Aqueous Solution. JoVE, Cambridge, MA (2018). While percent by mass is used to determine solution concentration, percent by mole is typically used to calculate the percent of an element or group in a molecule; the video covers its use in the laboratory and how to determine it for an aqueous solution.

A Simple Spectrophotometric Method for the Determination of Iron(II) in Aqueous Solutions. M. Jamaluddin Ahmed and Uttam Kumer Roy, Laboratory of Analytical Chemistry, Department of Chemistry. Glassware was cleaned with HNO3 and rinsed several times with high-purity deionized water before stock solutions were prepared.

Aqueous Equilibria, Sample Exercise 17.2: calculating ion concentrations when a common ion is involved. Practice exercise: calculate the formate ion concentration and pH of a solution that is 0.050 M in formic acid (HCOOH, Ka = 1.8 × 10⁻⁴) and 0.10 M in HNO3.

If 800 mL of an aqueous solution of potassium hydroxide with a concentration of 0.25 mol/L is partially neutralized by 200 mL of an aqueous solution of nitric acid with a concentration of 0.50 mol/L, determine the pH of the final solution (KOH + HNO3 → KNO3 + H2O).

The objective of the experiment was to determine the copper concentration in an aqueous solution using a redox titration called iodometry. A standard copper solution was made using 1.562 g of pure copper sulfate pentahydrate.
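To make two of the worked examples above concrete, here is a short sketch of the standard calculations. The numbers are the ones quoted in the snippets; the simplifying assumptions (the weak-acid approximation x << C0, and complete reaction of the strong acid with the strong base) are noted in the comments.

```python
import math

# 1) pH of 0.10 M HCN: for a weak acid HA, [H3O+] ~= sqrt(Ka * C0) when x << C0.
Ka, C0 = 4.9e-10, 0.10
h3o = math.sqrt(Ka * C0)
print(f"HCN: [H3O+] = {h3o:.1e} M, pH = {-math.log10(h3o):.2f}")   # pH ~ 5.15

# 2) 800 mL of 0.25 mol/L KOH partially neutralized by 200 mL of 0.50 mol/L HNO3,
#    assuming the strong acid consumes OH- completely and volumes are additive.
mol_oh = 0.800 * 0.25                            # 0.20 mol OH-
mol_h = 0.200 * 0.50                             # 0.10 mol H3O+
excess_oh = (mol_oh - mol_h) / (0.800 + 0.200)   # leftover OH- in 1.00 L
print(f"KOH/HNO3 mixture: [OH-] = {excess_oh:.2f} M, "
      f"pH = {14 + math.log10(excess_oh):.1f}")  # [OH-] = 0.10 M, pH = 13.0
```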
To calculate the molarity of a solution, you need to know the number of moles of solute and the total volume of the solution. To calculate molarity: what is the concentration (M) of an aqueous methanol solution produced when 0.200 L of a 2.00 M solution was diluted to 0.800 L?

This concentration unit is similar to ppm or ppb except it focuses on the solute as a percent (by mass) of the total solution. It is appropriate for relatively large solute concentrations.

To determine the distribution coefficient (the equilibrium concentration ratio) of iodine between the immiscible solvents water and cyclohexane (C6H12): this is a study of equilibrium. The distribution coefficient will be derived from data obtained by performing a chemical analysis for iodine in…

An aqueous solution of sodium hypochlorite (NaOCl) is a clear, slightly yellow liquid. Determine the concentration of the complex formed; this can then be used to calculate the initial concentration of hypochlorite. Procedure: safety precautions.

Table of values: the data in the table are for the equilibrium between the aqueous gas and the free gas, KH = p_i/[gas(aq)], used to calculate the concentration of the molecule in solution.

Calculate the hydronium ion concentration in an aqueous solution that contains 2.50 × 10⁻⁶ M hydroxide ion: 4.00 × 10⁻⁹ M. Calculate the hydroxide ion concentration in an aqueous solution that contains 3.50 × 10⁻⁴ M hydronium ion.

Determine the concentration of an aqueous solution that has an osmotic pressure of 5.6 atm at 37 °C if (a) the solute is glucose, and (b) the solute is sodium chloride.

An aqueous solution consists of at least two components, the solvent (water) and the solute (the stuff dissolved in the water). Usually one wants to keep track of the amount of the solute dissolved in the solution; we call this the concentration. First, one could do this by keeping track of the concentration directly. Second, you should be able to calculate the amount of solute in (or needed to make) a certain volume of solution. Third, you might need to calculate the volume of a particular solution sample. Fourth, you might need to calculate the concentration of a solution made by the dilution of another solution.

With the final concentration of the product, you can determine the change in product concentration and, therefore, the changes in the reactant concentrations. The reaction table is shown below. In this experiment, 0.2 M HNO3 serves as the solvent.

A simple titration (oxidimetry) method using a methylene blue-platinum colloid reagent is effective in determining the concentration of hydrogen gas in an aqueous solution. The method performs as effectively as the more complex and expensive electrochemical method. Molecular hydrogen is useful for…

The relative strengths of acids may be determined by measuring their equilibrium constants in aqueous solutions. In solutions of the same concentration, stronger acids ionize to a greater extent, and so yield higher concentrations of hydronium ions than do weaker acids.
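Likewise, the dilution and autoionization exercises quoted above reduce to one-line calculations. The sketch below uses the values given in the snippets together with the standard relations M1·V1 = M2·V2 and Kw = [H3O+][OH−] = 1.0 × 10⁻¹⁴ at 25 °C.

```python
# Dilution conserves moles of solute: M1*V1 = M2*V2.
M1, V1, V2 = 2.00, 0.200, 0.800              # mol/L, L, L
print(f"diluted methanol: {M1 * V1 / V2:.3f} M")   # 0.500 M

# Water autoionization at 25 C: [H3O+][OH-] = Kw.
Kw = 1.0e-14
print(f"[H3O+] = {Kw / 2.50e-6:.2e} M")      # 4.00e-09 M, the answer quoted above
print(f"[OH-]  = {Kw / 3.50e-4:.2e} M")      # 2.86e-11 M for the second exercise
```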
http://rhessayyghi.paperfolder.info/determining-the-concentration-of-an-aqueous.html
An increasing number of K-12 instructors looking for innovative ways to improve their students’ critical thinking skills are finding instructional, realistic simulations to be a valuable tool for enhancing learning. Apart from offering the potential to engage students in deep and immersive learning, simulations in K-12 education also foster genuine understanding in students, as opposed to superficial learning that requires only memorization. In this blog, we explore instructional simulation and the various ways in which it can enhance K-12 learning.

What is a Realistic Simulation?

Simulations in K-12 learning are essentially specific instructional scenarios in which a student is placed in an environment defined by the teacher that represents a distinct reality within which students can interact. What is important to note here is that in the world of realistic simulation, the educator controls various parameters and uses them to achieve the expected instructional results. Students then experience the reality of different scenarios and gather meaning from them. Simply put, a realistic simulation is a form of experiential learning that fits well with the principles of constructive, student-centered learning and teaching. In general, instructional simulations incorporate some or all of the below-mentioned characteristics:
- Scenarios: Realistic simulations present a particular problem to solve or a situation to react to in a specific context. This problem or situation could include a certain time frame and/or a set of resources/tools.
- Role-playing: Simulations often place learners in a particular role within the scenario. A few also require students to collaborate with learners in other roles working through the same problem, albeit from different perspectives.
- Environment: Instructional simulations in K-12 learning replicate an authentic situation or location in some way, such as a chemistry lab or a hospital room, which can be built either in physical or digital spaces.
- Open-ended: Realistic simulations often require learners to make various decisions, where each decision affects the progress they make in that scenario and determines what decisions they’ll make next.
- Reflection: Simulations generally rely on structured reflection, through journaling, discussions, or similar assignments, to evaluate the decisions that were made by connecting them to the outcomes they led to, reinforcing what students learned from the experience.

Various Ways Realistic Simulation Enhances K-12 Learning

There are multiple reasons why simulation learning is gaining momentum in K-12 learning. Apart from reducing overall education costs, simulations can also offer an engaging learning experience. They help introduce an interactive component to K-12 classes, designed not just to develop students’ skills but also to teach them to apply those skills successfully in a range of scenarios. Let’s explore 5 ways in which using realistic simulations can help enhance student learning:

1. Promotes the use of critical and evaluative thinking in students

Being open-ended in nature, simulations promote the use of critical and evaluative thinking among students. In addition, they encourage K-12 students to contemplate the implications of different scenarios so that the situation feels real, leading to more engaging interaction by learners.
Using realistic simulations also gives students concrete roadmaps of what it means to think and act like a scientist and to do scientific work. Simulations allow students to change various parameter values and see what happens (a short worked example of this idea appears at the end of this post). This helps learners develop a feel for why the variables are important, along with teaching them the significance of magnitude changes in parameters.

2. Facilitates prompt feedback

Realistic simulation-based learning allows students to receive instant feedback on their learning and its effectiveness, as well as on their way of using the equipment, system, or rules. In addition, educators can offer constructive feedback to students right away, which allows them to improve their current skills, try new skills or methods as alternatives, or improve old ones. Since realistic simulations by their very nature cannot be passive learning, they ensure that students are active participants, anticipating outcomes and formulating new questions to ask.

3. Leads to experiential practice

Leveraging simulations in K-12 learning promotes conceptual understanding through experiential practice. They help students easily understand the nuances of a concept or a lesson. In general, students find them much more engaging than other activities, simply because the student experiences the online learning activities first-hand instead of just hearing about them or watching them.

4. Enables better knowledge retention

A significant challenge faced by educators while designing any course structure is ensuring the retention of the knowledge imparted. Students are more likely to retain knowledge if they get an opportunity to implement the skills they have learned. Simulation-based learning programs enable K-12 students to understand the actions to be taken in a given situation, which helps them retain this knowledge more effectively. A good simulation often includes a strong reflection summary that helps students further reflect on how and why they behaved as they did during the simulation.

5. Fosters development of skills

Simulations typically include a range of activities that allow students to practice structured learning, communicating, and collaborating with their peers. They also get to replicate what is often required in an actual setting, such as discussing, presenting, negotiating, and listening. This helps develop their communication and problem-solving skills. Another benefit of realistic simulations is that they allow for repetition, which means that students can work through the scenarios many times to explore how different decisions impact the final outcome.

To Conclude

Realistic simulation-based learning is an innovative strategy that K-12 educators can use not just to teach course-related concepts but also to offer students multiple opportunities to apply newly acquired skills, knowledge, and ideas in a well-designed practice setting that mirrors the real world. As more and more K-12 institutions and courses develop hybrid/virtual learning options, realistic simulations may be an excellent way for students to thoroughly practice what they’re learning and be evaluated remotely.
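As promised under point 1 above, here is a deliberately simple, hypothetical example of the kind of model a classroom simulation might wrap in a friendly interface: students vary one parameter (the launch angle of a projectile) and watch how the outcome changes. The scenario, names, and values are invented for illustration.

```python
import math

def projectile_range(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Range of an ideal projectile on level ground (no air resistance)."""
    angle = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * angle) / g

# Sweep the launch angle while holding speed fixed, as a student might.
for angle in (15, 30, 45, 60, 75):
    print(f"angle {angle:2d} deg -> range {projectile_range(20.0, angle):5.1f} m")
# The output peaks at 45 degrees, inviting students to ask why that is.
```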
https://www.hurix.com/how-does-realistic-simulation-enhance-k-12-learning/
Before understanding why an anti-inflammatory diet can be helpful and is one of the hottest diets right now, we must first understand what inflammation is. When you hear the word “inflammation,” you may immediately think of the swelling or redness that occurs when you stub your toe. These are two outward signs of inflammation, but that’s not all. Inflammation occurs naturally as part of the body’s immune response. When your body fights an infection or injury, it sends inflammatory cells to the rescue. This results in the classic signs: swelling, redness and sometimes pain. It is completely normal and natural, as long as the body is in control, of course.

The story changes when the inflammation persists and never goes away completely. This chronic inflammation means your body is always on high alert, and it can trigger major health issues, including heart disease, diabetes, Alzheimer’s disease, and cancer. Fortunately, you have some control over your inflammation levels. Factors such as smoking, being overweight or obese, and excessive alcohol consumption can increase your risk of inflammation. Diet also plays a role, and some experts say adjusting the foods and drinks you consume might be a better way to reduce inflammation levels than relying on medication. Taking chronic pain medication only when needed is probably also a good idea, as many medications have unpleasant side effects, such as mental fog, drowsiness, and memory loss.

An overview of how an anti-inflammatory diet works

There is no official diet plan outlining exactly what to eat, how much, and when. Instead, the anti-inflammatory diet is about filling your meals with foods that have been shown to fight inflammation and, just as importantly, eliminating foods that have been shown to contribute to it. An anti-inflammatory diet is an eating plan that aims to reduce or minimize low-grade inflammation in the body. Ideally, you should eat eight to nine servings of fruits and vegetables a day, limit your intake of red meat and dairy products, prefer complex carbohydrates to simple carbohydrates, and forgo processed foods.

What is the difference between good and bad carbohydrates?

It is better to choose foods rich in omega-3 fatty acids, such as anchovies, salmon, halibut and mussels, rather than omega-6 fatty acids, which are found in corn oil, vegetable oil, mayonnaise, salad dressings and many processed foods. Eating this way is a good idea for everyone, because many foods with the potential to lead to inflammation aren’t healthy anyway.

What the Research Says About Reducing Inflammation in Diet

A great deal of research shows the negative effects of inflammation; in fact, chronic inflammatory diseases are the most important cause of death in the world. They are associated with health problems such as diabetes, Alzheimer’s disease and obesity. Inflammation has also been linked to an increased risk of colorectal cancer, with people who eat pro-inflammatory foods (such as refined carbohydrates and red meat) having twice the risk of developing this cancer, according to a June 2019 study published in Nutrients.
What’s more, a pro-inflammatory diet appears to increase the overall risk of death by 23%, according to a meta-analysis published in June 2019 in Clinical Nutrition.

Several other studies have looked at the effect of a diet high in anti-inflammatory foods on certain health conditions. For example, a November 2017 article in Frontiers in Nutrition shows that choosing anti-inflammatory foods can help people with rheumatoid arthritis (RA). In particular, the authors write that reducing inflammation in the diet, for example by following a vegan or vegetarian diet, can help delay disease progression, reduce joint damage and potentially reduce dependence on RA drugs when used as complementary therapy.

Another small, prospective study, published in May 2019 in Integrative Cancer Therapies, found that when people with familial adenomatous polyposis (an inherited condition that predisposes to cancer of the colon and rectum, called colorectal cancer) followed a low-inflammatory diet, they reported having fewer gastrointestinal problems and better overall physical condition.

A prospective cohort study of more than 68,000 Swedish adults, published in the Journal of Internal Medicine in September 2018, found that following an anti-inflammatory diet was linked to a 13% lower risk of death from cancer. The study authors also observed that smokers following an anti-inflammatory diet had a 31% lower risk of dying from any cause, a 36% lower risk of dying from cardiovascular disease and a 22% lower risk of dying from cancer. Smoking is a habit associated with a higher risk of health problems, and following such a diet will not necessarily cure you of these problems if you continue to smoke. Yet research suggests it may help reduce the impact of the disease, delay its progression, reduce the amount of medication needed, and reduce joint damage.

Other studies have shown that anti-inflammatory foods can help in the following ways:
- Recovery during athletic training
- Management of pain associated with aging
- Heart protection
- Improved quality of life for people with multiple sclerosis

8 Anti-Inflammatory Foods to Eat

A List of Foods to Eat and Avoid on an Anti-Inflammation Diet

Following an anti-inflammatory diet means filling up on foods that research has shown can help reduce inflammation and reducing your consumption of foods that have the opposite effect. One of the benefits of this diet is that it provides lots of food options and a lot of leeway, allowing you to choose the foods you like best. If you need a little more structure, consider adopting the Mediterranean diet. There is a lot of overlap with the anti-inflammatory diet, as both emphasize the consumption of fruits, vegetables and whole grains.

Anti-inflammatory foods to eat
- Fresh fruits, including grapefruit, grapes, blueberries, bananas, apples, mangoes, peaches, tomatoes and pomegranates
- Dried fruits, including plums (prunes)
- Vegetables, especially broccoli, Brussels sprouts, cauliflower, and bok choy
- Vegetable proteins, such as chickpeas, seitan and lentils
- Fatty fish, such as salmon, sardines, albacore tuna, herring, lake trout and mackerel
- Whole grains, including rolled oats, brown rice, barley, and wholemeal bread
- Leafy green vegetables, including kale, spinach, and romaine lettuce
- Ginger
- Nuts, especially walnuts and almonds
- Seeds, such as chia seeds and flax seeds
- Foods rich in healthy fats, such as avocado and olive oil
- Coffee
- Green tea
- Dark chocolate (in moderation)
- Red wine (in moderation)

Foods to Eat Sparingly or Avoid to Prevent Inflammation
- Refined carbohydrates, such as white bread, pastries and sweets
- Foods and drinks high in sugar, including sodas and other sugary drinks
- Red meat
- Dairy products
- Processed meat, such as hot dogs and sausages
- Fried foods

What are the possible health benefits of an anti-inflammatory diet?

Following an anti-inflammatory diet has been shown to help people with:
- autoimmune disorders, including RA and MS
- heart disease
- cancer, including breast cancer and colorectal cancer
- Alzheimer’s disease
- diabetes (22)
- pulmonary disease
- epilepsy (23)

Are there any downsides to an anti-inflammatory diet?

There are no major downsides associated with the anti-inflammatory diet, although there may be a learning curve in mastering which anti-inflammatory foods to eat and which to avoid. If your diet currently consists of processed foods, meat, and dairy products, you may have a small adjustment period. You’ll need to clear your fridge and pantry of potentially inflammatory foods, and you’ll likely need to spend more time and effort preparing meals, since stopping for fast food is off-limits on this diet.

What to expect when you start the anti-inflammatory diet?

Once you start eating this way, you’ll probably start to feel better overall. People may feel better, with less bloating, gastrointestinal discomfort, and body aches. You may also see your mood improve as you change your eating habits. But don’t expect to see immediate changes when it comes to any health condition; it will probably take two or three weeks to see this kind of effect, and perhaps up to 12 weeks to know whether the results will be maintained.

In summary, should you change your diet to reduce inflammation? The anti-inflammatory diet is a healthy approach to eating, whether or not you suffer from chronic inflammation. An anti-inflammatory diet is a way of life that will ultimately improve your overall health, well-being, and quality of life. Anyone can benefit from such an eating plan, and I’ve found it especially helpful for populations with chronic inflammation and health issues.

Sources

Foods That Fight Inflammation. Harvard Health Publishing. November 7, 2018.
Sears B. Anti-Inflammatory Diets. Journal of the American College of Nutrition. 2015.
Pahwa R, Goyal A, Bansal P, et al. Chronic Inflammation. StatPearls. March 2, 2020.
Aggarwal BB, Prasad S, Reuter S, et al. Identification of Novel Anti-Inflammatory Agents From Ayurvedic Medicine for Prevention of Chronic Diseases. Current Drug Targets. October 1, 2011.
Vasunilashorn S. Retrospective Reports of Weight Change and Inflammation in the US National Health and Nutrition Examination Survey. Journal of Obesity. February 11, 2013.
Obon-Santacana M, Romaguera D, Gracia-Lavedan E, et al. Dietary Inflammatory Index, Dietary Non-Enzymatic Antioxidant Capacity, and Colorectal and Breast Cancer Risk (MCC-Spain Study). Nutrients. June 21, 2019.
Garcia-Arellano A, Martinez-Gonzalez MA, Ramallal R, et al. Dietary Inflammatory Index and All-Cause Mortality in Large Cohorts: The SUN and PREDIMED Studies. Clinical Nutrition. June 2019.
Khanna S, Jaiswal KS, Gupta B. Managing Rheumatoid Arthritis With Dietary Interventions. Frontiers in Nutrition. November 8, 2017.
Pasanisi P, Gariboldi M, Verderio P, et al.
A Pilot Low-Inflammatory Dietary Intervention to Reduce Inflammation and Improve Quality of Life in Patients With Familial Adenomatous Polyposis. Integrative Cancer Therapies. May 2019.
Casas R, Sacanella E, Urpi-Sarda M, et al. Long-Term Immunomodulatory Effects of a Mediterranean Diet in Adults at High Risk of Cardiovascular Disease in the PREvención con DIeta MEDiterránea (PREDIMED) Randomized Controlled Trial. Journal of Nutrition. July 2016.
https://www.highprotein-foods.com/intestine-joints-the-benefit-of-an-anti-inflammatory-diet-2.html
How to Promote Gut Health

It is important to learn how to improve your digestive health. This article will provide tips on how to eat a balanced diet and avoid hidden monosaccharides, sugar, processed foods, artificial sweeteners, and NSAIDs. Eat a wide variety of whole foods that are rich in polyphenols, and steer clear of medications like aspirin where you can. Maintaining the health of your digestive tract is essential.

Diversify your diet

One of the simplest ways to boost the health of your gut microbiome is to diversify your diet. While a traditional western diet is deficient in variety due to its large proportion of processed foods, sugar, and fat, a varied diet will support the development of beneficial bacteria. To broaden the range of your diet, concentrate on whole foods such as fruits, vegetables, nuts, whole grains, seeds, and legumes. These foods can be incorporated into your meals and snacks.

The American diet is full of processed foods, sugar, and high-fat dairy products. These foods can make it more difficult for our digestive systems to work well and can result in toxic byproducts. Consuming refined and processed carbohydrates can cause inflammation and decrease the diversity of the microbiome. Diversifying your diet will help support proper digestion and improve overall health. You can improve your gut health by adding more fruits and vegetables to your meals every day.

Avoid hidden sources of monosaccharides

It is possible to make dietary changes to eliminate monosaccharides from your diet and improve your gut health. Make sure you eat fermented vegetables or unprocessed beef, as well as fiber-rich vegetables. Certain foods can damage the beneficial bacteria that live in your gut. If you're looking for a diet that promotes gut health, try eliminating foods that trigger digestive symptoms, like gluten and sugar. It is also possible to take probiotic supplements, which can help your body build up beneficial bacteria. Stress can also damage the beneficial bacteria in your gut.

Research has shown that a diet high in fiber and omega-3 fatty acids can help reduce the amount of pro-inflammatory bacteria found in the gut. Flavonoids can also be beneficial to gut health; foods in the cabbage family and vegetable broths are excellent sources, and flavonoids are important for promoting healthy gut bacteria. You should also drink plenty of water, stay clear of alcohol, and limit your intake of processed food.

Eat foods rich in polyphenols

Polyphenols are a type of antioxidant found in a wide range of plants. They shield the body from disease and have beneficial effects on the gut microbiome. Polyphenols are especially abundant in colorful fruits and vegetables, and people who eat a diet rich in fruits and vegetables tend to have a lower risk of certain ailments. Include more organic foods like vegetables and fruits, and avoid foods that are processed or have added chemicals.

Flavonoids are the largest class of polyphenols. They include the well-known quercetin as well as anthocyanins. Both black and green teas are loaded with polyphenols, and certain of these compounds possess anti-cancer properties. If you're trying to figure out how to include enough polyphenols in your diet, these are good places to start.

Avoid NSAIDs

Although NSAIDs are often prescribed to relieve discomfort, they can also have adverse effects on the gut.
They may cause bleeding, ulcers and other symptoms, and they can contribute to long-term gut problems such as leaky gut syndrome, irritable bowel syndrome, and Crohn's disease. This is why you should avoid NSAIDs in order to promote gut health and prevent these adverse effects.

Antibiotics are a highly effective treatment for serious infections. However, they are often misused or overused. Antibiotics should only be prescribed by your physician and should not be used for self-treatment. The normal balance of bacteria in the gut is disrupted by antibiotics and nonsteroidal anti-inflammatory drugs (NSAIDs). It is crucial to stay clear of unnecessary NSAIDs in order to improve gut health.

Eat fermentable fiber

One of the best ways to improve your health is to eat more fiber. This is not a hard job, and you can find a myriad of fiber-rich foods, such as fruits, vegetables, whole grains, as well as VINA sodas. All of these foods contribute to a healthy gut microbiome. Fiber is also vital for maintaining healthy cholesterol levels and lowering blood pressure.

Recent advances in microbiome research have led to a growing number of prebiotic and probiotic ingredients that can help improve gut health. Prebiotic fermentation can boost the immune system and improve blood lipid levels, and it continues to be studied. While the role of these supplements is unclear, there are many potential benefits. One study found that fermentable fibers can improve glycemic control; other studies did not show any effect.

Exercise

Researchers at the University of New Mexico found that regular exercise is beneficial for gut health. Exercise can boost the growth of healthy bacteria, which is vital for our overall wellbeing. This can lead to a better mood and better mental health. It is also a major element in neurogenesis, the creation of new neural connections in the brain. You should select a type of exercise that is beneficial to gut health.

The effects of exercise on the gut microbiome were seen in a study that followed previously inactive men and women for six months. Both groups showed improvement in the composition of their gut bacteria and higher levels of physiologically relevant compounds. Furthermore, both aerobic exercise and voluntary wheel running have resulted in increases in the number of gut bacteria. While these results seem promising, they must be confirmed by further research.
https://www.thinktwicepakistan.com/walmart-gut-health/
Qualys EDR / Patch Management Blog Series [Part 2]
NOTE: This is the second part of a blog series.
Part 1: Qualys Patch Management (PM)
Part 2: Qualys Endpoint Detection and Response (EDR)
Part 3: PM and EDR Remediation Demonstration
Overview
In this blog post, we will take a look at the Endpoint Detection and Response (EDR) module in Qualys. We will answer questions such as "What is Qualys Endpoint Detection and Response?", "What is it used for?", "How does it work?", "How is it activated and set up?", "What kind of features does it have?" and "What can we do with this module?". In the next and final part of this blog series, we will discuss patching with Qualys Patch Management (PM) and remediation with Qualys Endpoint Detection and Response (EDR). In addition, the impacts of PM and EDR on target hosts will be demonstrated.
Qualys Endpoint Detection and Response (EDR) Overview
Endpoint Detection and Response, or EDR, is a cybersecurity solution that detects and responds to cyber threats on a continuous basis. The Qualys multi-vector EDR application is an evolved superset of the Indication of Compromise (IOC) application. Qualys EDR is delivered by the Cloud Agent, allowing continuous monitoring and data gathering from the agent via the EDR Manifest. After EDR is enabled for an asset, the agent begins collecting real-time data about the asset's numerous objects and associated actions/events and uploads the data to the Qualys Cloud Platform for analysis. This information is correlated directly with Qualys Malware Labs threat intelligence and research, and you can view the incidents recognized by EDR, as well as the system events and details gathered by the Cloud Agent, in the EDR app. Events are prioritized using a proprietary scoring system, enabling a prioritized response, and the user is informed of harmful events, infected hosts, and attacks taking place in their environment, among other things.
EDR extends the Qualys Cloud Platform's threat hunting and remediation response capabilities. EDR identifies suspicious activity, validates the presence of known and unknown malware, and responds to your assets' remediation needs. Finally, EDR unifies context vectors such as asset discovery, rich normalized software inventory, end-of-life or end-of-support visibility, vulnerabilities and exploits, misconfigurations, in-depth endpoint telemetry, and network reachability in a single cloud-based app with a powerful backend to correlate it all for accurate assessment, detection, and response. Correlation across all attack vectors helps identify the root cause and reduce the likelihood of future attacks.
EDR Activation and Setup
The following configuration steps are required to use the Qualys EDR application successfully:
2.1 On the target host, install the Cloud Agent.
Note: The Cloud Agent must be installed with an activation key that is compatible with the EDR module. If you're not sure how to install and configure the Qualys Cloud Agent, check out the Qualys Cloud Agent Installation Guide with Windows and Linux Scripts.
2.2 Assign the target agent host to an EDR-enabled Configuration Profile.
To create a Configuration Profile using assets, create a new asset tag, then create a new Configuration Profile to work with.
PERFORMANCE: The high-performance option performs more frequent checks.
ASSIGN HOSTS: Choose which assets will receive this profile. Assets can be selected by Asset Tag or by Name.
EDR: Enable the EDR module for the Configuration Profile.
2.3 On the target agent host, activate the EDR module.
You can manually activate the EDR module on the asset instead of using the Configuration Profile to enable EDR.
EDR Application Overview
On the EDR Welcome page, use Configure Agents for EDR to configure agents and upgrade activation keys for the EDR module. You can also manage tags from the EDR Welcome page.
Discover and Monitor: Install lightweight agents on your IT assets in minutes. These may be installed on your on-premises systems, in dynamic cloud settings, and on mobile devices. Cloud Agents (CA) are self-updating and are maintained centrally by the Cloud Agent platform (no reboot needed).
Detect and Investigate: View and investigate all of your EDR incidents and events in one central place. You'll get a list of all incidents that have been detected across all of your assets, and you can search through all of your incidents and events in a matter of seconds.
Respond and Prevent: Respond to suspicious and harmful activity from a central location. In the case of a harmful or suspicious event, a remediation action will be provided.
3.1 EDR User Interface
There are five sections to the EDR user interface:
- DASHBOARDS: Dashboards allow you to examine your assets, see your threat exposure, use saved searches, and quickly remediate harmful or suspicious events. EDR is integrated with the Unified Dashboard (UD), which visualizes information from many Qualys apps in a single location. To visualize particular information, you may use Qualys' EDR dashboards or create your own widgets and dashboards.
- INCIDENTS: This section lists all incidents that have been detected in your environment. You may examine events by malware family name and malware category using Qualys' advanced search and filter features.
  - View and search assets that have been identified as being infected with malware.
  - Investigate incidents based on Active Threats and malware families/categories.
- HUNTING: This section lists all events gathered from EDR-enabled assets by the Cloud Agent. You may use this page to filter and search for harmful File, Process, Network, and Mutex events, as well as execute remediation actions.
  - Examine the information gathered by EDR agents.
  - Search for events based on their attributes, jump to events that happened within a specific time range, and organize events by type, action, and score.
  - Take remediation action in the case of malicious files, processes, mutexes, and network events.
  - Results of a search can be exported.
- ASSETS: This section lists agent host assets that have the EDR module enabled. You can get up-to-date information on a specific asset's details, events, and incidents all in one place. When examining asset details, the user may view the asset's inventory, vulnerability, compliance, EDR, and other data in one location, and is routed directly to the Hunting or Incidents tabs when reading event or incident details.
  - Lists all EDR-enabled agent assets.
  - Provides up-to-date information about a given asset's details, events, and incidents.
  - Asset data is available for download in CSV format.
  - Shows the assets that have been affected and the infections that have occurred.
- RESPONSES: This section shows the status of remediation actions requested in response to harmful events.
You may also set EDR to monitor events for conditions defined in a rule and send you notifications when events match the condition.
- View the status of response actions.
- Set up rule-based notifications.
Events and Incidents
- An "object" is an artifact on the system, without state information.
- Object types:
  - File – a PE file on a locally attached disk (called an "image")
  - Process – a running process, usually started from an image
  - Network Connection – a network state of a process
  - Mutex – a mutant handle, a shared memory resource used by processes
  - Registry – Windows registry locations used for persistence (auto-start)
- Actions and events include state information:
  - File (Created | Deleted | Renamed | Write)
  - Process (Running | Terminated)
  - Mutex (Running | Terminated)
  - Network (Connected | Disconnected | Listening)
  - Registry (Created | Deleted)
4.1 Hunting Events
Search for events using event attributes, jump to events that took place within a specific timeframe, organize events by category, and view event and asset information. For EDR-enabled assets, the Hunting section lists all event data gathered by the Cloud Agent. Using various search queries, you can filter harmful events and search for malicious files, processes, mutexes, and network events. You can also sort events by Type (file, process, mutex, and network), Action (file created, network connection established or listening, process running, and so forth), and Score. Finally, you can take steps to remediate harmful events.
Filter for harmful events to see a list of all malicious events, which you may then delete or quarantine. The event's details page has all of the pertinent information: click Quick Actions > Event Details to go to the Event Details page. The Event Details page displays information about the object (file/process/mutex/network connection) and its state (file created, process/mutex running or terminated, network listening on a port, network connection established), such as the image path, associated user, process ID, MD5/SHA256 hash value, and so on. For Process, Mutex, and Network events, the Event Details page also displays an event tree showing all events linked to the selected event.
Current View: Active State
Only active asset events are shown:
- File Created (existence)
- Process Running
- Mutex Running
- Network Listening / Established
- Registry Created (existence)
Historic View: "Look Back" Investigation
Stored as state-change events:
- File Created / Deleted
- Process Running / Terminated
- Mutex Running / Terminated
- Network Listening / Established / Closed
- Registry Created / Deleted
You may check for available research on a threat by searching for the file hash on Google, or you can compare EDR findings against the VirusTotal database to see whether other scanning engines have flagged the file/process/mutex as dangerous. VirusTotal gathers information from a variety of antivirus programs and online scan engines to catch infections that the user's own antivirus may have missed, as well as to rule out false positives.
4.2 Using Queries for Hunting Suspicious Activity
What are the most interesting file properties?
- Examine the signer and certificate information.
- Look for files running out of $RECYCLE.BIN, %temp% or %downloads%.
What do you search for when you're looking for evasion methods?
- Malware files may be renamed to look like native Windows files.
- Compare filenames within %system% to files on disk.
- Look for suspicious use of SVCHOST, WMI, and PowerShell.
Is it safe to trust your files?
- Examine the certificate information.
- Look for persistent untrusted files, untrusted processes, and untrusted programs that generate network traffic, and add them to your results.
Sample Hunting Search
"Suspicious Use of Windows Command Shell and PowerShell" is a threat actor tactic and hunting approach:
- Threat actors aim to avoid detection by loading malicious scripts into memory via whitelisted applications.
- In normal use, MS Office applications do not invoke PowerShell or cmd.exe.
- Hunting approach: look for cmd.exe or powershell.exe spawned by winword.exe, excel.exe, or powerpnt.exe.
Query: type:PROCESS and parent.name:[winword.exe, excel.exe, powerpnt.exe] and process.name:[cmd.exe, powershell.exe] and process.arguments:-e*
This identifies any MS Office processes that have launched the Windows command shell or PowerShell. Threats such as fileless attacks use legitimate/whitelisted programs such as the Windows command shell or PowerShell to load malware directly into memory. Although Microsoft's PowerShell is preinstalled on nearly all Microsoft systems and is considered trusted software, seeing it launched via MS Word, PowerPoint, or Excel is highly anomalous and suspicious. (A toy re-implementation of this heuristic appears at the end of this section.)
Sample Hunting Search 2
"Suspicious Use of WMI" is a threat actor tactic and hunting approach:
- WMI ("wmiprvse.exe") is a system process that runs WMI commands on a remote host.
- Threat actors use it as a remote execution utility and to establish persistence.
- Hunting approach: powershell.exe running with wmiprvse.exe as its parent process may be suspicious.
Query: type:PROCESS and parent.name:wmiprvse.exe and process.name:powershell.exe and process.arguments:-e*
This finds all WMI-invoked PowerShell processes that are currently executing. WMI was created as Microsoft's implementation of web-based enterprise management (WBEM) for system administration and auditing; however, attackers may utilize it at any point in the attack lifecycle, from gaining a foothold on a system to stealing data from the environment, and anything in between. Because WMI is so versatile, attackers have found a variety of methods to use it to run malicious code, and because of the large quantity of legitimate activity in today's organizations, finding malicious WMI and PowerShell activity in memory can be difficult. Context is crucial in hunting, and looking at the parents and children of processes can frequently provide that context.
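To make the hunting logic above concrete, here is a small Python sketch that applies the first parent/child heuristic (Office application spawning a shell with an encoded-command argument) to a list of process events. This is illustrative only: the event dictionaries and field names (type, parent_name, process_name, arguments) are hypothetical stand-ins for whatever your event export contains, not the Qualys API or export format.

```python
# Illustrative only: a local re-implementation of the "Office spawning a shell"
# hunting heuristic described above. Field names are hypothetical, not Qualys'.

OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def is_suspicious(event: dict) -> bool:
    """Flag PROCESS events where an MS Office app spawned a shell with a -e* argument."""
    return (
        event.get("type") == "PROCESS"
        and event.get("parent_name", "").lower() in OFFICE_PARENTS
        and event.get("process_name", "").lower() in SHELLS
        and any(arg.lower().startswith("-e") for arg in event.get("arguments", []))
    )

# Example usage on two synthetic events: the first should be flagged, the second not.
events = [
    {"type": "PROCESS", "parent_name": "WINWORD.EXE",
     "process_name": "powershell.exe", "arguments": ["-enc", "SQBFAFgA..."]},
    {"type": "PROCESS", "parent_name": "explorer.exe",
     "process_name": "powershell.exe", "arguments": ["-File", "backup.ps1"]},
]
for e in events:
    verdict = "SUSPICIOUS" if is_suspicious(e) else "ok"
    print(e["parent_name"], "->", e["process_name"], ":", verdict)
```

The same pattern extends to the WMI search: swap the parent set for {"wmiprvse.exe"} and keep the shell and argument checks.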
EDR Investigation and Response Actions
Use Active Threats to investigate incidents: Active Threats can be viewed by Host, Malware Name, and Malware Family.
- All hosts with threats are listed under Incidents.
- Filter results by malware family and category.
- The highest event score is used to calculate the asset score.
Details about incidents may be viewed by clicking on their names, and file details can be displayed from the process tree. In addition, remediation actions are available under View Mode > Process Tree: files on the process tree can be deleted or quarantined. When you run a delete or quarantine action, you'll get a message and the status of the action will change to In Progress. On the RESPONSES page, you can see all quarantined or deleted files and their statuses.
Note: For Windows assets, response actions are only supported in Cloud Agent version 4.0.0 and higher.
Rule-Based Alerts
- For EDR to generate alerts, you must first configure a rule action indicating what should be done when events matching a condition are identified.
- Then, to issue the alert, you must create a rule containing trigger criteria and rule actions. EDR will send you notifications based on the rule action settings.
The first step is to configure a rule action that will be referenced in the alert rule. You can configure rule actions in the Responses section, under the Actions tab.
6.1 Configure Rules
The next step is to create a rule that will send out notifications when harmful events occur. You can configure rules in the Responses section, under the Rule Manager tab. To create a new rule, fill in the required information in the appropriate sections:
- Give the new rule a name and a description in the Rule Name and Description fields of the Rule Information section.
- Provide a query for the rule in the Rule Query area. The system uses this query to look for events. To test your query, click the Test Query button.
- To choose from a list of pre-defined queries, click the Sample Queries link.
- Three trigger criteria are available to use in combination with the rule query: Single Match, Time-Window Count Match, and Time-Window Scheduled Match.
- Choose the steps you want the system to take when an alert is generated in the Action Settings section.
6.2 Trigger Criteria
- Select Single Match if you want the system to send you an alert every time it finds an event that matches your search query.
- Select Time-Window Count Match when you want to trigger alerts based on the number of events returned by the search query over a set period of time. For example, an alert can be delivered if three matching events are detected within a 15-minute interval. (A small sketch of this count-based trigger appears after this section.)
- Select Time-Window Scheduled Match when you need to create alerts for matching events that happened during a specified time window. The rule is activated only when an event matching your search criteria is detected during the time specified in the schedule. Fill in all of the Rule Details fields.
6.3 Aggregating Alerts
For the trigger, you can group alerts based on:
- Action
- Asset Agent ID
- Asset Hostname, etc.
Example: aggregating an alert to find all running svchost.exe processes that do not have "-k" as an argument.
Goal: Find all running svchost.exe processes that do not have "-k" as an argument, and create an alert rule that notifies a Slack channel if one or more instances of such a process are found.
Rule-based alert configuration:
- Rule query for the search logic: process.name:svchost.exe and not process.arguments:-k
- Rule trigger: Single Match (one alert for one match)
- Action setting: raise an alert and post to Slack
6.4 Activity Tab
The Activity tab displays all of the alert activity for the timeframe specified. For each alert it shows the rule name, success or failure in delivering the alert message, whether aggregation is enabled for the rule, the action selected for the rule, the matches discovered for the rule, and the user who authored the rule.
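As promised above, here is a minimal sketch of how a Time-Window Count Match trigger can be evaluated. This is a generic sliding-window counter, not Qualys code; the timestamps and the alert printout are hypothetical.

```python
# Illustrative only: evaluating a Time-Window Count Match trigger locally.
from collections import deque

class CountMatchTrigger:
    """Fire when at least `threshold` matching events occur within `window_seconds`."""
    def __init__(self, threshold: int, window_seconds: float):
        self.threshold = threshold
        self.window = window_seconds
        self.times = deque()  # timestamps of recent matching events

    def on_match(self, timestamp: float) -> bool:
        self.times.append(timestamp)
        # Drop matches that have fallen out of the time window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.threshold

# Example: alert if 3 matches occur within 15 minutes (900 seconds).
trigger = CountMatchTrigger(threshold=3, window_seconds=900)
for t in [0, 100, 500, 2000]:  # seconds since some arbitrary epoch
    if trigger.on_match(t):
        print(f"ALERT at t={t}: {len(trigger.times)} matches in the window")
```

With the inputs above, the trigger fires at t=500 (three matches within 900 seconds) but not at t=2000, by which time the earlier matches have aged out of the window.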
Prevention
To correlate various attack vectors and offer a wider context for remediation and prevention, EDR integrates with other Qualys applications like AI, VMDR, PC, and PM.
7.1 Global Asset Inventory (AI)
Visibility is the first step toward endpoint security. Qualys Global Asset Inventory (AI) delivers a single source of truth for your assets. It's a central place where you can see all of the data collected by the various sensors you've installed. The asset inventory is automatically updated with data obtained from your sensors, and the data is normalized and categorized to offer a better perspective. By acquiring an inventory, you fulfill the first requirement of security and compliance teams: visibility.
- Gives you comprehensive visibility into your hybrid IT environment.
- Helps eliminate blind spots.
- Provides critical context for a multi-vector EDR strategy.
- Asset Inventory is included with EDR.
Use queries to:
- Quickly identify assets missing EDR.
- Tag assets for EDR activation.
- Create widgets to keep track of assets without EDR.
Use queries to:
- Identify EOL or EOS software and browsers.
- Identify assets with EOL or EOS software.
- Enable EDR on target assets to monitor activity and prevent the threat from spreading.
7.2 Detect Vulnerabilities and Missing Patches
- Use VMDR to quickly find vulnerabilities linked to particular malware types identified by EDR.
- Identify assets that have these vulnerabilities.
For exploitable vulnerabilities, you can eliminate the root cause of malicious attacks using a combination of VMDR, Patch Management (PM), and EDR. You can quickly identify all missing patches for these exploitable vulnerabilities, then use VMDR's integrated Patch Management workflows to create a patch job that patches all such vulnerabilities across the environment. Otherwise, these vulnerabilities could be exploited, and your team would have to spend time detecting, investigating, correlating, and responding to the resulting incidents.
7.3 Additional Context from Configuration Management
- Detect misconfigurations and ineffective security measures.
- Utilize Qualys' out-of-the-box policies for control evaluation.
- Examine your compliance posture and take steps to limit the risk of malware and ransomware.
In addition to vulnerabilities, an adversary may identify and exploit weaknesses in your infrastructure's configuration, such as architectural issues, misconfigurations, and insufficient security measures. Finding failed controls linked to malware/ransomware propagation, or controls mapped to the MITRE framework, can help uncover misconfigurations and minimize the attack surface.
Conclusion
In Part 2 we have learned about Qualys Endpoint Detection and Response (EDR) and discussed its features and benefits. We learned how to enable and configure EDR using Configuration Profiles. The EDR application, events, response actions, and rule-based alerts were all examined. We looked in detail at Hunting Events and Incidents, which are the most essential aspects of EDR. We also discussed how EDR interacts with other Qualys products like AI, VMDR, PC, and PM to correlate various attack vectors and give more context for remediation and prevention.
In the next and last post of this blog series, Part 3: PM and EDR Remediation Demonstration, we'll see what Qualys Patch Management (PM) and Qualys Endpoint Detection and Response (EDR) do on target hosts. We'll demonstrate:
- a PM patching example
- an EDR example of deleting/quarantining a malicious file
- an EDR response action/alert example.
Stay tuned!
https://www.prplbx.com/resources/blog/qualys-endpoint-detection-and-response/
IT security is the single biggest challenge faced by organizations today, with threats like ransomware costing companies billions of dollars to detect and remediate. To address advanced threats, IT and security teams must start working together to leverage both domain expertise and data. As this Gartner report makes clear, network forensic data can play a critical role in detecting and mitigating security threats, but it is often under-utilized due to a lack of alignment between key groups.
"In most organizations, IT operations and security operations operate as two distinct units. The objectives of these two units are often distinct, so the alignment of processes to include network forensics as part of the security incident response is critical."*
Download Gartner's report today to start re-thinking how IT operations and security operations teams can work together to leverage network performance monitoring (NPM) technologies for security threat detection and forensic investigation. From understanding how IT operations can configure NPM solutions to enable more efficient workflows, to updating skill-set training for better incident resolution, this report is designed to help you define a new era of collaboration-driven security in IT. Fill out the form to access the full report, and enjoy!
https://www.extrahop.com/platform/resources/whitepapers/npmd-respond-to-security-breach/
Supply chain management (SCM) has become a topic of critical importance for both companies and researchers today. Supply chain optimization problems are formulated as linear programming problems with transportation costs, and they arise in several real-life applications. In optimizing supply chain problems, the inbound logistics segment has been one of the most neglected areas of SCM, and very few studies have applied optimization models to SCM that account only for the inbound logistics system. This study identifies that research gap, and the proposed method attempts to minimize the total transportation cost of the inbound logistics system with reference to the resources available at the plants as well as at each depot. A genetic algorithm and LINGO were used to help top management ascertain how many units of a particular product should be transported from each plant to each depot so that the total prevailing demand for the company's products is satisfied while the total transportation cost is minimized. Finally, a case study involving a renowned Bangladeshi retail super shop is used to validate the performance of the algorithm. To evaluate the performance of the proposed genetic algorithm, the obtained results were compared with the outputs of LINGO 17.0. Computational analysis shows that the GA produces results very close to the optimal solution for very large problems, while for small problems the exact method (LINGO) works better than the heuristic. Supply chain management is a field of growing interest for both companies and researchers. The definition of supply chain management (SCM) varies from one enterprise to another. The chain is concerned with two distinct flows: a forward flow of materials and a backward flow of information. At its highest level, a supply chain comprises two basic, integrated processes: (1) the production planning and inventory control process and (2) the distribution and logistics process. The aim of logistics activities, as a bridge between manufacturers and customers, is to bring the right product to the right place in the right quantity at the right time [1]. Inbound and outbound logistics combine within the field of supply chain management as managers seek to maximize the reliability and efficiency of distribution networks while minimizing transport and storage costs. Inbound logistics refers to the transport, storage, and delivery of goods coming into a business, whereas outbound logistics refers to the same for goods going out of a business. According to Ali Naimi Sadigh [2], a supply chain is an interrelated network of suppliers, manufacturers, distributors, and customers that plays an important role in competitive markets in satisfying customer demand. Delivering a product to customers at a suitable time, with the desired quality and at minimum cost, is a complicated process that requires several internal and external organizational transactions. Since efficiency and responsiveness are two generic strategies for supply chain network design, coordination of these transactions is an important issue. Transportation has a prime role in supply chain management because products are rarely produced and consumed in the same place. According to the studies in [3], genetic algorithms work better where traditional search and optimization algorithms fail to reach the goal performance. The genetic algorithm is a popular algorithm for selecting optimal routes.
Many researchers are working on optimizing routes in supply chain networks using genetic algorithms. In this research, a GA was used to obtain the total optimized cost and the allocation of truckloads, which were then prioritized. Our proposed model consists of a single objective function that minimizes the total transportation cost between plants and depots and determines the optimal truckloads to be transported from each plant to each depot. The remainder of this paper is organized as follows. Section 2 reviews the literature on the problem considered. Section 3 gives a descriptive overview of linear programming. The ordinary genetic algorithm is introduced in Section 4. Section 5 presents the mathematical formulation for the transportation cost of truckloads. The solution procedure is explained in Section 6. Section 7 reports the performance comparison between the GA and the exact method on some numerical examples. Finally, conclusions and future research directions are drawn in Section 8. In 1941, Hitchcock first developed the transportation model. Dantzig then applied the simplex method to the transportation problem as the primal simplex transportation method [4]. The modified distribution method is useful for finding the optimal solution of the transportation problem. Whenever there is a physical movement of goods from the point of manufacture to the final consumers through a variety of distribution channels (wholesalers, retailers, distributors, etc.), there is a need to minimize the cost of transportation (such as maintenance, personnel, fuel, and loading/offloading costs) so as to increase the profit on sales. Transportation problems arise in all such cases. The aim is to assist top management in ascertaining how many units of a particular product should be transported from each plant to each depot so that the total prevailing demand for the company's product is satisfied while the total transportation cost is minimized [5]. According to Ali et al., transportation criteria (for example, costs and mode of transportation) play an important role in achieving sustainability across a supply chain, as well as in enhancing supply chain performance [6]. Prichanont et al. used discrete-event simulation to demonstrate that the number of trucks for harvested crops at a processing mill should be drastically reduced in order to avoid excess supply [7]. Emphasis has been placed on the creation of value, which can only be achieved through internal and external organizational supply chain collaboration [3, 8]. Hong and Liu [9] applied the knowledge-based view to information processing and knowledge development in organizational supply chain performance. Using the knowledge-based view, they could explain substantial variance in the cycle time of organizational supply chain performance, which shows the relevance of knowledge sharing to achieving supply chain performance in an organization. The relevance of this theory to the objective of the present study is that it demonstrates the use of strategic inbound transportation management practices as a resource that leads the organization to reduced transport and communication costs and thus contributes to supply chain performance [10]. Strategic inbound transportation management practices are intangible resources that firms can utilize as part of their organizational capabilities to gain a competitive advantage by integrating strategic inbound management practices to suit customer needs.
Strategic inbound transportation practices are the best practices used in transportation to ensure that manufacturers optimize cost [11]. These are the methods or techniques found to be the most effective and practical means of achieving transportation objectives such as low costs, timely delivery of transportation-related information to the rest of the enterprise and to customers, and increased transportation velocity, while making optimum use of the firm's resources [12]. A well-run inbound transportation program can reduce costs, improve service, minimize delays, reduce confusion, and raise performance; it can drive efficiencies across the entire supply chain. Lack of optimization is another gap: suppliers at times just want to get product shipped out of their way rather than for the client's benefit. This can be corrected by optimizing orders and consolidating loads so that truck weights are maximized before dispatch. The highlighted challenges can be minimized through optimization, planning, automation, and collaboration that facilitate control of transportation costs; embracing omni-channel; complying with set standards and multi-enterprise requirements; and using data to improve operations [13]. According to the studies of Tiwari and Mehnen [14], genetic algorithms work better where traditional search and optimization algorithms fail to reach the goal performance. Lawrynowicz et al. also assert that GAs are efficient tools for solving complex optimization problems, highlighting the problem of minimizing the total cost of a distribution network, which shares some features with the problem addressed in this paper [15]. Wen et al. proposed a genetic algorithm using an integer encoding to represent the sequence of cargo items to be delivered, in order to solve a logistics scheduling problem and optimize the total cost of a location-routing-inventory problem [16]. The main goal of a supply chain is to deliver the right supplies in the right quantities to the right locations at the right time, and the strategic goal in logistics is to reduce costs, improve efficiency, and increase customer value and satisfaction. That is why we propose a supply chain network with the minimum possible inbound transportation cost and number of truckloads to be delivered, solved with a genetic algorithm.
Linear programming (LP; also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function subject to linear equality and inequality constraints. Its feasible region is a convex polytope, a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. Linear programs are problems that can be expressed in canonical form as

maximize $c^T x$ subject to $Ax \le b$ and $x \ge 0$,

where $x$ represents the vector of variables (to be determined), $c$ and $b$ are vectors of (known) coefficients, $A$ is a (known) matrix of coefficients, and $c^T$ denotes the transpose of $c$. The expression to be maximized or minimized is called the objective function ($c^T x$ in this case). The inequalities $Ax \le b$ and $x \ge 0$ are the constraints, which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions; if every entry in the first is less than or equal to the corresponding entry in the second, then we can say the first vector is less than or equal to the second vector.
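As a concrete illustration of the canonical form (a minimal sketch, not part of the paper; all numbers are invented), the small transportation-style LP below, with two plants and two depots, is solved with SciPy's linprog:

```python
# A minimal sketch: solving a tiny transportation LP in canonical form with SciPy.
# The costs, supplies, and demands are invented for illustration.
import numpy as np
from scipy.optimize import linprog

costs = np.array([[4.0, 6.0],    # cost of one truckload from plant i to depot j
                  [5.0, 3.0]])
supply = [30, 25]                # truckloads available at each plant
demand = [20, 35]                # truckloads required at each depot

c = costs.flatten()              # decision variables x = [x11, x12, x21, x22]

# Supply constraints (one row per plant): x11 + x12 <= 30, x21 + x22 <= 25
A_ub = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]])
# Demand constraints (one row per depot): x11 + x21 = 20, x12 + x22 = 35
A_eq = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]])

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 4)    # non-negativity: x >= 0
print(res.x.reshape(2, 2))               # optimal truckload allocation
print(res.fun)                           # minimum total cost (215.0 here)
```

This mirrors the structure of the model formulated later in the paper, with the supply rows as inequality constraints and the demand rows as equality constraints.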
Genetic algorithms were developed by J. Holland in the 1970s to understand the adaptive processes of natural systems; in the 1980s, he applied the genetic algorithm (GA) to optimization and machine learning problems. The GA belongs to a very popular class of evolutionary algorithms that use crossover and mutation operators and a selection procedure to generate a new population. In the new population, strong individuals have a greater chance of passing their genes to future generations via reproduction. In recent years, there has been growing interest in using genetic algorithms to solve many single- and multi-objective problems that are mostly NP-hard and combinatorial. Each candidate configuration is evaluated with respect to the key performance indicators. In our model, the crossover and mutation operators are segment-based, and the selection mechanism is based on the number of parents and offspring in the current generation.
According to T. Jones [17], one of the most general heuristics used in optimization is the idea that the value of solutions is to some extent correlated with how similar the solutions are; crudely, a good solution is more likely to be found near other good solutions than near an arbitrary solution. Naturally, 'nearby' or 'similar' needs to be qualified. The simplest notion of similarity of solutions is their proximity as measured in the given problem parameters; alternatively, we may define proximity in terms of the variation operators used by the search algorithm. In any case, the simplest way to use this heuristic is a hill-climbing algorithm: start with some random solution, try variations of this solution until a better (or at least non-worse) solution is found, move to this new solution, try variations of it, and so on. The actual success of a hill-climber, however, requires a stronger assumption to be true: that from any point in the solution space there is a path through neighboring points to a global optimum along which value increases monotonically. If this is true, then a hill-climber can find a global optimum. Although a hill-climber can do better than random guessing on almost all practical problems we encounter, it usually does not find a global optimum; more likely, it gets stuck in a local optimum, a sub-optimal point or plateau that has no superior neighboring points.
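The hill-climbing procedure just described can be written in a few lines. The sketch below is generic (not from the paper; the toy objective is invented): it accepts the first improving single-coordinate move and stops when no neighbor improves, which is exactly the point at which it can become trapped in a local optimum.

```python
# Generic hill-climbing sketch for a minimization problem (illustration only).
import random

def hill_climb(objective, x0, step=0.1, max_iters=10_000):
    x = list(x0)
    best = objective(x)
    for _ in range(max_iters):
        improved = False
        for i in range(len(x)):
            for delta in (-step, step):
                cand = x.copy()
                cand[i] += delta
                val = objective(cand)
                if val < best:          # accept the first improving neighbor
                    x, best, improved = cand, val, True
                    break
            if improved:
                break
        if not improved:                # no better neighbor: a local optimum
            return x, best
    return x, best

# A convex toy objective has a single optimum, so hill climbing finds it;
# on multimodal objectives the same loop can stall far from the global optimum.
f = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2
print(hill_climb(f, [random.uniform(-5, 5), random.uniform(-5, 5)]))
```

A GA addresses this weakness by maintaining a population and recombining solutions, trading the hill-climber's locality for broader exploration.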
Consider a transportation problem faced by a local retail supermarket chain that has four major plants and five major district depots in Bangladesh. The production capacities of the plants are adequate to satisfy their customers, but the number of available trucks is limited. Given the company's present situation, the objective is to determine the number of truckloads to be transported from the plants to each depot so as to minimize the total transportation cost. The parameters and decision variables of the proposed model are as follows:

$i$: index of plants, $i = 1, \dots, m$
$j$: index of depots, $j = 1, \dots, n$
$c_{ij}$: transportation cost from plant $i$ to depot $j$
$a_i$: average available truckloads at plant $i$
$b_j$: average demand for truckloads at depot $j$
$x_{ij}$: number of truckloads transported from plant $i$ to depot $j$

The cost objective function:

Minimize $Z = \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}$ (1)

Subject to:

1. Constraints on the total available truckloads at each plant:
$\sum_{j=1}^{n} x_{ij} \le a_i, \quad i = 1, \dots, m$ (2)

2. Constraints on the total truckloads needed at each depot:
$\sum_{i=1}^{m} x_{ij} = b_j, \quad j = 1, \dots, n$ (3)

$x_{ij} \ge 0, \quad \forall i, j$ (4)

Equation (1) gives the objective, the minimization of the total inbound cost, subject to some constraints. Equations (2) and (3) define the capacity constraint of each plant and the equilibrium constraint of each depot, respectively. Each plant has its own maximum available truckload capacity: equation (2) ensures that the cumulative number of truckloads leaving a plant never exceeds that plant's available truckload capacity. Equation (3) states that the total number of truckloads leaving the plants for a depot should equal that depot's truckload demand, creating equilibrium between supply and demand. Equation (4) imposes the non-negativity restriction on the decision variables.
6.1 Chromosome Encoding
Each chromosome consists of several genes and represents a feasible solution. The chromosome is coded as integer numbers and describes the truckloads from each plant to each depot.
6.2 Initial Population
The value of each chromosome is generated randomly such that all constraints (e.g., the equilibrium of supply and demand between suppliers and customers) are satisfied. The initial population of the algorithm is feasible and generated randomly.
6.3 Fitness Function
The objective value of each chromosome is calculated from the mathematical model presented above. The objective (cost) value is obtained subject to the constraints; the fitness function is therefore the objective function value itself.
6.4 Population Sorting
The population is sorted into non-dominated solution sets based on the total weighted sum of the objective values of the chromosomes. The first ten non-dominated individuals are selected from the population and assigned grade 1, the next ten non-dominated individuals are assigned grade 2, and so on.
6.5 Crossover Operator
The crossover operator randomly selects two chromosomes from the population, selects equal-sized assignment schemes from each selected chromosome, and then swaps the positions of the selected schemes.
6.6 Mutation Operator
A bi-level mutation operator is used in this model. In the first level, a randomly selected set of truckloads in the chromosome has the value of each of its genes mutated by the same ratio. In the second level, some of the genes mutated in the first level are randomly selected, and their values are mutated again.
6.7 Stop Criterion
The stop condition is a fixed generation number; the algorithm stops when the number of new generations reaches this limit.
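To tie the encoding and operators together, here is a compact, generic GA sketch for the truckload problem. It is an illustration under simplifying assumptions, not the authors' implementation: truckloads are treated as continuous, a blending crossover stands in for the paper's segment-based operator, a marginal-preserving 2x2 transfer stands in for its bi-level mutation, tournament selection stands in for its grade-based sorting, and all numbers are invented.

```python
# Compact GA sketch for the plant-to-depot truckload problem (illustrative only).
import random

COSTS = [[4, 6, 5, 8],         # invented cost data: 3 plants x 4 depots
         [5, 3, 7, 6],
         [6, 5, 4, 3]]
SUPPLY = [40, 35, 45]           # truckloads available at each plant
DEMAND = [25, 30, 30, 20]       # truckloads required at each depot (<= total supply)
M, N = len(SUPPLY), len(DEMAND)

def cost(x):
    return sum(COSTS[i][j] * x[i][j] for i in range(M) for j in range(N))

def random_feasible():
    """Fill each depot's demand from plants with remaining supply, in random order."""
    x = [[0.0] * N for _ in range(M)]
    left = SUPPLY[:]
    for j in random.sample(range(N), N):
        need = DEMAND[j]
        for i in random.sample(range(M), M):
            take = min(left[i], need)
            x[i][j] += take
            left[i] -= take
            need -= take
    return x

def crossover(p1, p2):
    """Convex blend of two parents; preserves demand equalities and supply limits."""
    lam = random.random()
    return [[lam * p1[i][j] + (1 - lam) * p2[i][j] for j in range(N)] for i in range(M)]

def mutate(x):
    """Shift flow around a 2x2 cycle; keeps every row and column total unchanged."""
    i1, i2 = random.sample(range(M), 2)
    j1, j2 = random.sample(range(N), 2)
    t = random.uniform(0, min(x[i1][j1], x[i2][j2]))
    x[i1][j1] -= t; x[i1][j2] += t
    x[i2][j1] += t; x[i2][j2] -= t

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=cost)

pop = [random_feasible() for _ in range(60)]
for _ in range(300):                      # stop criterion: fixed generation count
    nxt = []
    for _ in range(len(pop)):
        child = crossover(tournament(pop), tournament(pop))
        if random.random() < 0.3:
            mutate(child)
        nxt.append(child)
    nxt.append(min(pop, key=cost))        # elitism: carry the best solution forward
    pop = sorted(nxt, key=cost)[:60]
print("best cost found:", round(cost(min(pop, key=cost)), 2))
```

Because both operators preserve feasibility by construction, no repair step is needed; the paper's integer encoding would instead require constraint-aware repair after crossover and mutation.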
In this section, some computational experiments are presented to illustrate the performance of the proposed approach for supply chain decisions. The GA and the exact method were used to determine and compare the objective function value and the number of truckloads to be sent to each depot. Taguchi classifies objective functions into three categories: the smaller-the-better type, the larger-the-better type, and the nominal-is-best type. Because the objective here is cost minimization, the smaller-the-better type is used, with the standard signal-to-noise ratio $S/N = -10 \log_{10}\left(\frac{1}{n} \sum_{i=1}^{n} y_i^2\right)$. As mentioned in the previous section, the factors are the population size, the crossover rate, the migration fraction, and the number of generations. The different levels of these factors are shown in Table 1. We used an L12 (3^4) design; therefore, only 12 experiments are needed to set the parameters of the proposed algorithm. As indicated in Figure 1, the optimal levels of factors A, B, C, and D are A(2), B(2), C(3), and D(2), respectively. Based on the S/N ratio plot, the best factor settings for determining the optimal cost are shown in Table 2. Figure 2 illustrates the minimum cost value obtained by the genetic algorithm using the factor settings from the S/N ratio plot. Table 3 compares the number of truckloads to be sent from each plant to each depot obtained by the GA, implemented in MATLAB R2015a and run on an Intel Core i3 at 2.00 GHz with 4.00 GB of RAM, against the outputs of the LINGO 17.0 software. Figure 3 compares the results of the GA and the exact method, whose minimum monthly transportation costs for transporting the company's products from the four plants to the five depots are BDT 1.47E+09 and BDT 1.00E+09, respectively. Large firms that used strategic inbound transportation practices achieved reduced operational costs, reduced defects, and minimized lead times, which contributed to transportation performance. Furthermore, the desired objectives were converted into costs for comparison, and all related information from every function in the supply chain was shared to create process alignment and better communication, which are important characteristics of a responsive supply chain. In this paper, the authors have developed a mathematical model for a two-stage supply chain transportation network that minimizes transportation costs and selects the optimal number of truckloads. The mathematical model was formulated as a multi-stage single-objective optimization problem. The research used a genetic algorithm on a simple supply chain network model to find the optimized transportation cost for a renowned Bangladeshi retail super shop. To verify that the proposed algorithm solves problems effectively, the Taguchi method was applied to set the parameters of the genetic algorithm. To investigate the effectiveness of the proposed genetic algorithm, LINGO 17.0 was employed, and a comparative analysis was carried out between the results obtained from the genetic algorithm and the outputs of LINGO 17.0. The indicators considered in the comparison are (1) the value of the objective function and (2) the number of truckloads. The experimental results show that the exact method (LINGO) generates better solutions than the GA, but the GA performs better on large-scale problems in terms of solution quality and computation time. There are some key directions for future research. In this paper, all parameters are deterministic; accounting for the uncertainty of costs may require new solution methodologies. Considering multi-period planning problems in the mathematical models is also a promising avenue for future research.
http://pubs.sciepub.com/ajie/6/1/2/index.html
Written By: Stuart Zola, Ph.D.
Interim Provost and Executive Vice President for Academic Affairs, Emeritus
Professor Emeritus, Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta, Georgia 30322
Research Career Scientist (Retired), Atlanta VA Health Care System
Introduction
In my previous post, I discussed the fact that Alzheimer's disease has traditionally been viewed as a one-size-fits-all medical condition. An important new perspective would be to re-think and re-frame Alzheimer's disease as a spectrum disorder, much like the evolution of re-thinking and re-framing autism as autism spectrum disorder. Considering Alzheimer's as a spectrum disorder sets the stage for the application of one of the most promising and innovative approaches in modern medicine: personalized, precision medicine.
Precision medicine, sometimes called personalized medicine, refers to the tailoring of medical treatment to the individual characteristics of each patient. Several core components of precision medicine have been identified, including comprehensive risk assessment, tools for early detection of pathological processes, and interventions tailored to an individual's drivers of disease. These components, among others, speak to the potential power of precision medicine both in identifying specific profiles of Alzheimer's disease and, in turn, in developing more effective interventions and treatments that can be targeted at specific aspects of the disease and personalized for individualized treatment.
Considerable research on Alzheimer's disease is focused on understanding genetic risk. Indeed, a core component of precision medicine is a set of tools for early detection of pathological processes, including genetic mutations and changes, along with other approaches that focus on the molecular level, including a range of identified biomarkers. However, it is likely that environmental factors will also be key to risk assessment and to understanding and developing the most effective interventions. For example, it is well known that traumatic head injury increases the risk for Alzheimer's. There are now established protocols for treating traumatic head injury, as well as individual counseling protocols to reduce future risk in cases of traumatic head injury; these include patient management and the frequency of surveillance for preclinical AD. While treatment for head injury, and for other risk-factor conditions, can alter the course of vulnerability to Alzheimer's for some individuals, it is less effective for others. Precision medicine can begin to help us understand why, by uncovering how head trauma might interact with existing but undetected or latent pathophysiological processes and with certain genetic predispositions that steer some individuals toward AD vulnerability.
Most recently, an additional effective strategy in precision medicine has been the recognition of the potential impact of behavioral assessment tools, because they link so directly to the most obvious defining characteristic of Alzheimer's: cognitive decline. New and innovative behavioral assessment approaches, grounded in our understanding of the underlying neurology of memory function, have been developed and aimed at preclinical detection of oncoming cognitive decline. These behavioral assays are fast becoming part of precision medicine approaches because of their ability to selectively target dysfunction in particular brain regions and even to identify specific brain structures.
This kind of behavioral information will become invaluable in uncovering the relationships between symptoms in preclinical and established Alzheimer's disease and the potential biomarkers, genetic markers, and pathophysiologic processes that underlie the disease. The time is ripe for the convergence of two ideas: reframing Alzheimer's disease as a spectrum disorder, and applying personalized, precision medicine to Alzheimer's disease as a spectrum disorder. We must come to the realization that there are many profiles of Alzheimer's disease. The application of precision medicine will be the most effective and efficient way for us to discover and unpack the underlying neurology and genetics of Alzheimer's profiles so that effective individualized treatments and interventions can be developed. In my next post, I'll discuss additional behavioral interventions that could turn out to be useful not for predicting oncoming cognitive decline, but for more effectively managing individuals who are already affected by Alzheimer's disease. Dr. Stuart Zola is Co-Founder of MapHabit™. Learn more about MapHabit's™ work to help people with memory impairment live better HERE.
https://www.maphabit.com/post/alzheimers-and-precision-medicine/
If I Were A Planet But by some mysterious force of the universe, in its usual machinations and elusiveness, an answer: I am Saturn. According to Wikipedia, "Saturn is a popular setting for science fiction novels and films, although the planet tends to be used as a pretty backdrop rather than as an important part of the plot." A few more facts: In Roman mythology, Saturn is the god of agriculture....Saturn is the least dense of the planets; its specific gravity (0.7) is less than that of water. What do I fall towards? That question I've tried asking before and still have no answer. Or rather, I have so many answers. But to make things easier, to generalize: that the fictive becomes palpable. Saturn rotates very fast on its axis, but not at a uniform rate. What makes Saturn one of the most beautiful objects in the solar system is its ring system....The origin of the rings is obscure. It is thought that the rings may have been formed from larger moons that were shattered by impacts of comets and meteoroids. The ring composition is not known for certain, but the rings do show a significant amount of water. They may be composed of icebergs and/or snowballs from a few centimeters to a few meters in size. Much of the elaborate structure of some of the rings is due to the gravitational effects of nearby satellites. This phenomenon is demonstrated by the relationship between the F-ring and two small moons that shepherd the ring material. The whole system is very complex and as yet poorly understood....Like the other Jovian planets, Saturn has a significant magnetic field....When it is in the nighttime sky, Saturn is easily visible to the unaided eye. Though it is not nearly as bright as Jupiter, it is easy to identify as a planet because it doesn't "twinkle" like the stars do.
https://www.razelibrary.com/2005/08/if-i-were-planet.html
Synaesthesia is a perceptual condition in which stimulation of one sensory or cognitive pathway leads to automatic and involuntary experiences in a secondary sensory or cognitive pathway (e.g. seeing music or tasting words). Despite the fact that synaesthetes constantly perceive additional information during these inducer-concurrent associations, they are relatively unaffected by this irrelevant information. Chapter II investigates whether different samples of visual synaesthetes (i.e. those experiencing synaesthesia types involving visual concurrents, such as colours for letters or numbers in grapheme-colour synaesthesia, or sequence-space synaesthesias like calendar forms) are better than non-synaesthetes at filtering out task-irrelevant stimuli in different conflict tasks. Synaesthetes were more efficient than controls at ignoring irrelevant visual stimuli presented together with tactile targets, but no group differences were observed when they had to perform the same visuo-tactile task with reversed instructions (i.e. attend to visual and ignore tactile information) or in unimodal visual tasks (Studies 1 and 2). However, these results were not replicated in Study 3, which assessed a new sample of participants with the two versions of the same visuo-tactile tasks. This study also evaluated a) whether the observed synaesthetic attentional advantage was consistent across different combinations of sensory modalities, by introducing audio-visual versions of the same tasks, and b) whether different types of visual synaesthetes showed the same attentional advantages, by comparing groups of colour-synaesthetes (i.e. those experiencing synaesthesias involving colour as the concurrent) and sequence-synaesthetes (i.e. those experiencing sequence-space synaesthesias). Results revealed that sequence-synaesthetes were better than non-synaesthetes and colour-synaesthetes at filtering out irrelevant tactile distractors presented with visual targets; no other group differences were observed. This suggests that the specific types of synaesthesia, together with other factors discussed, might play a relevant role in shaping the cognitive abilities of synaesthetes. In order to explore the extent of the influence of synaesthetic individual differences, the second part of the thesis examines differences in personality between individuals with different types of synaesthesia (Chapter III – Study 4). Synaesthetes have a distinct personality profile compared to non-synaesthetes, but there are inconsistencies in the literature with respect to which personality traits differ. Most studies have focused on grapheme-colour synaesthetes, ignoring other types of synaesthesia. Here, we compare matched groups of colour-synaesthetes, sequence-synaesthetes, and non-synaesthetes on the Big Five personality traits and on specific empathy and positive schizotypy subscales. We replicated previous findings that synaesthetes show higher rates of Openness to Experience, Fantasising (a dimension of empathy), and Unusual Experiences (positive schizotypy) compared to non-synaesthetes. Importantly, some of these differences were only observed for sequence-synaesthetes, who showed higher rates of Openness to Experience compared to non-synaesthetes and colour-synaesthetes. However, no differences between synaesthetes and non-synaesthetes, or between the two types of synaesthetes, were found in a second sample.
We discuss several possible limitations affecting participant recruitment and assessment administration methods that could explain the differing results across samples. The last section of the thesis addresses synaesthetic heterogeneity from a methodological point of view. The need to screen and classify synaesthetes led to the development and validation of a screening questionnaire, the Edinburgh Synaesthesia Screening Assessment or ESSA (Chapter IV – Study 5). Although synaesthetic tests of genuineness, or consistency tests, are considered the 'gold standard' of synaesthesia assessment, they are only available for a few synaesthesia types. The ESSA is a self-report questionnaire developed to cover an exhaustive range of synaesthesia types (108) and designed to assess both synaesthetes and non-synaesthetes by asking respondents to rate how much each synaesthetic experience applies to them (on a 5-point Likert scale). Sensitivity and specificity analyses were carried out on ESSA scores obtained from a sample of over 150 (synaesthete and non-synaesthete) participants who also completed synaesthetic consistency tests for colour and sequence-space synaesthesias. Synaesthetes obtained significantly higher scores than non-synaesthetes, and the analyses showed acceptable rates of sensitivity and specificity (approximately 85.5% and 75.8%, respectively). These results were validated internally and externally (in a new sample of 275 participants), yielding somewhat more modest values. We consider various detected biases and other factors that might reduce the ESSA's performance and propose ways to address them in future studies. In sum, converging evidence seems to indicate that synaesthetes are not a homogeneous category of individuals: different cognitive and personality profiles are associated with different synaesthesia types. These findings have wider implications for synaesthesia research, as they suggest that grapheme-colour synaesthetes, who are predominantly assessed in synaesthesia studies, might not be representative of all synaesthetes. These observations might at least in part explain the contrasting results reported in the literature.
https://era.ed.ac.uk/handle/1842/37310
What is the main point of 1984? More broadly, the novel examines the role of truth and facts within politics and the ways in which they are manipulated. The story takes place in an imagined future, the year 1984, when much of the world has fallen victim to perpetual war, omnipresent government surveillance, historical negationism, and propaganda. Is 1984 by Orwell a true story? When George Orwell penned his now-famous dystopian novel "1984" (released 67 years ago, in June 1949), it was intended as fiction. What happens at the end of 1984 by George Orwell? In the final moment of the novel, Winston encounters an image of Big Brother and experiences a sense of victory because he now loves Big Brother. The Party had to go to extreme measures to break Winston, employing an entire cast of characters and spending countless hours following Winston and later interrogating him. Why was the book 1984 written? Orwell wrote 1984 just after World War II ended, wanting it to serve as a warning to his readers. He wanted to be certain that the kind of future presented in the novel would never come to pass, even though the practices that contribute to the development of such a state were abundantly present in Orwell's time. What happened in the year 1984 in the US? 1984 United States presidential election: Republican President Ronald Reagan defeats Democratic former Vice President Walter F. Mondale with 59% of the popular vote, the highest since Richard Nixon's 61% popular-vote victory in 1972. What countries banned 1984? Recently, China banned all copies of "1984" in the country. Like the fictional government presented in "1984," the Chinese Communist Party takes substantial measures when it comes to surveilling its people and censoring adverse news. What did Orwell predict with 1984? In 1949 George Orwell published his dystopian fiction classic "1984." It depicted a dark future where technology exists in the public realm only as a tool for the elite to control society. Sound familiar? In the 70 years since, much of what Orwell imagined has come to fruition, including facial recognition, auto-transcription, and music made by AI. What are Orwell's overall purposes through 1984? George Orwell's primary purpose in 1984 is to depict a totalitarian society and warn readers against allowing the world to fall into such a dystopian future after World War II. Orwell creates an entire world in the novel, one that is marked by surveillance and a lack of individual freedom. What was Orwell warning people about in 1984? George Orwell's 1984 is a warning against tyranny and authoritarian governments. Writing in the mid-twentieth century, Orwell had seen the rise of tyrannical governments and political leaders. What are some memorable quotes from Orwell's 1984? Among the most remarkable George Orwell quotes from 1984: "Big Brother is watching you" (speaking of truths, here's a quote getting straight to the point that you never really have any freedom or privacy at all); "You're only a rebel from the waist downwards" (in the novel, Orwell's main character Winston is in love with a woman named Julia); and "Until they become conscious, they will never rebel".
https://bookriff.com/what-is-the-main-point-of-1984/
In just five years, Millennials (those born between 1980 and 1995 and currently under 33 years of age) will make up 40% of the workforce in America; in 10 years, they will comprise 75% of the workforce. If your organization wants to remain competitive, you must address the unique needs of this growing group. The best place to start is to build a foundation for a thriving workplace culture and to foster individual autonomy.

What Millennials Want
According to 2014 research from the Intelligence Group, a division of the Creative Artists Agency that focuses on analysis of youth-focused consumer preferences and trend forecasting, Millennials are looking for employers who:
- Offer meaningful work. (64% say it’s a priority for them to make the world a better place.)
- Foster collaboration, not competition. (88% prefer a collaborative work culture rather than a competitive one.)
- Provide employee autonomy. (72% would like to be their own boss. But if they do have to work for someone else, 79% prefer that boss to be more like a coach or mentor.)
- Provide flexibility and support work-life integration. (Since work and life now blend together inextricably, 74% want flexible work schedules, and 88% want work-life integration.)

What Your Organization Can Do
Many workplaces offer options like wellness programs, flex-time, and maternity or paternity leave. While these may offer some benefits for employees, such programs and policies do not necessarily foster meaningful personal growth and development. They are elements of workplace climate, not workplace culture. (Confused about the difference between climate and culture? Read this.) In order to appeal to Millennials and support their need for personal growth and development, it is essential to focus on cultivating a thriving workplace culture. Here’s how to get started:

1. Clarify and align core values.
Millennials want to be part of something bigger than themselves, and they are looking for employers who offer them meaningful work. If your organization hasn’t already, clarify your company’s core values, the two or three behavioral traits that lie at the heart of your organization’s identity. (These are not to be confused with what Patrick Lencioni, in his book The Advantage, calls permission-to-play values like “honesty” or “integrity,” which are not what set your organization apart and uniquely define you.) One of the best ways to identify your core values is to look at the traits that are inherent and natural for your organization, and have been for a long time; what are the qualities of the employees who already embody what is best about the organization? For example, Patagonia’s core values are central to everything they do. Here is how they describe their values: Our values reflect those of a business started by a band of climbers and surfers, and the minimalist style they promoted… For us at Patagonia, a love of wild and beautiful places demands participation in the fight to save them, and to help reverse the steep decline in the overall environmental health of our planet. Zappos is also well known for their core values, intentionally living them and ensuring all business practices are guided by them. Some of their core values include:
- Deliver WOW through service
- Embrace and drive change
- Create fun and a little weirdness
- Build a positive team and family spirit
As you can see from these examples, your core values reflect your organization’s culture and your employment brand by framing the entire employee experience.
Once you’ve clarified your core values, involve employees in living them on a daily basis. How your organization can support this:
- Offer a Culture and Visioning Workshop. People support what they help create, so provide employees the opportunity to describe in detail what the employee experience looks like when everyone is living the core values. What behaviors will they see that are consistent with core values? What behaviors might sabotage the core values? Let employees see how they align with core values and vision as individuals, and how their work contributes to living the values and vision. With this foundation, employees can begin creating their development path, which will support meaningful work.
- Offer a quarterly Workplace Culture Workshop. Once employees have collectively created clarity around the behaviors that reflect the company culture, they will be able to create structure to nurture and protect your organization’s brand. Part of this structure includes supporting and holding each other accountable, so offer employees the opportunity to do so in these workshops. Have employees connect with peers and managers to reflect on how they see themselves and others behaving in a way that is or is not consistent with the core values and workplace culture. Not only will this address Millennials’ need for collaboration and meaningful work, but it will also bring to light any glaring problems or issues before too much damage has been done.
- Use your core values as a litmus test for everything you do – from hiring to recognition and even firing. Because leaders profoundly shape employees’ experience of the culture, every leader and manager within your organization needs to live the core values and create the conditions where employees feel valued; otherwise, all bets are off in terms of having a high performing organization and retaining Millennials. Assuming you have leaders who walk the talk, base employee recognition on how people are living out the core values and contributing to a thriving workplace culture.

2. Provide physical and mental space where employees have an opportunity to “pause.”
In his best-selling book, “Leadership from the Inside Out,” Kevin Cashman describes how too often people allow themselves to be overcome by busyness. We are unhealthily attached to our smartphones, and too caught up and distracted to take the necessary time to sift through life’s complexity and find purpose. Many Millennial employees are unconvinced that excessive work demands are worth the sacrifices to their personal life. In fact, Millennials know what the research shows: to be productive and engaged, employees need to find ways to recharge during the day. Organizations that actively seek ways to help people integrate their personal and professional lives will have energized employees who are better able to bring their best selves to work each day. How your organization can support this:
- Deliberately schedule play into the work week. Organizations whose employees engage in high-level thinking (e.g., Google, 3M) deliberately schedule play into the workday; they recognize that adopting a childlike mindset opens people up to alternative ways of thinking. Although play can include physical activities (e.g., setting up a Ping-Pong table in the break room), it is really more of a mindset; the key is that employees need to feel safe about pursuing occasional tangential interests.
- Create an environment that positions people to do their best work.
Some people need quiet space to allow for focus and concentration, while others benefit from collaboration. Create spaces that allow people to work well, but also to play and relax. Consider repurposing a meeting room into relaxation space for reflection, meditation or short naps.
- Support breaks and vacations. In his book, “The Best Place to Work,” Ron Friedman asserts that people have a biological need for rest that’s as strong as our need for food and water. Yet personal time – including vacations – has become infected with work through smartphone technology that compels many to check company email frequently. FullContact, a progressive software company in Denver, recognizes the importance of rest. In 2012, they implemented a program that pays each employee $7,500 to take their family on vacation each year. However, in order to receive the bonus, employees must first agree to three strict provisions, as outlined on the blog of their CEO, Bart Lorang:
- You have to go on vacation, or you don’t get the money.
- You must disconnect.
- You can’t work while on vacation.
Lorang explains how this seemingly expensive program benefits the entire organization: “It’s an investment into the long-term happiness of our employees, which in turn leads to the sustained growth of the company.” Other simple things you can do to support the human need for rest include leaving time between meetings, encouraging employees to take a quick walk, and providing time for socialization.

3. Generously support professional AND personal development.
Millennials are looking for organizations that truly value them, not just as cogs in the company machine, but as thinking, evolving, complex people. Show your Millennials (and all of your employees!) that you value everything they bring to the table by helping them take advantage of developmental opportunities that will broaden their horizons. Whether they choose courses or conferences to enhance their professional skills, or can benefit from programs or a professional coach to help them grow personally (a cornerstone of highly effective organizations), employees who are supported to grow are much more likely to be engaged at work.

Strengthening Culture is a Win-Win
When addressing Millennials’ unique needs, beware of simply creating more programs and policies. It’s essential to shift and strengthen the underlying workplace culture in order to support employee autonomy and personal growth and development, and to attract Millennials to your organization. The good news is that by addressing Millennials’ needs, you will foster a healthier and more productive workplace for all of your employees — and improve organizational performance. For more on building a thriving workplace culture and improving employee morale, check out this article. Do you have a healthy workplace culture? Find out now with this quick quiz.
https://salveopartners.com/how-to-become-a-place-where-millennials-want-to-work/
We work at the intersection of neuroscience and neurotechnology by developing novel measurement and analysis methods for studying human brain structure and function, as well as applying these methods to address important and challenging research questions in both basic and clinical neuroscience. We extensively employ neuromagnetic measurements of brain activity, which give a temporally detailed picture of activation dynamics.

[Figure: Comparison of conventional MEG (upper row) and OPM-based MEG (lower row) used in the high-resolution MEG project; an OPM-MEG measurement session.]
[Figure: Joint estimation of directed functional connectivity (left) and source locations (right) from MEG data acquired while viewing pictures of human faces.]
[Figure: An MEG/EEG-based BCI system employing a selective attention task with possible neurofeedback; robust selective attention target detection for establishing communication.]

The project goal is to restore lost communication ability for completely locked-in patients. Although functional brain imaging can uncover activity patterns encompassing multiple brain regions, the interplay of these regions is usually not directly addressed. Yet, neural processing supporting cognition may dynamically recruit brain functions implemented at distinct cortical regions. These short-lived networks are formed by dynamic functional connections between the participating regions. Electro- and magnetoencephalography (EEG/MEG) measure electric brain activity with high temporal resolution. However, neither method readily provides us with a network structure; they merely show the aggregated activity of all contributing regions. The challenge is to decompose the recorded MEG/EEG data into a sparse and dynamic set of brain signal sources. The goal of this project is to tackle this challenge in a new way, surpassing current functional connectivity estimation methods, and to enable real-time tracking of these networks to allow their use in neurofeedback experiments. In the context of this project, we also develop advanced modeling of invasive intracranial EEG measurements.

We study several aspects of human cognition using MEG and EEG combined with machine learning. Many of our approaches build on brain-signal features specifically reflecting attentive and conscious processing. We also develop brain–computer interfaces both for “closed-loop” neuroscientific experimentation as well as for future clinical applications.

Recent preprints and publications:
Zhigalov, A., Heinilä, E., Parviainen, T., Parkkonen, L., & Hyvärinen, A. (2019). Decoding attentional states for neurofeedback: Mindfulness vs. wandering thoughts. NeuroImage, 185, 565-574.
Zubarev, I., & Parkkonen, L. (2018). Evidence for a general performance-monitoring system in the human brain. Human Brain Mapping, 39(11), 4322-4333.
Halme, H. L., & Parkkonen, L. (2018). Across-subject offline decoding of motor imagery from MEG and EEG. Scientific Reports, 8(10087).
Zubarev, I., Zetter, R., Halme, H. L., & Parkkonen, L. (2018). Robust and highly adaptable brain-computer interface with convolutional net architecture based on a generative model of neuromagnetic measurements. arXiv preprint arXiv:1805.10981.

Functional brain imaging methods hold promise for producing valuable information for the diagnostics of many brain disorders; however, the application of these methods is hampered by the lack of a normative database.
To this end, we are aggregating a large number of magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) datasets into such a database for the application of machine-learning methods to derive biomarkers that would be indicative of brain disease states.

We exploit recent advances in a novel magnetic sensor technology—optical magnetometry—to construct a new kind of MEG system that allows capturing cerebral magnetic fields within millimetres from the scalp. Our simulations show that this proximity leads to up to a 5-fold increase in signal amplitude and an order-of-magnitude improvement in spatial resolution compared to conventional SQUID-based MEG. A high-resolution MEG (HRMEG) system based on optical magnetometers should therefore enable non-invasive recordings of cortical activity at unprecedented sensitivity and level of detail, which we capitalize on by characterizing cortical responses, particularly gamma oscillations, during complex cognitive tasks. Read more and check the publications of this project here.

Most neuroimaging studies of human social cognition have focused on brain activity of single subjects. “Two-person neuroimaging” refers to simultaneous recordings of brain signals from two subjects involved in social interaction. We have developed a set-up that connects two MEG systems in different laboratories, allowing the subjects to interact, and we have applied this set-up to study brain functions subserving social interaction.

Recent preprints and publications:
Hari, R., Henriksson, L., Malinen, S., & Parkkonen, L. (2015). Centrality of social interaction in human brain function. Neuron, 88(1), 181-193.

We are contributing to open-source MEG/EEG analysis software packages.
https://www.aalto.fi/en/department-of-neuroscience-and-biomedical-engineering/neuroimaging-methods-group-nimeg
Steam, Tracks, Trouble & Riddles (S.T.T.&R.) is a single-player adventure mod based on the Source engine (Orange Box) by Valve. It combines two types of gameplay: an adventure game with complex, 3D-interactive riddles in a fantastic environment, and a simulator for track-based vehicles. Easy Rider (the big robot, formerly a metal barrel) and Happy Jack (living in the head of Easy Rider, trying to help him with neurological capabilities) will help and guide the player, trying to give some useful hints to solve the riddles.
https://www.moddb.com/mods/steam-tracks-trouble-riddles/images/stt-r-characters-easy-rider-and-happy-jack4
The National Ballet of Canada's "A Disembodied Voice" features a score by John Oswald for the recorded voice of Glenn Gould, robot piano, ghost pianist, and orchestra. The piece premiered in the programme entitled "Inspired by Gould," which ran from November 20th to the 27th at the Hummingbird Centre in Toronto. The half-hour composition is in 10 sections, each of which takes a different angle on Gould's musical preoccupations. Several technological innovations were utilized by a team under the direction of Oswald, which researched and created materials during most of 1999. A major bit of sonic archeology was the dissecting of Glenn Gould's 1981 recording of the Aria of the Goldberg Variations. First the piano was filtered out of the recording as much as possible, leaving Gould's inadvertent vocalizations as a more prominent element. Christopher Butterfield in Victoria made a phonetic and music notation transcription of this vocal line. Where there was difficulty ascertaining a sound, the team studied a videotape of the Gould recording session to see what Glenn's mouth was doing. Eventually Christopher's brother, opera tenor Benjamin Butterfield, was recorded reproducing a version of Christopher's transcription, with his own revisions. Christopher was also recorded, and several takes of his version were layered in combination with Benjamin's solo version to produce a chorus of Glenns near the end of the Aria. To this Oswald added a klangfarbenmelodie-like arrangement (a technique in which a melody line is passed from instrument to instrument, changing its timbral colour over time) for live orchestra, which gradually added clues as to the source. For the performances, the monophonic voice of Gould "walked," via routing through several hidden speakers, from behind a canopied area on stage to the orchestra pit, where it was joined by the chorus. Meanwhile in Toronto, Ernest Cholakis, who is best known for designing the groove templates found in various sequencers, worked on making a very precise MIDI transcription of Gould's piano performance of the Aria. This transcription was designed to be played back on a Yamaha Disklavier very similar to the piano Gould played for the original recording. The result was a reproduction of the piece, minus Gould's voice, which is much more realistic than any hi-fi system could ever recreate using the original recording. Audiences remarked on the ghostly presence of this, in a sense, live acoustic piano recreation. These two Aria derivations were the bookends of a composition which featured abstractions and plunderphonic derivations of Gould's own music and some of his favorite pieces, including works by Richard Wagner and Petula Clark. Bach and Mozart clash in one section, and the Disklavier was a featured soloist throughout.
http://www.plunderphonics.com/xhtml/xglenngould.html
Fact Box: Magnetar; NASA's observatory finds Magnetar "SGR 0418"
Scientists at NASA have discovered an exotic neutron star, a magnetar named SGR 0418, by using the Chandra X-ray Observatory and other satellites.

What are Magnetars?
Magnetars are the dense remains of dead stars that erupt sporadically with bursts of high-energy radiation. A magnetar is a type of neutron star with an extremely powerful magnetic field (ten to a thousand times stronger than that of the average neutron star), the decay of which powers the emission of high-energy electromagnetic radiation, particularly X-rays and gamma rays. Because the only plausible source for the energy emitted in these outbursts is the magnetic energy stored in the star, these objects are called “magnetars.”

What is a Neutron Star and how does it form?
A neutron star is a type of stellar remnant that can result from the gravitational collapse of a massive star during a Type II, Type Ib or Type Ic supernova event. Such stars are composed almost entirely of neutrons, which are subatomic particles without net electrical charge and with slightly larger mass than protons. When a massive star runs out of fuel, its core collapses to form a neutron star, an ultra-dense object about 16 to 24 kilometres wide. The gravitational energy released in this process blows the outer layers away in a supernova explosion and leaves the neutron star behind.
https://www.gktoday.in/current-affairs/fact-box-magnetar-nasas-observatory-finds-magnetar-sgr-0418/
Insomnia in College Students: Causes and Effects

Between school work, hobbies, parties, and sometimes even jobs, most college students don’t get as much sleep as they need. Many don’t even know that they’re sleep deprived. It can be fun for a while, but there are some serious consequences to poor sleep, especially for those who develop chronic insomnia. I’m going to summarize research on college students and sleep so we can get to the bottom of sleep deprivation’s effects on students and how it should be treated.

How Common is Insomnia for Students?
A systematic review of studies on student sleeping habits found that approximately 18.5% of students meet the criteria for insomnia (1). This may vary by school, major, and school year, as some studies have found insomnia rates as low as 9%, and others have found rates of up to 40%. Insomnia is the most extreme case of poor sleep, but other studies show that approximately 60% of college students have poor sleep quality, which also has side effects (2).

Causes of Student Insomnia
The main causes of insomnia for students are poor sleep hygiene, mental health issues, and stress. One large-scale study of college students had 1,125 university students complete an online survey about their sleep habits (2). Responses were modelled against Pittsburgh Sleep Quality Index (PSQI) scores to see which factors predicted poorer sleep. You can get your own PSQI score online here if you’d like. Surprisingly, factors like alcohol and caffeine consumption were not significant variables. The variables that correlated with poor sleep were:
- Tension (anxiety)
- Stress
- Depression
- Anger
Note that this doesn’t mean that caffeine doesn’t cause insomnia; it just means that the average student in this study didn’t drink enough for it to be a significant factor for poor sleepers. Other studies have found similar results. In particular, students with insomnia are approximately twice as likely to have clinically significant anxiety (3). This could mean that sleep trouble causes anxiety (which it can), but since anxiety, depression, and insomnia are all bidirectionally related, one can lead to the other (4). So regardless of which one came first, both anxiety and insomnia can contribute to each other and cause side effects.

Student Sleep Hygiene
The most common sleep hygiene issues that appear to cause sleep problems in students are:
- One study of sleep hygiene in students found that poor sleep scheduling was the best predictor of insomnia severity (5). In other words, students that had irregular sleep and wake times tended to have worse insomnia symptoms.
- Another study found that variable sleep schedules (same as above), going to bed dehydrated, and environmental noise were the sleep hygiene factors that significantly correlated with poor sleep quality (6).
- A study of medical students only found that watching TV in bed was significantly associated with poor sleep quality (7).
Research on student sleep hygiene and its impact is mixed, so take those results with some skepticism.

Summary: The main causes of insomnia in students are stress, anxiety, depression, and certain sleep hygiene components like not having a consistent sleep schedule or sleeping in a noisy environment.

Effects of Sleep Deprivation on Students
Okay, so the majority of college students are sleep deprived to some degree, and we know what causes it. Should we care? Of course that’s rhetorical, as we know that there are serious short- and long-term side effects of insomnia.
For students, the consequences can end up having a major impact on the rest of their lives. Here’s what the research says:
- College students with poor sleep are more at risk of developing mental disorders (e.g. anxiety disorders, depression, etc.), which can lead to lower academic performance and higher rates of dropout and underemployment (8).
- Students with insomnia were more likely to experience excessive daytime sleepiness and clinically significant anxiety (9).
- Undergraduate students at a university with chronic insomnia had higher levels of anxiety, depression, and obsessive-compulsiveness, among other mental disorders (10). See the graph below, where PWI stands for “person with insomnia” and PWOI stands for “person without insomnia.”
Students with insomnia basically had twice the odds of developing every serious mental health condition in that study.

Does Insomnia Affect Student Grades?
Research appears to show that a small level of sleep deprivation does not significantly affect grades compared to normal sleepers, but getting less than 5 hours of sleep regularly does. The results of research on sleep quality and academic performance are a bit mixed, but it’s pretty clear that it’s a case of lacking sufficient research in this area. A few studies show a clear connection between insomnia and poor academic performance:
- Insomnia symptoms were associated with poorer school performance, particularly in concentration and attention scores (11).
- Medical and paramedic students in Jordan who had a low risk of insomnia had higher GPAs (12).
- Norwegian college and university students were much more likely to fail examinations if they slept less than 6 hours or more than 10 hours (13).
Those are the common-sense results that most people would expect, but there are also studies that show the opposite of what we’d expect:
- A study that followed 1,074 college students for a year found no correlation between chronic insomnia and grade point average (14).
- A study of Ethiopian students found that 60% of students had insomnia, but “no significant association between insomnia and academic performance” (15).
These studies had limitations, and most importantly they are all correlational. The most likely explanation for the study that showed that insomniacs had the same average GPA is that students who get really stressed about school typically care a lot, which leads to poor sleep but also extra studying. To really conclude anything, we’d need an interventional study to see if fixing sleep issues in those students improved their GPAs further.

Summary: While more research is needed to quantify the impact of sleep deprivation on students, insomnia will affect energy levels, motivation, concentration, and other factors that will lead to poorer academic and social performance.

Treatment Options for Students With Sleep Trouble
Unfortunately, many students turn to over-the-counter medication (e.g. Tylenol PM) to try to treat insomnia, or even worse, alcohol at night and caffeine during the day (16). These might be effective as crutches in the short term, but they ultimately don’t fix sleep issues and there will be other side effects. Treating insomnia in students isn’t very different from treating it in the general population. The most effective treatments are (17):
- Cognitive-behavioral therapy for insomnia (CBT-I)
- Sleep hygiene improvement
- Noise blocking technique (e.g.
white noise)
While some types of therapies take a long time to have an effect, CBT-I often improves insomnia symptoms within weeks by decreasing stress and sleep anxiety levels. Note that insomnia treatment should be guided by a doctor, as there could be other causes that need to be addressed (like a nutritional deficiency or a comorbidity). They can also prescribe proper sleep medication if appropriate for your situation. There’s some evidence that relaxation therapy (sort of like a guided meditation) can help improve sleep quality, but it’s far less effective than the solutions above (18).

Summary: Students with insomnia should see a doctor, who will check for any other obvious causes. For students with high levels of stress or mental disorders, a combination of CBT-I and sleep hygiene improvement is typically the best treatment plan.

Summary: College Students and Insomnia
The majority of college and university students are poor sleepers. This not only affects their academic and social lives during school, but those results can impact their career and personal life for decades to come. Students with insomnia should start by fixing their sleep hygiene and seeing a doctor to determine whether CBT-I or another treatment option is best for their situation.

References
- A systematic review of studies on the prevalence of Insomnia in university students
- Sleep patterns and predictors of disturbed sleep in a large population of college students
- Insomnia and Relationship with Anxiety in University Students
- A bidirectional relationship between anxiety and depression, and insomnia
- Associations Between Sleep Hygiene and Insomnia Severity in College Students
- Relationship of Sleep Hygiene Awareness, Sleep Hygiene Practices, and Sleep Quality in University Students
- Association Between Sleep Hygiene and Sleep Quality in Medical Students
- Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication
- Insomnia and Relationship with Anxiety in University Students
- Insomnia and Mental Health in College Students
- The relationship between insomnia symptoms and school performance among 4966 adolescents
- Insomnia among Medical and Paramedical Students in Jordan
- Insomnia, sleep duration and academic performance
- Epidemiology of Insomnia in College Students: Relationship With Mental Health, Quality of Life, and Substance Use Difficulties
- Insomnia and Its Temporal Association with Academic Performance among University Students
- Over-the-counter medication and herbal or dietary supplement use in college
- A Pilot Randomized Controlled Trial of the Effects of Cognitive-Behavioral Therapy for Insomnia on Sleep and Daytime Functioning in College Students
- Relaxation therapy for insomnia

Medical Disclaimer: The information on SnoozeUniversity.com is not intended to be a substitute for physician or other qualified care. We simply aim to inform people struggling with sleep issues about the nature of their condition and/or prescribed treatment.
https://snoozeuniversity.com/insomnia-students/
In this section, we study the nature of the gravitational force for objects as small as ourselves and for systems as massive as entire galaxies. We show how the gravitational force affects objects on Earth and the motion of the Universe itself. Gravity was the first force to be postulated as an action-at-a-distance force; that is, objects exert a gravitational force on one another without physical contact, and that force falls to zero only at an infinite distance. Earth exerts a gravitational force on you, but so do our Sun, the Milky Way galaxy, and the billions of galaxies, like those shown above, which are so distant that we cannot see them with the naked eye. Our visible Universe contains billions of galaxies, whose very existence is due to the force of gravity. Gravity is ultimately responsible for the energy output of all stars—initiating thermonuclear reactions in stars, allowing the Sun to heat Earth, and making galaxies visible from unfathomable distances.

All masses attract one another with a gravitational force proportional to their masses and inversely proportional to the square of the distance between them. Spherically symmetrical masses can be treated as if all their mass were located at the center. Nonsymmetrical objects can be treated as if their mass were concentrated at their center of mass, provided their distance from other masses is large compared to their size. The weight of an object is the gravitational attraction between Earth and the object. The gravitational field is represented as lines that indicate the direction of the gravitational force; the line spacing indicates the strength of the field. Apparent weight differs from actual weight due to the acceleration of the object.

The acceleration due to gravity changes as we move away from Earth, and the expression for gravitational potential energy must reflect this change. The total energy of a system is the sum of kinetic and gravitational potential energy, and this total energy is conserved in orbital motion. Objects with total energy less than zero are bound; those with zero or greater are unbounded.

Orbital velocities are determined by the mass of the body being orbited and the distance from the center of that body, and not by the mass of a much smaller orbiting object. The period of the orbit is likewise independent of the orbiting object’s mass. Bodies of comparable masses orbit about their common center of mass, and their velocities and periods should be determined from Newton’s second law and law of gravitation.

Johannes Kepler carefully analyzed the positions in the sky of all the known planets and the Moon, plotting their positions at regular intervals of time. From this analysis, he formulated three laws: Kepler’s first law states that every planet moves along an ellipse. Kepler’s second law states that a planet sweeps out equal areas in equal times. Kepler’s third law states that the square of the period is proportional to the cube of the semi-major axis of the orbit.

Earth’s tides are caused by the difference in gravitational forces from the Moon and the Sun on the different sides of Earth. Spring (high) tides occur when Earth, the Moon, and the Sun are aligned; neap (low) tides occur when the Sun and the Moon form a right triangle with Earth. Tidal forces can create internal heating, changes in orbital motion, and even destruction of orbiting bodies. According to the theory of general relativity, gravity is the result of distortions in space-time created by mass and energy.
The principle of equivalence states that gravity and acceleration are indistinguishable in comparable circumstances: both mass and acceleration distort space-time. Black holes, the result of gravitational collapse, are singularities with an event horizon whose radius is proportional to their mass.

Thumbnail: Most of the dots you see in this image are not stars, but galaxies. (credit: modification of work by NASA)
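The bound/unbound criterion stated above lends itself to a quick numerical check. Here is a minimal Python sketch (not part of the source text; the satellite values are illustrative, roughly ISS-like):

G = 6.674e-11        # universal gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # mass of Earth, kg

def total_orbital_energy(m, v, r):
    """Kinetic plus gravitational potential energy of a mass m (kg)
    moving at speed v (m/s) at a distance r (m) from Earth's center."""
    return 0.5 * m * v**2 - G * M_EARTH * m / r

# Illustrative values: a 420-tonne satellite at 7,660 m/s, 6.78e6 m from Earth's center
E = total_orbital_energy(m=4.2e5, v=7660, r=6.78e6)
print(f"E = {E:.2e} J -> {'bound' if E < 0 else 'unbounded'}")  # negative, so bound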
https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_University_Physics_(OpenStax)/Map%3A_University_Physics_I_-_Mechanics%2C_Sound%2C_Oscillations%2C_and_Waves_(OpenStax)/13%3A_Gravitation
As the distance between two masses increases, the force of gravity between them decreases with the square of that distance. This means that a doubling of the distance results in a quartering of the gravitational force between the masses.

Calculating the Change in the Force of Gravity
Calculate the resulting force of gravitational attraction between two masses if one of the masses were to double and the distance between them were to triple.

WHAT'S THE TRICK?
The original force of attraction can be calculated using Newton's law of gravity, F = Gm₁m₂/r². Doubling one mass multiplies the force by 2, while tripling the distance divides it by 3² = 9, so the new force is 2/9 of the original.

GRAVITATIONAL FIELD
The influence, or alteration, of space surrounding a mass is known as a gravitational field. The gravitational field of Earth can be visually represented as several vector arrows pointing toward the surface of Earth, labeled g, as shown in Figure 9.1. The surface gravity field for Earth, g, is 10 meters per second squared. When an object of mass m is placed in Earth's gravity field, it experiences a force of gravity, Fg, in the same direction as the gravity field.

Figure 9.1. The gravitational field of Earth

The effect on mass m in the gravity field can be solved as a force problem: Fg = ma = mg, so a = g.

Finding Surface Gravity
The gravity field of any planet is a function of the mass of the planet and the distance of the planet's surface from its center. The exact relationship can be derived by equating two formulas for the force of gravity. When a mass m rests on the surface of Earth, the force of gravity can be determined using either the weight formula or Newton's law of gravitation: mg = GMm/R², which gives g = GM/R². Although the formula for g was derived on the surface of Earth, it can be generalized to solve for gravity on any planet or at a point in space near any planet.

Calculating the Surface Gravity of the Moon
Calculate the surface gravity of the Moon. The Moon has a mass of 7.36 × 10²² kilograms and a radius of 1.74 × 10⁶ meters.

WHAT'S THE TRICK?
Use the formula for finding the gravity of a planet, g = GM/R², and substitute in the values for the Moon. The SAT Subject Test in Physics does not allow you to use a calculator, and it is unlikely to present a problem requiring calculations this involved. This example serves to demonstrate the universality of the equation for finding the gravitational field and the acceleration of gravity for any celestial object.

CIRCULAR ORBITS
When a ball is thrown horizontally on Earth, it will follow a parabolic path toward the ground. During its flight, the ball simultaneously experiences a constant downward acceleration and a constant forward velocity. To an observer, it appears that the ball is moving in a parabola relative to a flat Earth. Newton hypothesized that if a ball could be thrown with sufficient forward velocity, it would travel so quickly that the acceleration pulling it downward would not bring it to Earth. This is because the spherical Earth would curve out of the ball's way as it fell. Today, satellites in orbit are able to do this with speeds exceeding 7,900 meters per second (17,500 miles per hour). As discussed in Chapter 6, "Circular Motion," objects experiencing an acceleration perpendicular to their motion will move in a circle. The magnitude of their acceleration remains constant. However, their direction is constantly changing so that it always points toward the center of rotation.

Calculating Tangential Orbital Velocities
Figure 9.2. Satellite of mass m orbiting central body M

Equations for gravity and circular motion can be combined to determine the velocity of a satellite in a circular orbit.
In Chapter 7, the following circular motion equations for centripetal acceleration and centripetal force were introduced. Setting the gravitational force equal to the centripetal force, GMm/r² = mv²/r, gives the orbital speed v = √(GM/r).

Calculating a Satellite's Orbital Velocity
Calculate the speed of a satellite orbiting Mars at a height of 3.57 × 10⁶ meters above the center of the planet. The mass of Mars is 6.42 × 10²³ kilograms.

WHAT'S THE TRICK?
Use the general equation for finding the velocity of a satellite above a planet of known mass, M, at radius r above the planet's center: v = √(GM/r).

Understanding the formulas associated with orbital motion will help you answer many conceptual questions concerning changing variables. Consider the key formulas discussed thus far.

Determining the Effect of Changing Radius on an Orbiting Satellite
A satellite orbiting at a speed of v and a radius of r above the center of a planet climbs to a radius of 2r. What is the satellite's new orbital speed?

WHAT'S THE TRICK?
Orbital speed is determined by v = √(GM/r). Doubling the radius halves the quantity under the square root, so the new speed is v/√2, or about 0.71v.

KEPLER'S LAWS
Seventeenth-century astronomer Johannes Kepler deduced three laws describing the motion of planets around the Sun.
1. Planets orbit the Sun along an elliptical path, where the Sun is at one of the two foci of the ellipse, as shown in Figure 9.3.
2. A line drawn from the Sun to a planet sweeps out an equal area during an equal interval of time. In Figure 9.3, area1 = area2.
3. The square of the orbital period is proportional to the cube of the orbital radius.

Figure 9.3. Kepler's laws

These laws were deduced from Kepler's analysis of observations made by Tycho Brahe. They were considered controversial by several of Kepler's contemporary astronomers. The first law was particularly controversial, as most believed that planets orbited in perfect circles around the Sun. Kepler's second law illustrates that when a planet is closer to the Sun (at perihelion), it moves at a faster orbital speed than when it is at its farthest point from the Sun (at aphelion). This allows the imaginary line between the Sun and the planet to sweep out an equal area during an equal interval of time. The third law states a mathematical relationship between orbital period and orbital radius: T² ∝ r³. Committing this relationship to memory will help you answer conceptual questions regarding the effect of changing the radius on the period. This relationship works for all objects orbiting a common central mass.
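Because the printed solutions to these examples did not survive this transcription, here is a short Python sketch (mine, not the book's) that reproduces the numbers and the scaling tricks above:

import math

G = 6.674e-11  # universal gravitational constant, N*m^2/kg^2

# Surface gravity of the Moon: g = G*M / R^2
g_moon = G * 7.36e22 / (1.74e6 ** 2)
print(f"g_moon = {g_moon:.2f} m/s^2")           # ~1.62 m/s^2, about one sixth of Earth's

# Orbital speed around Mars at r = 3.57e6 m: v = sqrt(G*M / r)
v_sat = math.sqrt(G * 6.42e23 / 3.57e6)
print(f"v_sat = {v_sat:.0f} m/s")               # ~3460 m/s

# Scaling tricks: double one mass and triple r, or double the orbital radius
print(f"force ratio = {2 / 3**2:.3f}")          # F -> (2/9) F
print(f"speed ratio = {1 / math.sqrt(2):.3f}")  # v -> v / sqrt(2)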
http://www.bestsatprepbook.com/2017/09/sat-physics-gravity-universal-gravity.html
Q: Plotting the chi square distribution with TikZ

I have tried without success to plot the curve of the chi-squared distribution. Is there a generous soul who can come to my rescue?

A: If you can access gnuplot, you can try this. This is an adapted version of a gnuplot demo file.

\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[%
      xlabel = $x$,
      ylabel = {Probability density},
      samples = 200,
      restrict y to domain = 0:0.5,
      domain = 0.01:15]
    \foreach \k in {1,...,8} {%
      \addplot+[mark={}] gnuplot[raw gnuplot] {%
        isint(x) = (int(x)==x);
        log2 = 0.693147180559945;
        chisq(x,k)=k<=0||!isint(k)?1/0:x<=0?0.0:exp((0.5*k-1.0)*log(x)-0.5*x-lgamma(0.5*k)-k*0.5*log2);
        set xrange [1.00000e-5:15.0000];
        set yrange [0.00000:0.500000];
        samples=200;
        plot chisq(x,\k)};
      \addlegendentryexpanded{$k = \k$}}
  \end{axis}
\end{tikzpicture}
\end{document}

A: If you can use PSTricks, then it is easy. Run the example with xelatex.

\documentclass{article}
\usepackage{pst-plot,pst-func}
\begin{document}
\psset{xunit=1.2cm,yunit=10cm,plotpoints=200}
\begin{pspicture*}(-0.75,-0.05)(9.5,.65)
  \multido{\rnue=0.5+0.5,\iblue=0+10}{10}{%
    \psChiIIDist[linewidth=1pt,linecolor=blue!\iblue,nue=\rnue]{0.01}{9}}
  \psaxes[Dy=0.1,ticksize=0 3pt]{->}(0,0)(9.5,.6)
\end{pspicture*}
\end{document}

A: Here is a sketch of a pure TikZ solution. The idea is that the Gamma function is not available in TikZ (TeX), but the values of Gamma(k/2) for k=1,...,8 are simple, so we can "hardcode" them. If somebody wants to put axes and legend, feel free to edit this answer ;)

\documentclass[tikz,border=7mm]{standalone}
\begin{document}
\begin{tikzpicture}[domain=.001:15,samples=200,thick]
  \clip (-1,-1) rectangle (15,10);
  \foreach[count=\k,evaluate={\z=\k>2?"(0,0)--":"";\c=10*\k}]
    \g in {sqrt(pi),1,sqrt(pi)/2,1,3/4*sqrt(pi),2,15/8*sqrt(pi),6}
    \draw[color=blue!\c!red,yscale=30] \z plot (\x,{exp(ln(\x/2)*\k/2-ln(\x)-\x/2-ln(\g))});
\end{tikzpicture}
\end{document}
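As a side note not found in the original thread, the Gamma(k/2) values hardcoded in the pure-TikZ answer can be cross-checked with Python's standard library:

import math

# Gamma(k/2) values hardcoded in the TikZ answer, for k = 1..8
hardcoded = [math.sqrt(math.pi), 1, math.sqrt(math.pi) / 2, 1,
             3 * math.sqrt(math.pi) / 4, 2, 15 * math.sqrt(math.pi) / 8, 6]

for k, value in enumerate(hardcoded, start=1):
    assert math.isclose(value, math.gamma(k / 2)), f"mismatch at k = {k}"
print("all eight Gamma(k/2) values match")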
This study analyzes the growth of the High-purity Isopropyl Alcohol market based on historical, present, and futuristic data, and will provide market players with complete knowledge of the High-purity Isopropyl Alcohol industry. The major market segments, along with their sub-segments, will provide a comprehensive view of the global High-purity Isopropyl Alcohol market. This report on the global High-purity Isopropyl Alcohol market is detailed and accurate, giving new and existing entrants a clear idea of what will help them navigate this competitive market. The report details all the macro and micro factors that affect the growth of the market. This research report provides an opportunity to look at the recent trends affecting the market and the growth outlook of the global High-purity Isopropyl Alcohol market. The following are frequently asked questions related to the High-purity Isopropyl Alcohol market: What are the characteristics of High-purity Isopropyl Alcohol market growth? What are the basic trends in the market? What will be the growth conditions and the market size of the High-purity Isopropyl Alcohol market by 2027? What are the major hurdles facing High-purity Isopropyl Alcohol market growth? What opportunities and risk factors do the top players have to face? A thorough study of the High-purity Isopropyl Alcohol market will provide valuable insights for planning business strategies accordingly.
Q: Exponential growth of population. How to go back in time?

My daughter has this question as homework: If a population is known to double every 12 years and we know that the population in 2000 was 100,000 individuals: a) What is the analytic expression of this growth according to the number of years? b) What was the population in 1995?

The only formula she has so far is this one: P(n)=P(i)(1+rate)^n. So, I guess the analytic expression would be: Population in 12 years = 100,000*(1+rate)^12. This would lead me to a rate of 0.594631. That part, I'm not sure about, but still, it makes sense. But then, how to get back in time until 1995? Trying and guessing, I evaluated the same expression with the same rate for 1988 (12 years earlier), as the population was supposed to be about 50,000 individuals (half of the 2000 population). If my analytic expression and rate here above are right, I should then have this: 50,000 = 100,000*(1+0.594631)^-1.5. That exponent is beyond me, as I don't see any relationship between -1.5 and 12 years in the past. So this doesn't give me any clue on how to find, based on the initial expression, how many people were counted in 1995... Something tells me that I'm completely wrong from the beginning. I searched a lot on the internet, but when it concerns growth of population, I just see expressions regarding populations that double or predictions for the future, nothing related to prediction of the past (that includes the above expression, that is). I guess that going back in time involves the use of logs, but still, I have the base (rate) and I'm still unsure about how to feed the 5 years back in time as an input to get an answer. Any help would be appreciated. Thanks.

A: Let's start with the given formula. $$P(\text{final year}) = P(\text{initial year}) \cdot (1+r)^n$$ where $n= \text{final year}-\text{initial year}$. You are correct in that next you should use the information that the population doubles every 12 years: $$P(2012)=P(2000)\cdot(1+r)^{12}$$ $$200{,}000=100{,}000\cdot(1+r)^{12}$$ $$2=(1+r)^{12}$$ Therefore $r = \sqrt[12]{2}-1=0.\mathbf{0}594631$. You are mostly fine up to here, but for some reason you started to guess and check. Just use your formula: $$n = 1995 - 2000 = -5$$ $$P(1995) = P(2000)\cdot 1.0594631^{-5} = 100{,}000 \cdot 1.0594631^{-5} \approx 74{,}915$$ If you are uncomfortable with negative exponents, or with the fact that the initial year is after the final year, make 1995 the initial year and 2000 the final year; then $$P(2000) = 100{,}000 = P(1995) \cdot 1.0594631^{5}$$ $$P(1995) = \frac{100{,}000}{1.0594631^{5}}\approx 74{,}915$$
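For readers who want to verify the accepted answer numerically, here is a tiny Python check (mine, not from the original thread):

r = 2 ** (1 / 12) - 1              # annual rate implied by doubling every 12 years
p_2000 = 100_000
p_1995 = p_2000 * (1 + r) ** -5    # five years before 2000

print(f"r = {r:.7f}")              # 0.0594631
print(f"P(1995) = {p_1995:.0f}")   # ~74915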
The classical music period in Western music is generally placed between 1750 and 1820, as an advancement upon the baroque period. The term classical music is also used in an idiomatic sense as an alternative expression for Western art music, describing diverse music styles, particularly from the 16th to 19th centuries. The era is known for having produced popular composers, including Joseph Haydn, Wolfgang Mozart, and Ludwig van Beethoven (Records, 2016). The classical period resulted in the introduction of a style known as the sonata style, which has dominated instrumental composition up to the present time.

Characteristics
Foremost, much significance was accorded to various forms of instrumental music such as the sonata, string quartet, concerto, symphony, and serenade. The sonata style and the concerto advanced to become the most significant forms, and the symphony was developed during this era. The new sonata form was used as a build-up to the first movement of a variety of comprehensive compositions, and it was also used in other movements and solitary sections, such as overtures. Moreover, composers utilized characteristic tempos, such as attention-getting fanfares and the funeral-march tempo, which are important in establishing and unifying the sound of a single movement. The concerto grosso, which involves more than one soloist, was slowly replaced by the solo concerto, which involves a single soloist, and composers therefore began placing the utmost significance on a specific soloist's ability to show off their talents.

Another major characteristic was an emphasis on variety and contrast within a composition as compared to other periods. Various keys, tunes, beats, and crescendos, along with frequent mood changes, were more common in this era than in the baroque period, and pieces of music were typically shorter than those of the baroque age. Compositions in the classical music period consisted of straightforward phrases and clearly marked rhythms (Simonton, 2010). The orchestra's size and range increased, the harpsichord continuo fell out of use, and the woodwinds developed into a self-reliant section. As a solo instrument, the piano replaced the harpsichord. The texture of the music played by the early piano was subtle, frequently accompanied by the Alberti bass, but it became more resounding and powerful over time.

The other major aspect is that classical music has a light and unblemished quality and is less intricate. The music is principally homophonic, meaning a melody above chordal accompaniment. The classical period also utilized the galant style, drawn in opposition to the frameworks of the baroque era, emphasizing light sophistication as opposed to the baroque's dignified gravity and imposing magnificence. The lightening of the texture made instrumental composition more significant.

Conclusion
The classical era set the groundwork for the more personal exploration of the Romantic period. The key advantage of normalizing systems within the period is that they serve as a starting point for later innovations; classical composers were already playing around with the very forms they had organized during the latter stages of the period. For instance, the enhanced orchestra turned out to be a tool for intense expression.
Therefore, with some assistance from Beethoven, it became a catalyst for ushering in the Romantic period.

References
Simonton, D. K. (2010). Emotion and composition in classical music. In Handbook of music and emotion: Theory, research, applications (p. 347).
https://www.essay-writing.com/samples/classical-musical-period/
Company Description: Stey is a new solution to urban living, offering modern urban professionals a smarter, more connected and exciting way of living. Stey delivers efficiency, flexibility and freshness via modern technology. Combining traditional hospitality with the smartest digital solutions, we have created an innovative balance of home, co-living space and hotel. Our designers have created an integrated “re-renting” system that enables tenants to share their homes and save on rent - providing the ultimate in freedom and mobility. In a distinctly Swedish way, Stey expresses a philosophy of environmental protection, energy efficiency and sustainability through a keen eye for design features and a meticulous selection of materials and furniture - which carries over into straightforward, reliable operations management.
https://www.fbcs.fi/membership-directory/corporate/94928
We strive to maintain a safe, pedestrian-friendly environment where visitors of all ages and physical abilities can enjoy the benefits of the Topeka Zoo. This guide addresses many issues about accessibility. However, if you have other needs or questions, please contact us at (785) 368-9180 and ask to speak to the guest service manager. We encourage you to call at least one week before your trip for the best possible assistance. Once on grounds, please feel free to discuss any special needs you have with Zoo employees. In light of the changing needs of our guests and other developments, we reserve the right to modify this guide and our accessibility policies as appropriate.

Admission
Regular admission applies to all guests; service animals are not required to purchase a ticket.

Drinking Fountains
Drinking fountains accessible to guests are located throughout the facility. Drinking fountains are located near the Kansas Carnivores exhibit, the log cabin restroom near the Children’s Zoo, the lions pride restroom and inside the Gary K. Clarke Living Classroom. Fountains located at restrooms are seasonal. Cups of water may also be requested at the Grazer’s Cafe.

First Aid
First Aid boxes are located at admissions, the Gary K. Clarke Living Classroom, the log cabin restroom near the Children’s Zoo and the lions pride restroom. The AED is located inside the Animals and Man building. If at any time you need immediate assistance, please ask any Zoo employee or call 911.

Guests with Limited Mobility
Please keep in mind that Zoo employees are neither trained nor permitted to lift guests. A guest requiring such physical assistance should plan to visit the Zoo with an attendant.

Parking
Accessible parking is available in our parking lot on a first-come, first-served basis. A valid disability parking placard or license plate is required. Vehicles parked in marked spaces for disabled access without the appropriate placard in view or a disabled-access license plate can be ticketed by local or park police. Be sure to display your placard issued by an appropriate government motor vehicle agency.

Wheelchairs and Other Power-Driven Mobility Devices
The Zoo is widely accessible to guests using both manual and electric wheelchairs. Consistent with federal guidelines, we define “wheelchairs” as devices designed primarily for use by individuals with mobility disabilities. We advise guests using wheelchairs to consult the Zoo’s map to determine which areas may be challenging. The Topeka Zoo has a limited number of wheelchairs and motorized chairs available on a first-come, first-served basis. (*)

The Zoo accommodates the use of some Other Power-Driven Mobility Devices (OPDMDs), which are vehicles that are not wheelchairs but rather are electric devices designed primarily for use by individuals with mobility challenges. In the interest of maintaining a safe and pedestrian-friendly environment while at the same time ensuring that everyone has a positive experience at the Zoo, we regulate the operation of mobility devices.

Permitted mobility devices include the following: Electric OPDMDs and other single-seat electric scooters with three or more wheels that cannot exceed 6 miles per hour. It is prohibited to operate a mobility device at a speed significantly greater than the flow of the surrounding pedestrian traffic.

Prohibited OPDMDs include the following:
Any device that has or should have a registered license plate
Any device that has only one wheel
Any device that has two tandem wheels (e.g.
two-wheeled electric or motorized scooters)
Any OPDMD that has been structurally or mechanically altered
Any OPDMD that is not listed as acceptable (above)
Any gas-powered vehicle

(*) Manual wheelchairs and electric scooters (an OPDMD) are available for rent just inside the gift shop inside the zoo. Rentals require a picture ID, which is held at the guest services desk; the minimum age to operate an electric OPDMD is 16.

Rental Prices (subject to change)
Manual wheelchairs: $5 per day
Electric OPDMD: $20 per day

Restrooms
All the restrooms in the Zoo are accessible. Each restroom is equipped with a baby changing station. There is an infant nursing stall located in the concessions restroom.

Service Animals
We welcome guests with disabilities who choose to bring their trained service animals into the Zoo. Pets will not be allowed entry. Service animals permitted at the Zoo are dogs. Service animals must remain on a leash or in a harness, be under the control of their handler at all times, and be housebroken. The leash cannot exceed 6 feet in length. If, at any time, your service animal’s behavior is out of control, you will be asked to remove it from the premises. As noted in the Zoo’s Service Animal Policy, the use of service animals may be restricted or limited in certain areas due to the sensitivity of the Zoo’s animal collection. Animals that are used purely to provide emotional support, well-being, comfort, or companionship are not “service animals” under the Americans with Disabilities Act and Zoo policy and shall not be admitted to the Zoo. Please read the Zoo’s Service Animal Policy for further information, which can be found by clicking here.
A great many decisions are made as a result of testing through a logical, methodological system, such as mathematics and science. Other times, math and science are of no use in the decision-making process, and one must delve deeper in order to come up with the “right” solution to the problem at hand. “In order to make ethical decisions, it is important to ask the correct questions, to focus on the main issues, to balance determination with compromise, to debate possibilities, and to make the decision that stems from the recommended steps”. There are, however, additional factors that may serve as an impetus for ethical decision making. These factors may include, but are not limited to, family, friends, religion, community, culture, and law. It is these factors, combined with personal bias, that shape an individual’s concept of right and wrong and, thus, impact ethical decision making.

FRAMEWORK FOR ETHICAL DECISIONS

Moral questions rarely have clear answers, which is why ethics is a difficult topic to discuss. There are usually more questions than answers in ethical decision making. Though there are various guidelines for conduct, people can rely on some basic formulas or rules for the majority of ethical decisions. The guiding formula for moral judgment as presented consists of the following steps:
• First, select the moral principle that best defines the problem in question. Is it a matter of honesty, fairness, equity, or loyalty?
• Second, justify the situation by examining whether it conforms to the selected principle. If not, what aggravating or mitigating factors could make it more or less fitting with the principle?
• Next, if the situation fits the principle exactly, the judgment should be made in exact accordance with the principle.
• Finally, if the situation does not fit the principle exactly, judgment should be made by determining a high or low likelihood that the situation will fit the principle.

1. When attempting to make a decision, analyzing the issue is the best place to begin.
2. The next step is to consider the facts involved. For instance, one should ask: what is beneficial? What is necessary?
3. At this point, it is helpful to consider perspectives that others might hold regarding the issue at hand. If time and place allow, opening the issue or decision up for debate may help. Asking questions of others and receiving feedback from those outside of the decision process may help an individual discover novel solutions or enable unique perspectives to present themselves.

It is not uncommon for decisions to be made without adequate time to stop, ask for input, analyze the information, and think about the repercussions of the decision. It is in these situations that individuals should rely on their personal character as a guide for the decision-making process. After the initial process, multiple candidate decisions may emerge. Each decision-making scenario an individual is confronted with is unique and, thus, requires a thorough look at the options that present themselves.

1. At this point, an individual would be wise to weigh the pros and cons of each potential decision outcome.
2. What are the values of each action compared with the consequences that may follow from each option presented?
3. The application of situational ethics may assist an individual in rationalizing decisions or actions and, thus, assist in the decision-making process.
However, the application of situational ethics may create a double standard or a subjective decision with respect to ethical principles, because each person is unique and what works for one individual or group in one situation may not work for another in a different situation. Although not all-encompassing or correct in all situations, the above outline represents an example of a decision-making process. It serves as a guideline rather than as a standard operating procedure for decision making. Typically, ethical decisions made in routine situations are simple because there is consistency of choice, most often based on established rules and regulations. While each situation is unique, particularly unusual situations often prove more difficult for an individual because of conflicting views of religion, values associated with culture, or variations in law that are foreign to the individual. There are many guidelines for ethical decision making, as evidenced by the guideline stated earlier. Using such a guideline helps an individual organize one’s thoughts and assess moral thinking. In Ethics of Human Communication, Rushworth Kidder discusses various levels of moral thinking. While these guidelines originate from ethical decisions regarding journalism, they can be useful when applied to decisions made within the public service sector. According to Kidder, the four levels of moral thinking are:
1. Ideal decision making, or what is absolutely right or wrong.
2. Practical decision making, or following common rules, such as: “Do not tell lies.”
3. Reflective decision making, or the exceptions to given rules.
4. Political decision making, or making decisions for the good of the larger community.

In the end, ethical decision making and ethical judgment are ultimately the result of choices that should be freely made. Although the decision-making process may often leave more questions than answers, recognizing that there are various ethical perspectives and varying levels of moral thinking, together with the utilization of the aforementioned decision-making strategies, can make the process much more manageable.

Existentialism is a relatively recent concept that emphasizes an individual’s freedom to make decisions free of influence from others. This is often referred to as free will. If we are to discuss the concept of free will, it is necessary to discuss the supporting concepts of determinism and intentionalism. Determinism is a term that applies to the premise that all occurrences, thoughts, and actions are beyond the control of an individual. This concept can cast doubt on the validity or usefulness of individual choice, and may reveal itself in a personal expression or attitude, typically appearing in such remarks as, “It wasn’t in the cards,” “I was destined to fail,” or “It was fate that ….” A more in-depth concept, known as scientific determinism, treats an individual’s actions, character, and decisions as results of genetics or one’s surroundings. More specifically, this concept is grounded in the following:
• An individual’s genetic make-up (specific genes and chromosomes) affects one’s physiological make-up, which directly impacts one’s decision making.
• An individual is a product of his environment. More specifically, climate and geography play a part and may directly influence personality and disposition, which will impact decision making.
• The society in which an individual lives and the cultures present within that society provide the individual with traditions, values, and foundational information that influence one’s actions.
• An individual’s education and experience provide a personal knowledge base from which decisions can be made.

Intentionalism is the term given to the premise that individuals have free will and, thus, are accountable for their actions and the results of their decisions. More specifically:
1. External pressures on individuals are viewed as influences upon them rather than as preexisting determinants. When an individual assesses his or her surroundings and becomes aware of these external pressures, their impact on the decision-making process is considerably reduced.
2. Each individual possesses logic; therefore, it is possible to make use of logical reasoning to assist with ethical decision making.

Based on the above concepts, is persuasion then considered to be unethical? Determinism would answer “yes, persuasion is unethical because it can manipulate a person’s decision,” whereas intentionalism would answer “no, persuasion is not unethical because people are accountable for their decisions.”

Logic is a basic tool in the study of ethics and, as such, it is important to mention some techniques of logical evaluation as they apply to moral decision making. From a logical standpoint, a moral decision is a good one when its “premises” (evidence/reasons) support it, and a bad one when its premises lack support. Therefore, when one attempts to make a moral decision and evaluate the decision options, he or she must attempt to answer three questions:
1. What is the argument attempting to prove? Or, more specifically, what is the “conclusion” (the sentence that an argument claims to prove, sometimes referred to as a decision)?
2. What are the “premises” (any sentences that an argument offers as proof or evidence of the conclusion)?
3. Is the conclusion supported by the premises?
a. If the premises are not all true, then the conclusion is not adequately supported.
b. If the premises are not all relevant to the question at hand, or are not enough to prove the conclusion, then the conclusion is not adequately supported.

After one assesses the premises laid out before him and attempts to ascertain whether they support the stated conclusions, he can begin to decide whether he has the foundation of a good or a bad argument. An “argument” is made up of a number of sentences, some of which claim to prove another. An argument is a good argument if “the premises are true, the premises are relevant to the conclusion, and no premise simply restates the conclusion”, and an argument is bad when “a premise is false, a premise is irrelevant to the conclusion, or a premise simply restates the conclusion”. Arguments can be further characterized as either deductive or inductive. Arguments that claim certainty are referred to as “deductive”: they claim that because the stated premises are true, the conclusion is certainly true. Arguments claiming probability are referred to as “inductive”: they claim that because the stated premises are true, the conclusion is probably true. Most inductive arguments have their foundation in past observations or experiences.
It is important to understand the distinction between deductive and inductive arguments when evaluating moral decision making, as each has a different kind of evaluation attached to it. Deductive arguments claim certainty, so their evaluation is all or nothing: when the given premises do not prove the conclusion certain, even if they make it highly probable, the argument fails. Inductive arguments are often more difficult to evaluate because notions of probability vary between individuals; what one individual considers probable, another may consider improbable. It is also important to differentiate between true and false, and valid versus invalid, where decision making is concerned. As has already been discussed, arguments are sets of sentences, and these sets of sentences can make up either a good or a bad argument. A good argument is considered “valid”; more specifically, an argument is valid “if the premises are true and, thus, the conclusion must certainly be true (in a deductive argument) or as probable as the argument claims (in an inductive argument)”. A bad argument, on the other hand, is considered “invalid”; more specifically, “even if the premises were true, that would not demonstrate the truth or probability of the conclusion”. Individual sentences, on the other hand, do not make up an entire argument, but instead state either a premise or a conclusion. These individual sentences can be found to be either true or false. There is a final level of evaluation that bears mentioning, and that pertains to the soundness of valid arguments. Valid arguments are classified as either “sound” or “unsound.” A sound argument is one in which all stated premises are true. An unsound argument is one that contains at least one false premise. Upon reaching a decision, it is logical for an individual to evaluate the results of the decision. While this would have been hypothesized earlier in the decision-making process, now that the decision has been made, the real-time effects and results can be evaluated. Based on the information available at the time of the decision, was the right choice made? If presented with the same options in the future, how would the decision change? It is here that the dilemmas of actions and consequences begin to show themselves. When actions occur, certain patterns begin to emerge. Due to variations in personal ethics, bias, and external influences, individuals do not always reach the same conclusions; however, this does not necessarily mean that the other individual is wrong. A particular situation may not have one “right” answer; however, it may have many “wrong” answers. Therefore, it is necessary that individuals use their best judgment (based on personal ethics) and common sense when attempting to reach the “best” conclusion. It is important for one to understand the decision-making process if one is to evaluate whether a decision is an ethical or unethical one. Given the kind of power a civil servant wields, it is not only desired but essential that the decisions taken are in tune with the ethical principles of the organization and society at large.
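To make the good/valid/sound checklist concrete, here is a minimal Python sketch. The Premise structure and the sample argument are hypothetical illustrations of the criteria quoted above, not a logic engine: whether the premises actually entail the conclusion (validity) still has to be judged by the reader and is passed in as a flag.

```python
from dataclasses import dataclass

@dataclass
class Premise:
    text: str
    true: bool       # is the premise actually true?
    relevant: bool   # does the premise bear on the conclusion?

def good_argument(premises: list[Premise], conclusion: str) -> bool:
    # The checklist above: premises true, premises relevant,
    # and no premise simply restates the conclusion.
    return (all(p.true for p in premises)
            and all(p.relevant for p in premises)
            and all(p.text != conclusion for p in premises))

def sound(valid: bool, premises: list[Premise]) -> bool:
    # Sound = valid AND all premises true; validity itself (do the
    # premises prove the conclusion?) is supplied as a judgment.
    return valid and all(p.true for p in premises)

premises = [Premise("All public records are open to audit.", True, True),
            Premise("This decision is a public record.", True, True)]
conclusion = "This decision is open to audit."
print(good_argument(premises, conclusion))  # True
print(sound(True, premises))                # True: valid, with true premises
```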
https://www.iastree.com/ethics-in-public-service-part-3/
What is probability in genetics expressed in?
The dominant allele appeared 705 times out of a possible 929 times (705+224=929). Probability is normally expressed as a number between 0 and 1, but it can also be expressed as a percentage, fraction, or ratio. Expressed as a percentage, the probability that a plant of the F2 generation will have purple flowers is 76% (705/929 ≈ 0.76).

How is probability used in genetics?
In genetics, theoretical probability can be used to calculate the likelihood that offspring will be a certain sex, or that offspring will inherit a certain trait or disease, if all outcomes are equally possible. It can also be used to calculate probabilities of traits in larger populations.

What is probability? How does probability relate to genetics?
Probability is the term used to describe the likelihood that some event will occur. In relation to genetics, the principle of probability allows us to predict the possible combinations of phenotypes in a genetic cross by using a diagram called a Punnett square.

Why is probability important in genetics?
Calculating probabilities is extremely important in genetics. Probabilities predict the likelihood that certain events will occur, such as the inheritance of a particular trait in an organism. This can help plant and animal breeders develop more desirable characteristics in their products.

How do you identify a pedigree?
Reading a pedigree:
- Determine whether the trait is dominant or recessive. If the trait is dominant, one of the parents must have the trait.
- Determine whether the chart shows an autosomal or sex-linked (usually X-linked) trait. For example, in X-linked recessive traits, males are much more commonly affected than females.

What is the product rule in genetics?
The product rule states that the probability of independent events occurring together is the product of the probabilities of the individual events.

How do you calculate probability in genetics?
- The empirical probability of an event is calculated by counting the number of times that event occurs and dividing it by the total number of times that event could have occurred.
- The theoretical probability of an event is calculated based on information about the rules and circumstances that produce the event.

How can you determine the probability of specific genetic traits in offspring?
Divide the number of boxes with a dominant allele by four and multiply the result by 100 to get the percent chance that an offspring will have the dominant trait. For example, (2/4)*100 = 50, so there is a 50 percent chance of an offspring having brown eyes.

How is the probability of inheritance determined in genetics?
These percentages are determined by the fact that each of the 4 offspring boxes in a Punnett square represents 25% (1 out of 4). As to phenotypes, 75% will be Y and only 25% will be G. These will be the odds every time a new offspring is conceived by parents with YG genotypes.

How are the rules of probability used to predict a child’s birth?
The rules of probability can be applied to predict the ratio of boys and girls born in a family. Since the human male produces an equal number of X and Y sperm, the chance for a boy at any birth is 1/2, and for a girl it is also 1/2.

How do you calculate conditional probability in population genetics?
Using one item of new information not included in the prior probabilities, calculate a conditional probability for each hypothesis.

What is the probability of having a boy or girl?
Since the human male produces an equal number of X and Y sperm, the chance for a boy at any birth is 1/2, and for a girl also is 1/2. From the probability of each single conception it is possible to calculate the probability of successive births together.
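The Punnett-square and product-rule calculations above are easy to reproduce in a few lines of Python. This is a minimal sketch; the allele symbols (Y/y) and the dihybrid probabilities are illustrative assumptions, not data from the text.

```python
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Offspring genotype counts for a one-gene cross, e.g. 'Yy' x 'Yy'."""
    # sorted() puts the uppercase (dominant) allele first, so 'yY' and 'Yy' merge.
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

square = punnett("Yy", "Yy")
total = sum(square.values())  # the 4 boxes of the Punnett square
p_dominant = sum(n for genotype, n in square.items() if "Y" in genotype) / total
print(square)       # Counter({'Yy': 2, 'YY': 1, 'yy': 1})
print(p_dominant)   # 0.75 -> the 75% dominant-phenotype figure quoted above

# Product rule: independent events multiply. With two independently inherited
# genes, each giving a 3/4 chance of the dominant phenotype:
print(0.75 * 0.75)  # 0.5625, i.e. 9/16 of offspring show both dominant traits
```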
https://www.rhumbarlv.com/what-is-probability-in-genetics-expressed-in/
On March 11, 2021, the American Rescue Plan (ARP) Act was signed into law. The plan included Elementary and Secondary School Emergency Relief (ESSER) funds to help state educational agencies and school districts safely reopen and sustain the safe operation of schools, as well as address the impact of the coronavirus pandemic on students. Greeneville City Schools received funds from the ESSER program. A list of ESSER 3.0 updates and highlights, along with the opportunity to provide feedback, is available at https://bit.ly/ESSERHighlights through February 15, 2023. The list is also available in Spanish at https://bit.ly/ESSERSP23SPN. Information on Greeneville City Schools' ESSER funds, along with the opportunity to provide feedback on a continual basis, is located on our Web site at https://bit.ly/ESSERGCS, which will also include the upcoming updated Spending Plan and Health and Safety Plan.

Healthy Tips from our Nursing Department: Screen Time
Increased screen time has been identified as a contributing factor to children becoming less active and increasingly overweight. Early data from a landmark National Institutes of Health (NIH) study that began in 2018 indicates that children who spent more than two hours a day on screen-time activities scored lower on language and thinking tests, and some children with more than seven hours a day of screen time experienced thinning of the brain's cortex, the area of the brain related to critical thinking and reasoning. Screen time also inhibits restful sleep: the blue light from the screen suppresses melatonin, which can delay sleep. Excessive screen time and sleep deprivation are linked to obesity, which can affect self-esteem and lead to social isolation and more screen time. As adults, it is easy for us to set limits on our own screen time, but for young children it is harder to set those boundaries. Here are ways to help your child limit screen time: co-watch whenever possible; keep bedtime, mealtime and family time screen-free; limit your own phone usage; and emphasize the importance of healthy nutrition and exercise.
References: https://healthmatters.nyp.org/what-does-too-much-screen-time-do-to-childrens-brains/

Free Resources
TDOE AND GELF PARTNER TO GIVE FREE READING RESOURCES TO TENNESSEE FAMILIES
In partnership with the Governor's Early Literacy Foundation (GELF), the department is opening up an opportunity for parents to order free at-home reading resources this winter for any of their children in grades K-2. Through this partnership, the department and GELF want to encourage at-home reading practice to help young learners become stronger readers outside of the classroom. Depending on whether a child is in kindergarten, 1st, or 2nd grade, they will receive seven At-Home Decodable Book Series, which contain 20+ exciting stories full of sounds and words to practice, plus age-appropriate, high-quality books from Scholastic. All Tennessee families can order one booklet pack for each of their kindergarten, 1st, and 2nd grade students using this site.
https://www.smore.com/7e62k
Ingredients for Cooking Canapes with Walnut Cheese Pate
- Baguette bread 10-12 slices
- Garlic 1 large clove
- Hard cheese 100 grams
- Walnuts, peeled 100 grams
- Butter 100 grams
- Salt to taste
- Ground black pepper to taste
- Main ingredients: Cheese, Nuts, Bread
- Serves 10

Inventory: meat grinder or blender, flat serving dish, medium bowl, tablespoon, cutting board, knife, teaspoon, garlic press, fork, refrigerator, cling film or pan lid

Cooking canapes with walnut cheese pate:

Step 1: prepare the hard cheese.
Using a meat grinder or a blender, grind the cheese and transfer it to a medium bowl. You can choose any kind and brand of cheese for this dish; the main thing is that you like the canapes. For example, the pate is tasty with Russian cheese or Adyghe cheese, as well as any other cheese, even processed cheese. Attention: if you are using the meat grinder, be sure to grind the cheese with a fine grate so that it comes out with a paste-like consistency. If you are using a blender, you can use the Turbo mode.

Step 2: prepare the garlic.

Step 3: prepare the pate.
To the bowl with the ground ingredients, add the soft butter; take it out of the refrigerator in advance so it reaches room temperature. Salt and pepper the future pate to taste, and mix the products thoroughly with a fork until a homogeneous mass forms. Then put the pate in the refrigerator to rest for 20-25 minutes. So that it does not absorb odors from other food in the refrigerator, wrap the bowl with cling film or cover it with a lid from any pan of a suitable diameter.

Step 4: prepare canapes with walnut-cheese pate.

Step 5: serve canapes with walnut-cheese pate.
The dish can be served immediately after cooking. For a more appetizing appearance, canapes with walnut-cheese pate can be decorated with a sprig of fresh dill or parsley. Such a dish is very appropriate for all kinds of receptions, as it is very easy to prepare and at the same time very tasty. I also make them for my family and friends on ordinary days, so that those days feel like a holiday too. I usually serve canapes with hot tea, often in the morning for breakfast, as my family loves cheese sandwiches. The dish is both tasty and healthy: the nuts give our body extra energy and a number of vitamins, and in winter the garlic in the dish helps with the prevention of flu and colds. Good appetite!

Recipe Tips:
- If you don't have a baguette at hand, do not be discouraged. An excellent substitute is a regular fresh loaf. In that case, it is best to cut each slice of bread into several small pieces and fry them over medium heat in a small amount of vegetable oil. You will then get not just a small sandwich, but a dish with a golden, fragrant crust of bread.
- Instead of butter, you can add spread, homemade heavy cream or a small amount of mayonnaise to the pate. The dish is also very tasty this way.
- In a blender or a meat grinder, you can grind all the products for the pate together, including the walnuts. This option is convenient for those who do not really like chunks of nuts.
- If you don't have a blender or a meat grinder on hand, do not worry. You can grind the components for the pate another way: just use a fine grater.
True, it will take a little longer, but the result will be the same.
https://gb.thefutureofrepresentativedemocracy.org/3043-canapes-with-walnut-cheese-pate.html
Special Issue "Methods to Improve Energy Use in Road Vehicles" A special issue of Energies (ISSN 1996-1073). Deadline for manuscript submissions: closed (15 November 2017). Special Issue Editor grade E-Mail Website Guest Editor Interests: Intelligent Transport Systems; Advanced Driver Assistance Systems; Vehicle Positioning; Inertial Sensors; Digital Maps; Vehicle Dynamics; Driver Monitoring; Perception, Autonomous Vehicles; Cooperative Services; Connected and Autonomous Driving Special Issues, Collections and Topics in MDPI journals Special Issue in Electronics: Connected Vehicles, V2V Communications, and VANET Special Issue in Sensors: Sensors in New Road Vehicles Special Issue in Sensors: Sensors for Autonomous Road Vehicles Special Issue in Applied Sciences: Road Vehicles Surroundings Supervision: On-Board Sensors and Communications Special Issue in Sensors: Perception Sensors for Road Applications Special Issue in Applied Sciences: Technologies and Applications of Communications in Road Transport Special Issue in Sensors: Sensors for Road Vehicles of the Future Special Issue in Vehicles: The New Devices to Assist the Driver (ADAS) Special Issue in Applied Sciences: The Development and Prospects of Autonomous Driving Technology Special Issue in Sensors: Feature Papers in Vehicular Sensing Special Issue in Sensors: Sensors for Autonomous Vehicles and Intelligent Transport Special Issue Information Dear Colleagues, Optimizing energy use in road vehicles is one of the key topics in the automotive sector and research is being done in different fields of work. Energy, fuel consumption and exhaust emissions can be reduced using several strategies and solutions. For instance, vehicle and components design can be optimized in order to reduce rolling and aerodynamic resistances. Furthermore, conventional engines have suffered relevant advances making them more efficient. These advances involve components design, development of new control strategies, etc. New fuels have been also considered. In the last few years, alternative propulsion systems have obtained more relevance. In this case, electric and hybrid propulsion system have appeared as feasible and affordable alternatives to conventional combustion engines. These propulsion systems require specific components selection, control strategies, etc., which should be analyzed. A very active area is the research on electric and hybrid vehicles components, such as batteries or other means of energy storage, transmissions, etc. Another relevant subtopic that could lead to energy use reduction in road vehicles is the improvement of driver behavior, using drivers’ education tools or using onboard assistance systems that recommend to the driver the best actions to be performed at every moment. These systems take advantage of new developments in the fields of electronics, wireless communications, etc., to provide accurate information. Furthermore, other solutions, such as measures to improve urban traffic using advanced traffic management tools, also provide reductions in global energy use. Apart from original research related to the topic, studies on the state-of-the-art in relation to previous works are also welcome. 
In conclusion, the aim of this Special Issue is to bring together innovative developments oriented to achieving better energy use in road vehicles, including, but not limited to:
- Vehicle design optimization
- Alternative propulsion systems
- Components optimization
- Components dimensioning optimization
- Alternative fuels
- Optimization of propulsion systems
- Ecodriving and driver behavior
- Driver assistance systems
- Traffic management
- Hybrid vehicles
- Electric vehicles

Authors are invited to contact the Guest Editor prior to submission if they are uncertain whether their work falls within the general scope of this Special Issue.

Dr. Felipe Jimenez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website. Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Energies is an international peer-reviewed open access semimonthly journal published by MDPI. Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
https://www.mdpi.com/journal/energies/special_issues/road_vehicles_2017
Hypopituitarism – an Overview of the Condition and Applicable ICD-10 Codes

Hypopituitarism is a rare disorder of decreased pituitary hormone secretion in which the pituitary gland fails to produce normal amounts of one or more hormones. The pituitary gland is a small, bean-shaped gland situated on the underside of the brain (behind your nose and between your ears) which secretes hormones that influence nearly every part of your bodily functions. The gland produces eight types of hormones, each of which can affect the body's routine functions related to growth, metabolism, blood pressure and reproduction. The condition can develop very slowly, over several months or even several years. Treatment basically involves hormone correction: taking the right hormone medications can help effectively control the prominent symptoms associated with the condition. Physicians treating hypopituitarism have to ensure accurate documentation of the patient's condition and the treatment provided. Endocrinology medical billing and coding has become increasingly complex due to the growing number of rules and regulations. Outsourcing medical billing and coding services could help endocrinologists and other physicians ensure accurate and timely claim filing for appropriate reimbursement.

Hypopituitarism occurs when the pituitary gland does not release enough of one or more of these hormones: adrenocorticotropic hormone (ACTH), thyroid-stimulating hormone (TSH), antidiuretic hormone (ADH), prolactin, follicle-stimulating hormone (FSH), growth hormone (GH), luteinizing hormone (LH) and oxytocin. The condition may be the result of inherited disorders, but more often it is acquired. A tumor of the pituitary gland is one of the common causes associated with the condition; a tumor can also compress the optic nerves, causing visual disturbances. Other potential causes include head injuries, autoimmune inflammation (hypophysitis), stroke, infections of the brain (such as meningitis), sarcoidosis, blood loss during childbirth and radiation treatments. In some cases, however, the exact cause of hypopituitarism may be unknown.

Signs and Symptoms

Hypopituitarism is often a progressive condition. The signs and symptoms in most cases develop gradually and may at times be subtle, remaining unnoticed for months or even years. They vary from one person to another and depend on which pituitary hormones are deficient and how severe the deficiency is. Common symptoms include:
- Weight loss or weight gain
- Sensitivity to cold or difficulty staying warm
- Fatigue and/or weakness
- Excessive thirst and urination
- Decreased sex drive
- Decreased appetite
- Stiffness in the joints
- Short stature in children
- Infertility
- Hot flashes, irregular or no periods, loss of pubic hair, and inability to produce milk for breast-feeding in women
- Headache and dizziness
- Facial puffiness
- Decreased facial or body hair in men
- Anemia

Diagnosing and Treating an Underactive Pituitary Gland

If your physician suspects that you have hypopituitarism, he/she will conduct several tests to check the levels of various hormones in your body.
Tests that may be conducted include:
- Blood tests – to detect deficits in hormones as a result of pituitary failure
- Stimulation or dynamic testing – to check the body's secretion of hormones after taking certain medications that stimulate hormone production
- Vision tests – to check whether the growth of a pituitary tumor has impaired your sight or visual fields

Once the hormone levels in the body are correctly determined, physicians will check the other parts of the body (target organs) that those hormones normally affect. In some cases, the problem may not be with the pituitary gland itself, but rather with the target organs. In addition, physicians will also conduct diagnostic imaging tests, such as a CT scan or MRI scan, to check whether a tumor on the pituitary gland is affecting its normal function.

Treatment for this condition may be lifelong. As the condition generally affects a number of hormones, there is no single course of treatment. One of the initial treatment modalities is hormone replacement, to bring hormone levels back to normal. These hormone dosages are set to match the amounts the body would normally produce if it didn't have a pituitary gland problem. Hormone replacement medications may include corticosteroids, levothyroxine (Levoxyl, Synthroid, others), sex hormones (testosterone in men and estrogen or a combination of estrogen and progesterone in women) and growth hormone (also called somatropin). In some cases, if a tumor is causing the pituitary problems, surgery to remove the tumor may be performed in order to restore hormone production to normal. Physicians may also recommend radiation therapy.

Endocrinologists and other physicians who diagnose, screen and provide treatment for hypopituitarism must carefully document it using the correct medical codes. Medical billing and coding services provided by reputable medical billing companies can help physicians use the correct codes for their billing purposes.

ICD-10 codes for Hypopituitarism

E22 – Hyperfunction of pituitary gland
- E22.0 – Acromegaly and pituitary gigantism
- E22.1 – Hyperprolactinemia
- E22.2 – Syndrome of inappropriate secretion of antidiuretic hormone
- E22.8 – Other hyperfunction of pituitary gland
- E22.9 – Hyperfunction of pituitary gland, unspecified

E23 – Hypofunction and other disorders of the pituitary gland
- E23.0 – Hypopituitarism
- E23.1 – Drug-induced hypopituitarism
- E23.2 – Diabetes insipidus
- E23.3 – Hypothalamic dysfunction, not elsewhere classified
- E23.6 – Other disorders of pituitary gland
- E23.7 – Disorder of pituitary gland, unspecified

If the pituitary gland is permanently damaged, proper hormone replacement generally requires lifelong treatment. The initial course of hormone replacement therapy may take time, in order to determine the patient's response and find the best dose. The endocrinologist will closely monitor the levels of hormones in the blood to ensure that the patient is getting an adequate, but not excessive, amount of hormones. Physicians will adjust the dosage of corticosteroids if the patient becomes seriously ill or experiences major physical stress. In short, patients can expect a normal life span as long as they regularly take the correct dose of medications recommended by their endocrinologists or other physicians. Endocrinology medical coding can be complex, as there are several codes associated with the condition.
By outsourcing these tasks to a reliable and established medical billing and coding company – that offers the services of AAPC-certified coding specialists, healthcare practices can ensure correct and timely medical billing and claims submission.
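For practices that pre-check claims in software, the E23 code list above maps naturally onto a lookup table. The sketch below is a hypothetical illustration (the function name and review message are ours), not a substitute for review by a certified coder.

```python
# E23.x codes as listed above (the list skips E23.4 and E23.5, which are
# not assigned in ICD-10).
E23_CODES = {
    "E23.0": "Hypopituitarism",
    "E23.1": "Drug-induced hypopituitarism",
    "E23.2": "Diabetes insipidus",
    "E23.3": "Hypothalamic dysfunction, not elsewhere classified",
    "E23.6": "Other disorders of pituitary gland",
    "E23.7": "Disorder of pituitary gland, unspecified",
}

def describe(code: str) -> str:
    """Return the description for an E23 code, or flag it for coder review."""
    desc = E23_CODES.get(code.strip().upper())
    return desc if desc else f"{code}: not a valid E23 code - route to a certified coder"

print(describe("E23.0"))  # Hypopituitarism
print(describe("E23.4"))  # flagged: not a valid E23 code
```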
https://www.outsourcestrategies.com/category/blog/specialty-coding/endocrinology-medical-coding/
The Covid-19 pandemic is unfolding at a time when democracy is in decline. According to data compiled by Freedom House (2020), democracy has been in a recession for over a decade, and more countries have lost rather than gained civil and political rights each year. A key concern is that Covid-19 will turn the democratic recession into a depression, with authoritarianism sweeping across the globe like a pandemic. As the New York Times puts it, “China and some of its acolytes are pointing to Beijing’s success in coming to grips with the coronavirus pandemic as a strong case for authoritarian rule” (Schmemann 2020). Even the World Health Organization (WHO) has called China's forceful lockdown “perhaps the most ambitious, agile and aggressive disease containment in history”. This raises the question: Is China an exception, or have autocratic regimes in general been able to take more stringent policy measures to restrain people from moving around and spreading the virus? And if so, have they been more effective?

To explore these questions, we examine the institutional and cultural underpinnings of governments’ responses to the Covid-19 pandemic (Frey et al. 2020). To measure the strictness of the policies introduced to fight the pandemic across countries, we use the Oxford COVID-19 Government Response Tracker (OxCGRT), which provides information on several measures, including school and workplace closings, travel restrictions, bans on public gatherings, and stay-at-home requirements. To capture the effectiveness of these responses in reducing travel and movement in order to curb the spread of the virus, we employ Google’s COVID-19 Community Mobility Reports. Figure 1 shows that travel fell in a number of selected countries as more stringent policy measures were introduced. However, the figure also shows that there is large dispersion in cross-country mobility, even for similar levels of policy stringency.

Figure 1 Lockdown measures and cross-country reduction in mobility
Sources: OxCGRT; Google’s COVID-19 Community Mobility Reports

Have authoritarian governments been more effective in reducing mobility?

To be sure, it is possible that political divisions and strong business interests make it harder to introduce stringent lockdowns in democracies. To test this, we employ the democracy index of Freedom House (2020). We find that more autocratic regimes have indeed introduced stricter lockdowns and have relied more on privacy-intrusive measures like contact tracing. However, our regression analysis also suggests that when democracies employ the same mobility restrictions as autocratic regimes, they experience steeper declines in mobility. This result also holds when we add a host of controls, like state capacity, GDP per capita, latitude, and experience with past epidemics, as well as country and time fixed effects. Using a complementary measure of political and civil rights, we similarly find that greater freedom is associated with greater reductions in movement and travel (Frey et al. 2020). Though these correlations cannot be interpreted as causal, they provide suggestive evidence that while autocratic regimes tend to introduce stricter lockdowns, they are less effective in reducing travel. Indeed, while China’s strict lockdown has received most media attention, other East Asian countries have arguably mounted a more effective response to Covid-19.
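A stylised version of the regression just described can be sketched with pandas and statsmodels: the change in mobility is regressed on policy stringency and its interaction with the democracy index, with country and time fixed effects and standard errors clustered by country. The data below are synthetic and all column names and effect sizes are hypothetical; this illustrates the specification, not the authors' replication code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic country-week panel standing in for the merged OxCGRT / Google
# mobility data.
rng = np.random.default_rng(0)
rows = []
for c in range(30):
    democracy = rng.uniform(0, 1)          # country-level democracy index
    for week in range(10):
        stringency = rng.uniform(0, 100)   # OxCGRT-style stringency index
        # Stringency lowers mobility, more so in more democratic countries.
        mobility = (-0.3 * stringency - 0.2 * stringency * democracy
                    + rng.normal(0, 5))
        rows.append((f"c{c}", week, mobility, stringency, democracy))
df = pd.DataFrame(rows, columns=["country", "week", "mobility",
                                 "stringency", "democracy"])

# Country fixed effects absorb the time-invariant democracy level, so only
# the stringency main effect and the stringency x democracy interaction enter.
model = smf.ols(
    "mobility ~ stringency + stringency:democracy + C(country) + C(week)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.params[["stringency", "stringency:democracy"]])  # approx. -0.3, -0.2
```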
Cultural values and the effectiveness of mobility restrictions

Another theory is that some cultures are more obedient than others, prompting people to better follow stringent lockdown measures. While societies differ on many cultural dimensions, cross-cultural psychologists view the individualism-collectivism distinction as the main divider (Heine 2007, Henrich et al. 2010, Schulz et al. 2019). Scholars have shown that individualism has a dynamic advantage leading to a higher economic growth rate by giving social status rewards to non-conformism and innovation (Gorodnichenko and Roland 2011). In particular, individualistic cultures, like those of the US, Sweden, or the UK, are more innovative and take out more patents (Gorodnichenko and Roland 2017). The flipside of an individualistic culture, which encourages experimentation and innovation, is that it can make collective action, such as a coordinated response to a pandemic, more difficult. This is because people in more individualistic societies tend to pursue their own interest rather than the collective good. Collectivism, on the other hand, which emphasises group loyalty, conformity and obedience towards one's superiors, makes collective action easier (Gorodnichenko and Roland 2015).

To measure the variation in individualism-collectivism across countries, we employ Hofstede's (2001) widely used scale, which integrates questions about goals, achievement-orientation, and family ties. In addition, we construct an index of attitudes towards obedience based on data from the World Values Survey (WVS). Our regression analysis shows that similar levels of policy stringency reduced mobility less in individualistic cultures, and more in obedient ones. Figure 2 presents the result graphically. It suggests that collectivist countries have mounted a more coordinated response to Covid-19 in terms of reducing movement and travel. We also find that movement related to non-essential activities, like going to parks, exhibits particularly sharp mobility declines (Frey et al. 2020).

Figure 2 Individualism, obedience and the reduction in mobility
Note: Each dot in the charts represents, for one country, the change in the mobility index that is not explained by the policy stringency index. The obedience index is the first component of a principal component analysis based on World Values Survey (WVS) data.
Sources: Authors' own calculations based on Hofstede (2001); WVS; OxCGRT; Google's COVID-19 Community Mobility Reports.

Concluding remarks

Democracy has been in recession for over a decade (Diamond 2019) and many fear that Covid-19 will accelerate this trend. In the Philippines, President Rodrigo Duterte has seized even greater power and threatened martial law-style enforcement of a month-long lockdown. And on 30 March 2020, the Hungarian Parliament passed the Coronavirus Act, which grants Viktor Orbán's government unprecedented emergency powers for an indefinite period of time. Judging by how autocratic regimes have responded to the crisis, however, we do not expect that the democratic recession will accelerate. First, the lack of transparency in autocratic regimes has been an indisputable drawback in fighting the pandemic. In Turkmenistan, people have been arrested solely for discussing the outbreak in public, and medical doctors are banned from diagnosing Covid-19.
And while China successfully mobilised a strong national response once President Xi Jinping gave the green light, the initial lack of transparency delayed decisive measures to curb the virus before it spread across China and globally (Ang 2020). Second, our research suggests that even though autocracies have introduced more stringent lockdowns, democracies have been more effective in reducing travel and the movement of people within their countries. Thus, while autocrats often seek to capitalise on perceived threats, their handling of the pandemic on these dimensions seems unlikely to look appealing to the outside world.

China is not just an autocratic regime; it also has a strong state (Fukuyama 2011) and a highly collectivist culture (Talhelm et al. 2014). But the same is true of democratic countries like South Korea and Taiwan. Building on a large literature, we find that a country's capacity to enforce its mobility restrictions, as well as its culture, are more relevant variables in explaining how countries have fared during the pandemic. Following in the footsteps of cross-cultural psychologists, we show that collectivist societies have been more successful in managing the outbreak. Our findings speak to the intuition that a collectivist culture, which rewards conformity, group loyalty, and obedience towards one's superiors, makes collective action easier (Gorodnichenko and Roland 2015; Schulz et al. 2019). In East Asian countries, which are highly collectivist on Hofstede's (2001) scale, the habit of mask-wearing to protect fellow citizens contrasts markedly with Western attitudes. However, while collectivist societies are well placed to deal with epidemics that require collective action, collectivist cultures have historically experienced slower economic growth (Gorodnichenko and Roland 2011), less dynamism and innovation (Gorodnichenko and Roland 2017), and tend to focus on incremental innovation rather than radical breakthroughs (Chua et al. 2019). Fighting Covid-19 will require coordination to curb the spread of the virus, but also innovation in order to find treatments and vaccines. Pandemics are global by definition, and hence a global response that leverages the innovative capacity of individualist countries, and the coordination and production capabilities of collectivist ones, will be needed.

References

Chua, R Y, K G Huang and M Jin (2019), “Mapping cultural tightness and its links to innovation, urbanization, and happiness across 31 provinces in China”, Proceedings of the National Academy of Sciences 116(14): 6720-6725.

Freedom House (2020), Democracy Index.

Frey, C B, G Presidente and C Chen (2020), “Democracy, Culture, and Contagion: Political Regimes and Countries Responsiveness to Covid-19”, Covid Economics 18.

Gorodnichenko, Y and G Roland (2011), “Which dimensions of culture matter for long-run growth?”, American Economic Review 101(3): 492-98.

Gorodnichenko, Y and G Roland (2015), “Culture, institutions and democratization”, National Bureau of Economic Research Working Paper No w21117.

Gorodnichenko, Y and G Roland (2017), “Culture, institutions, and the wealth of nations”, Review of Economics and Statistics 99(3): 402-416.

Hale, T, S Webster, A Petherick, T Phillips and B Kira (2020), “Oxford COVID-19 Government Response Tracker”, Blavatnik School of Government.

Heine, S (2007), Cultural Psychology, New York: Norton.

Henrich, J, S J Heine and A Norenzayan (2010), “The weirdest people in the world?”, Behavioral and Brain Sciences 33(2-3): 61-83.
Hofstede, G (2001), Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations across Nations, London: Sage Publications.

Nisbett, R E, K Peng, I Choi and A Norenzayan (2001), “Culture and systems of thought: holistic versus analytic cognition”, Psychological Review 108(2): 291.

Schmemann, S (2020), “The Virus Comes for Democracy. Strongmen think they know the cure for Covid-19. Are they right?”, New York Times, April 2.

Schulz, J F, D Bahrami-Rad, J P Beauchamp and J Henrich (2019), “The Church, intensive kinship, and global psychological variation”, Science 366(6466).

Talhelm, T, X Zhang, S Oishi, C Shimin, D Duan, X Lan and S Kitayama (2014), “Large-scale psychological differences within China explained by rice versus wheat agriculture”, Science 344(6184): 603-608.
https://voxeu.org/article/covid-19-and-future-democracy
--- abstract: 'Encryption schemes often derive their power from the properties of the underlying algebra on the symbols used. Inspired by group theoretic tools, we use the centralizer of a subgroup of operations to present a private-key quantum homomorphic encryption scheme that enables a broad class of quantum computation on encrypted data. A particular instance of our encoding hides up to a constant fraction of the information encrypted. This fraction can be made arbitrarily close to unity with overhead scaling only polynomially in the message length. This highlights the potential of our protocol to hide a non-trivial amount of information, and is suggestive of a large class of encodings that might yield better security.' author: - 'Si-Hui Tan' - 'Joshua A. Kettlewell' - Yingkai Ouyang - Lin Chen - 'Joseph F. Fitzsimons' bibliography: - 'qhe\_universal.bib' title: A quantum approach to homomorphic encryption --- The discovery that quantum systems could be harnessed to process data in a fundamentally new way has led to the burgeoning field of quantum information processing. This approach to computation holds the promise of more efficient algorithms for a variety of tasks including integer factorization [@shor1997polynomial], search [@Grover:1996:FQM:237814.237866] and quantum simulation [@lloyd1996universal]. However, quantum information processing has also found applications in the area of cryptography, which has been a focus of the field since the discovery of secure quantum key distribution protocols by Bennett and Brassard [@BB84], and Ekert [@PhysRevLett.67.661]. The information theoretic security of these protocols stands in stark contrast to the reliance of classical key agreement protocols on assumptions of computational hardness, and indeed a major goal of quantum cryptography research is to replicate and extend the functionality present in existing classical schemes while providing stronger, information theoretic, security guarantees. In the world of classical cryptography, a central topic in recent years has been the study of homomorphic encryption [@Rivest1978; @Gentry:2009:FHE:1536414.1536440; @DGH2010]. Homomorphic encryption is a form of encryption which allows data processing to be performed on encrypted data without access to the encryption key. In general, a homomorphic encryption system is composed of four components: a [*key generation algorithm*]{}, an [*encryption algorithm*]{} that encrypts the data using the generated key, a [*decryption algorithm*]{} that decrypts the data using the key, and an [*evaluation algorithm*]{} which is used to process the data without decryption. Thus homomorphic encryption allows for secret data to be processed by third parties without allowing them access to the plaintext. After decryption, the plaintext output reveals the processed data. A scheme is termed *fully-homomorphic* if it allows for arbitrary processing of the encrypted data. Although the idea for homomorphic encryption has existed for some time [@Rivest1978], it was not until 2009 that a fully-homomorphic encryption scheme was discovered by Gentry [@Gentry:2009:FHE:1536414.1536440]. Gentry’s scheme is only computationally secure, relying on the assumed hardness of certain worst-case problems over ideal lattices, and the sparse subset sum problem, although the condition requiring ideal lattices was later dropped [@DGH2010]. 
Recent successes in quantum cryptography in finding information theoretically secure protocols for blind computation [@5438603; @ABE08; @Barz20012012; @PhysRevA.87.050301; @PhysRevLett.111.230501; @mantri2013optimal] and verifiable computing [@FK13; @RUV13; @SFKW13; @M10], problems closely linked to homomorphic encryption, have motivated the question of whether quantum mechanics allows for information theoretically secure homomorphic encryption schemes. Indeed, a number of attempts have been made to find a quantum analogue of homomorphic encryption [@Liang2013; @Liang2014; @FBS2014; @Childs:2005:SAQ:2011670.2011674], however these attempts have inevitably run into a barrier. It is now known that it is not possible to achieve perfect information theoretic security while enabling arbitrary processing of encrypted data, unless the size of the encoding is allowed to grow exponentially [@YPF2014]. As a result, such schemes have required interaction between parties to enable deterministic computation. These requirements parallel those of blind quantum computation which hides [*both*]{} the data and the computation being done on it. The question then remains as to whether information theoretically secure homomorphic encryption is possible without expanding the definition to include interactive protocols. A first step in the direction of non-interactive quantum protocols was presented in [@PhysRevLett.109.150501] for a restricted model of quantum computation known as the BosonSampling model [@AA11] which is non-universal. Furthermore, the scheme ensures only that the encoded information and the accessible information differ by an amount proportional to $\log_2 m$ bits when $m$ bits are encrypted, which is a relatively weak security guarantee. An information-theoretically secure scheme that allows for processing of encrypted data beyond BosonSampling is not known to date. In this paper, we present a private-key homomorphic encryption protocol that supports a broad class of computations, including and extending beyond BosonSampling, while providing information theoretic security guarantees. The protocol we present ensures a gap between the information accessible to an adversary and actual information encoded that grows as $m \log_2(d/m)+m(\log 2)^{-1}$ bits when $m \log_2 d$ bits are encrypted using $m$ $d$-level systems. This is a significantly stronger security guarantee than that offered by the scheme presented in [@PhysRevLett.109.150501]. We present our results in three parts. First we present a general approach to homomorphic encryption stemming from the group theoretic structure of quantum operations. We then present a family of operations which allow for a broad class of computations to be performed on encrypted data for a range of encryption schemes satisfying certain symmetry constraints. Finally we present a concrete encoding satisfying these constraints and show that it limits the accessible information as described above. *Group theoretic approach —* We approach the problem of creating a homomorphic encryption scheme via the most naive route: we try to construct a set of encryption operations which commute with the operations used to implement computation on the encrypted data. However, this approach immediately encounters a barrier when applied to the case of universal computation. 
In such a case the computation operations form a group, either the unitary group in the case of quantum computation or the symmetric group in the case of classical reversible computation, which does not usually commute with other operations. Indeed, any irreducible representation of these groups only commutes with operators proportional to the identity, precluding non-trivial encryption. However, for reducible representations of these groups, there can exist non-trivial operators which commute with the entire group. This provides a natural route to constructing a homomorphic encryption scheme which allows the evaluation of operators chosen from some group $G$ on encrypted data, by choosing a representation of the group with a non-trivial centralizer. The set of operations used to perform the encryption must be chosen as a subset of this centralizer. While it is not immediately obvious that encryption operations chosen this way should actually be able to hide information, the BosonSampling scheme presented in [@PhysRevLett.109.150501] provides an example of such an encoding where a non-trivial amount of information is hidden. *Representation of computation —* Our protocol uses $m$ identical bosonic particles; each particle has a spatial degree of freedom limited to a finite number of modes $x =1, \dots , m $ and an internal state $\alpha = 0 , \dots, d-1 $ (see Fig. \[fig1:encoding\]). We design our scheme such that the encryption operations affect only the internal states of the particles, and the computation operations affect only the spatial modes of the particles. Since the input to the computation is supplied using the internal states of the particles, but the computation is performed using manipulation of only spatial modes, it may appear that the input does not affect the computation. This is not the case, however, since the internal states of the particles affect the computation by altering interference between particles. Each particle can be represented as a state $|\alpha\>_x$ created from a vacuum state $|\rm{vac}\>$ via a creation operator $\hat{a}_{x,\alpha}^\dag$, with $\ket{\alpha}_x= \hat{a}_{x,\alpha}^\dag \ket{\text{vac}}$. The bosonic creation operators $\hat a _{x, \alpha} ^\dagger$ and $\hat a _{y, \beta} ^\dagger$ commute, and satisfy the canonical commutation relation $[\hat a_{x,\alpha}, \hat a_{y,\beta}^\dag]=\delta_{\alpha,\beta}\delta_{x,y}$. Note that we make no assumption on the internal states of the $m$ particles: any two particles can have the same or different internal states. Explicitly, the initial state of our scheme is $$\begin{aligned} \hat a ^\dagger_{1, \alpha_1} \dots \hat a ^\dagger_{m, \alpha_m} |\rm{vac}\> = |\alpha_1\>_1 \otimes \dots \otimes |\alpha_m\>_m, \notag\end{aligned}$$ which we denote as $| {\boldsymbol{\alpha}} \>$ for short, where ${\boldsymbol{\alpha}} = (\alpha_1, \dots, \alpha_m) \in \mathbb Z_d^m$ is our plaintext. Since the values of $\alpha_1, \dots , \alpha_m$ are selected from the integers from 0 to $d-1$, there are $d^m$ possible orthogonal input states, spanning a complex Euclidean space $ (\mathbb{C}^d)^{\otimes m}$. The set of computation operations that we are allowed to perform is isomorphic to a unitary group of a large dimension.
The state space of $m$ identical bosons can be expressed as a symmetric subspace of a Hilbert space $\mathcal H_m = \mathcal H_{\rm internal} \otimes \mathcal H_{\rm spatial}$, where $\mathcal H_{\rm internal }$ and $\mathcal H_{\rm spatial }$ denote the spaces for the internal degrees of freedom and the spatial modes of the $m$ identical bosons respectively. Due to the indistinguishability of the bosons, the state of the system is invariant under permutation of particles, and hence the system can only occupy states within the subspace of $\mathcal H_m$ which respects this permutational symmetry. The computation operations, which act only on $\mathcal H_{ \rm spatial}$, must respect this symmetry, and hence the infinitesimal generators of the group of such operations are permutation-invariant. We proceed to elucidate the structure of these infinitesimal generators. Each boson can be in one of $m$ possible spatial modes, and hence there are $m^2$ generalized Pauli operators, each of dimension $m$, that act non-trivially on the spatial degree of freedom of each boson. Let the corresponding Hermitian and non-Hermitian generalized Pauli operators constitute the sets $\mathcal B_i$ and $\mathcal B'_i$ respectively. Let $\mathcal C'_i \subset \mathcal B'_i$ such that $|\mathcal C'_i| =\frac{ |\mathcal B_i'| }{2}$ and every element in $\mathcal B'_i$ is either in or proportional to the Hermitian conjugate of some element in $\mathcal C'_i$. The Hermitian set $\overline {\mathcal B}_i = \mathcal B_i \cup \{ P + P ^\dagger : P \in \mathcal C'_i\} \cup \{ i(P - P ^\dagger) : P \in \mathcal C'_i\}$ then comprises $m^2$ infinitesimal generators of the unitary group operating non-trivially only on the spatial modes of the $i$-th boson. The infinitesimal generators of the group of computation operations are then symmetric sums of the $m$-fold tensor product of elements from $\overline {\mathcal B}_i$, with each such element corresponding to one boson. The number of such symmetric sums is exactly the number of ways to distribute $m$ indistinguishable spatial labels (because of the requirement of permutation-invariance) among $m^2$ distinct elements of $\overline {\mathcal B}_i$, which is ${\binom{m^2 + m - 1}{m }}$. Hence the set of computation operations $G$ that we can perform is isomorphic to a unitary group of dimension at least ${\binom{m^2 + m - 1}{m }} \ge \frac{(m^2)^m}{m!} \ge m^m e^{m-1} / \sqrt{m}$. Contained within $G$ are unitaries generated by the following infinitesimal generators: $$\begin{aligned} \widehat{C}_{x,y}:=\sum^{d-1}_{\alpha=0}\hat{a}_{x,\alpha}^\dag \hat{a}_{y,\alpha}\ ,\end{aligned}$$ for $1 \leq x, y \leq m$. These operators $\widehat C_{x,y}$ are infinitesimal generators for operations that are equivalent to beam-splitters for $x \neq y$, and phase-shifters for $x=y$, in the quantum optics setting. Since we can generate the phase-shifters and the beam-splitters as in [@reck1994experimental], these infinitesimal generators generate a unitary group isomorphic to U$(m)$ acting on the $m$ spatial modes [@RevModPhys.63.375; @RSG99; @iachello], from which the evaluator’s computation operations can be chosen. These are the same elements used to construct the operations of the BosonSampling model. All particles in the BosonSampling model are indistinguishable (have the same internal states); the particles in our model, however, need not be indistinguishable, because each particle can be chosen as a $d$-level system independently.
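The counting argument above is easy to check numerically for small $m$. The following sketch (ours; the helper name is hypothetical) evaluates $\binom{m^2+m-1}{m}$ against the two lower bounds quoted in the text:

```python
from math import comb, factorial, e, sqrt

def generator_count(m):
    """Ways to distribute m indistinguishable spatial labels among
    the m**2 distinct single-particle generators."""
    return comb(m**2 + m - 1, m)

for m in (2, 4, 8):
    exact = generator_count(m)
    bound1 = (m**2)**m / factorial(m)      # (m^2)^m / m!
    bound2 = m**m * e**(m - 1) / sqrt(m)   # m^m e^(m-1) / sqrt(m)
    assert exact >= bound1 >= bound2
    print(m, exact, round(bound1), round(bound2))
```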
If we were to filter out particles with one of the $d$ internal states, we are left with a system that is equivalent to $d-1$ BosonSampling models by linearity of passive linear optics. This is a generalization of the insight used to encrypt BosonSampling instances in [@PhysRevLett.109.150501]. Hence our computation space includes a hard sampling problem as a special case. However, it is currently unknown whether our model allows for encoded universal computation on a space of size exponential in $m$. *Encoding scheme —* For encryption, a unitary operator $\mathcal{E}$ is applied to the internal state of each of the $m$ particles, as depicted in Fig. \[fig1:encoding\]. Since $\mathcal{E}$ acts only on the internal states of the particles, provided that it operates identically on all particles, it commutes with our computation operations, which act trivially on the internal states of the particles. In this section, we give a specific choice of $\mathcal{E}$ which enables non-trivial hiding of information. In what follows, we drop the spatial labels of the particles and make them implicit. We define the computational basis states of each particle to be $|\alpha\>$ for $\alpha=0,\ldots,d-1$, and define the discrete Fourier transform on $\mathbb C^d$ as $$\begin{aligned} F =\sum_{\alpha, \beta=0}^{d-1}\frac{1}{\sqrt{d}}\exp\left(\frac{2\pi i \alpha \beta}{d}\right) |\beta\>\< \alpha|. \notag\end{aligned}$$ Denote the basis states of $\mathbb C^d$ in the Fourier transform basis as $|\alpha_F\>=F |\alpha\>$, and define the trigonometric terms $c_\alpha(k) = \cos(2\pi \alpha k /d)$ and $s_\alpha(k) = \sin(2\pi \alpha k /d)$ for arbitrary integers $\alpha$ and $k$. The generators of the encoding are, for $k=1,\ldots, \lfloor \frac{d}{2}\rfloor$, $$\begin{aligned} \widehat{\Delta}_{k} &= \frac{\widehat{L}^k+\widehat{L}^{-k}}{2} = \sum_{\alpha=0}^{d-1 } c_\alpha (k) |\alpha_F\>\<\alpha_F|, \\ \widehat{\Delta}_{k+\lfloor\frac{d}{2}\rfloor} &= -\frac{\widehat{L}^k-\widehat{L}^{-k}}{2i} = \sum_{\alpha=0}^{ d-1 } s_\alpha (k) |\alpha_F\>\<\alpha_F|, \end{aligned}$$ where $\widehat{L}$ is the cyclic shift operation on the internal state of each particle such that $\widehat{L}\ket{\alpha}=\ket{\alpha+1({\rm mod}\ d)}$. To simplify our calculations, we choose to express our generators in the following basis instead: $$\begin{aligned} \widehat{H}_\ell =& \frac{1}{d} \left( \mathbb{I} - \eta_\ell \widehat \Delta_{\lfloor \frac d 2 \rfloor} + \sum_{k = 1}^{\lfloor \frac d 2 \rfloor} \left( 2 c_\ell ( k ) \widehat \Delta_k + 2 s_\ell ( k ) \widehat \Delta_{k + \lfloor \frac d 2 \rfloor} \right) \right),\end{aligned}$$ where $\eta_\ell=\frac{1+(-1)^d}{2}\cos(\ell\pi)$. It is easy to verify that in the Fourier transform basis, $\widehat H_\ell = |\ell_F\>\<\ell_F|$. Data represented using the logical basis can be encrypted by choosing a key, $\kappa=(\kappa_1,\ldots, \kappa_{d-1})$, where each $\kappa_\ell$ is an integer chosen uniformly at random from the non-negative integers $\{0,\ldots, m\}$, and applying the random unitary operation $\mathcal E$ on each particle, where $$\begin{aligned} \mathcal{E}=\exp\left(\sum_{\ell=1}^{d-1} i\phi_\ell\widehat{H}_\ell\right)\ ,\end{aligned}$$ and $\phi_\ell=\frac{2\pi}{m+1}\kappa_\ell$ are the secret random angles. It is convenient to think of $\mathcal{E}$ as a product of integer powers of $\mathcal{E}_\ell=\exp(i \widehat{H}_\ell \frac{2\pi}{m+1})$, so that $\mathcal{E}=\mathcal{E}_1^{\kappa_1}\ldots \mathcal{E}_{d-1}^{\kappa_{d-1}}$.
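For small $d$ the single-particle encoding can be written down explicitly. The sketch below is our own construction following the formulas above, with hypothetical function names: it builds $F$, applies the phases $e^{i\phi_\ell}$ on the Fourier levels (equivalently, $\mathcal E = \exp(\sum_\ell i \phi_\ell \widehat H_\ell)$ with $\widehat H_\ell = F|\ell\>\<\ell|F^\dagger$), and checks unitarity and the product decomposition $\mathcal{E}=\mathcal{E}_1^{\kappa_1}\cdots \mathcal{E}_{d-1}^{\kappa_{d-1}}$.

```python
import numpy as np

def fourier(d):
    """Discrete Fourier transform on C^d, with F|alpha> = |alpha_F>."""
    a = np.arange(d)
    return np.exp(2j * np.pi * np.outer(a, a) / d) / np.sqrt(d)

def encryption(kappa, m, d):
    """Single-particle encryption E = exp(i sum_l phi_l H_l), where
    H_l = F|l><l|F^dagger and phi_l = 2*pi*kappa_l/(m+1)."""
    F = fourier(d)
    phases = np.ones(d, dtype=complex)
    for ell, k in enumerate(kappa, start=1):   # kappa = (kappa_1, ..., kappa_{d-1})
        phases[ell] = np.exp(2j * np.pi * k / (m + 1))
    return F @ np.diag(phases) @ F.conj().T

def encryption_factor(ell, power, m, d):
    """E_ell^power, with E_ell = exp(i H_ell 2*pi/(m+1))."""
    kappa = [0] * (d - 1)
    kappa[ell - 1] = power
    return encryption(kappa, m, d)

m, d = 5, 3
rng = np.random.default_rng(seed=1)
kappa = tuple(int(k) for k in rng.integers(0, m + 1, size=d - 1))
E = encryption(kappa, m, d)
assert np.allclose(E @ E.conj().T, np.eye(d))   # E is unitary

# E factors as E_1^{kappa_1} ... E_{d-1}^{kappa_{d-1}}:
prod = np.eye(d, dtype=complex)
for ell in range(1, d):
    prod = prod @ encryption_factor(ell, kappa[ell - 1], m, d)
assert np.allclose(E, prod)
```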
![This figure shows Alice’s encoding scheme for $m$ bosonic particles, each in one of $d$ internal states. Each particle has a spatial degree of freedom labeled by $x$. The encoding operation $\mathcal{E}$ is effected across the particles in a tensor product way. The evaluation operation is taken from the group $G$, which acts non-trivially only on the spatial modes of the $m$ bosons, and can put multiple bosons in a single spatial mode. Post-evaluation, the encryption is removed via the inverse encoding operation to reveal the evaluated plaintext. []{data-label="fig1:encoding"}](qhe_fig1_v3){width="6cm"} After the encoding, computation can still be performed on the encrypted data using the operations described in the previous section. However, for an adversary that does not have access to $\kappa$, the encoded information is obscured. Once the evaluation is completed, the output can be decrypted by applying $\mathcal{E}^\dagger$ to every particle to yield the processed plaintext. Surprisingly, with this simple encryption-decryption process, *any* quantum computation chosen from $G$ which is performed on the encrypted state yields the same result when decrypted as if it were performed on the unencrypted state. The result is an encryption scheme that admits privacy homomorphisms for operations chosen from $G$. Our scheme works because the encryption operators affect only the internal states of the particles at each site, while the computation leaves the internal states of every particle invariant. In the particular encryption scheme we have chosen, the encryption operators generate an abelian group $A$ that acts trivially on the spatial modes. Hence the evaluator can perform operations in the tensor product of the group $G$ and the abelian group $A$. *Hidden information —* Here we show that our quantum homomorphic scheme can hide a number of bits proportional to $m$. Without knowledge of the key, the ensemble is $\{\hat{\rho}_{{\boldsymbol{\alpha}}}, p_{{\boldsymbol{\alpha}}}\}$, where ${\boldsymbol{\alpha}}= (\alpha_1,\alpha_2,\ldots, \alpha_m)$ denotes the plaintext, and the corresponding encrypted state is $$\begin{aligned} \hat{\rho}_{{\boldsymbol{\alpha}}}=\frac{1}{(m+1)^{d-1}} \sum_{\kappa_1,\ldots ,\kappa_{d-1}=0}^{m} \mathcal{E}^{\otimes m} \ket{{\boldsymbol{\alpha}}} {\bra{{\boldsymbol{\alpha}}}} (\mathcal{E}^\dag)^{\otimes m} \ .\end{aligned}$$ It is illuminating to look at the ensemble in the Fourier transform basis, since there the encoding is diagonal. We can write $\hat{\rho}_{{\boldsymbol{\alpha}}}$ in the form $\sum_{{{\boldsymbol{\beta}}},{{\boldsymbol{\beta}}'}\in\mathbb{Z}_d^m} c_{{\boldsymbol{\beta}}, {\boldsymbol{\beta}}'} \ket{{\boldsymbol{\beta}}} {\bra{{\boldsymbol{\beta}}'}}$, where the non-zero coefficients are those for which the number of $\ell$’s in ${{\boldsymbol{\beta}}}$ is equal to the number of $\ell$’s in ${{\boldsymbol{\beta}}'}$ for all $\ell=1,\ldots, d-1$. Let $\mathcal{F}(\hat{O})$ denote $(F^\dag)^{\otimes m}\hat{O} F^{\otimes m}$.
Then $$\begin{aligned} \label{eq:part} \mathcal{F}(\hat{\rho}_{{\boldsymbol{\alpha}}}) =& \frac{1}{d^m}\sum_{{{\boldsymbol{\beta}}},{{\boldsymbol{\beta}}'}\in\mathbb{Z}_d^m} e^{-\frac{2\pi i {{\boldsymbol{\alpha}}}\cdot({{\boldsymbol{\beta}}-{\boldsymbol{\beta}}'})}{d}} \ket{{\boldsymbol{\beta}}} {\bra{{\boldsymbol{\beta}}'}} \times \notag\\ &\quad\prod_{\ell=0}^{d-1}\delta({\rm wt}_{\ell}({{\boldsymbol{\beta}}})-{\rm wt}_{\ell}({{\boldsymbol{\beta}}'}))\ ,\end{aligned}$$ where ${\rm wt}_{\ell}({{\boldsymbol{\beta}}})$ is the Lee weight, which counts the number of times $\ell$ appears in the vector ${{\boldsymbol{\beta}}}$. The non-zero terms in eq. (\[eq:part\]) can be partitioned into sets labeled by integer partitions of $m$. Let $P_{m,d}$ be the set of ordered integer partitions of $m$ into $d$ (possibly empty) parts, and let $\lambda$ be a partition in $P_{m,d}$. In eq. (\[eq:part\]), strings for which all Lee weights are equal belong to the same partition $\lambda$. The entries in $\lambda=(\lambda_0,\lambda_1,\ldots, \lambda_{d-1})$ give the number of times a particular element appears in ${{\boldsymbol{\beta}}}$. With this notation, we get $$\begin{aligned} \mathcal{F}(\hat{\rho}_{{\boldsymbol{\alpha}}}) =\frac{1}{d^m}\sum_{\lambda\in P_{m,d}} R_\lambda \ket{\Psi_\lambda^{{\boldsymbol{\alpha}}}}\bra{\Psi_\lambda^{{\boldsymbol{\alpha}}}}\ ,\end{aligned}$$ where $R_\lambda=\binom{m}{\lambda_0, \lambda_1,\ldots , \lambda_{d-1}}$ is the multinomial coefficient, and $$\begin{aligned} \ket{\Psi_\lambda^{{\boldsymbol{\alpha}}}}=\frac{1}{\sqrt{R_\lambda}}\sum_{\substack{{\boldsymbol{\beta}}: {\rm wt}_j({\boldsymbol{\beta}})=\lambda_j\\ j=0,\ldots, d-1}}e^{-\frac{2\pi i }{d}{{\boldsymbol{\alpha}}}\cdot{{\boldsymbol{\beta}}}} \ket{{\boldsymbol{\beta}}}\ ,\end{aligned}$$ which is invariant under permutation of the particles. **Theorem 1:** For all probability distributions $p_{{\boldsymbol{\alpha}}}$ over plaintexts ${\boldsymbol{\alpha}}$, the accessible information of the encoding, without knowledge of the key, is upper bounded by $\log_2 m!$ bits when Alice sends $m$ $d$-level particles. Proof: First, we observe that the elements of $\{\ket{\alpha}, \alpha=0,\ldots, d-1\}$ are related by powers of $\widehat{L}$. Since $\widehat{L}$ is unitary and commutes with the encoding $\mathcal{E}$, it must be that $S(\hat{\rho}_{{\boldsymbol{\alpha}}})$ is the same for all ${{\boldsymbol{\alpha}}}$. For simplicity, we analyze $S(\hat{\rho}_{\bf 0})$: $$\begin{aligned} S(\hat{\rho}_{\bf 0})&=S(\mathcal{F}(\hat{\rho}_{\bf 0}))\nonumber\\ &= S\left(\sum_{\lambda\in P_{m,d}} \frac{ R_\lambda}{d^m} \ket{\Psi_\lambda^{{\bf 0}}}\bra{\Psi_\lambda^{{\bf 0}}}\right) \nonumber \\ &=H\left(\left\{\frac{R_\lambda}{d^m}\right\}\right)+\sum_{\lambda\in P_{m,d}}\frac{R_\lambda}{d^m} S\left(\ket{\Psi_\lambda^{{\bf 0}}}\bra{\Psi_\lambda^{{\bf 0}}}\right) \nonumber\\ &= H\left(\left\{\frac{R_\lambda}{d^m}\right\}\right) \ ,\label{rank1}\end{aligned}$$ where we have used the orthogonality of the different partitions labelled by $\lambda$ in the third equality [@NielsenChuang], and the fact that $\ket{\Psi_\lambda^{\bf 0}}\bra{\Psi_\lambda^{\bf 0}}$ has rank one in the final equality.
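The quantities appearing in Theorem 1 can be verified directly for small systems. The sketch below (ours; helper names are hypothetical) enumerates the ordered partitions $\lambda \in P_{m,d}$, checks $\sum_\lambda R_\lambda = d^m$, and computes $S(\hat{\rho}_{{\boldsymbol{\alpha}}}) = H(\{R_\lambda/d^m\})$ together with the bound $\log_2 m!$:

```python
import itertools, math

def ordered_partitions(m, d):
    """Tuples (lambda_0, ..., lambda_{d-1}) of non-negative integers summing to m."""
    for bars in itertools.combinations(range(m + d - 1), d - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(m + d - 2 - prev)
        yield tuple(parts)

def multinomial(m, lam):
    """R_lambda = m! / (lambda_0! ... lambda_{d-1}!)."""
    r = math.factorial(m)
    for l in lam:
        r //= math.factorial(l)
    return r

def state_entropy_bits(m, d):
    """S(rho_alpha) = H({R_lambda / d^m}), independent of the plaintext alpha."""
    total = d ** m
    probs = [multinomial(m, lam) / total for lam in ordered_partitions(m, d)]
    assert abs(sum(probs) - 1) < 1e-12    # sum_lambda R_lambda = d^m
    return -sum(p * math.log2(p) for p in probs)

m, d = 4, 3
chi_bound = math.log2(math.factorial(m))   # Theorem 1: chi <= log2(m!)
gap = m * math.log2(d) - chi_bound         # lower bound on the hidden bits
print(state_entropy_bits(m, d), chi_bound, gap)
```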
Similar arguments can be made for $\hat{\rho}=\sum_{{\boldsymbol{\alpha}}}p_{{\boldsymbol{\alpha}}}\hat{\rho}_{{\boldsymbol{\alpha}}}$: $$\begin{aligned} \hspace*{-0.15cm}S(\hat{\rho}) &= S\left(\sum_{{\boldsymbol{\alpha}} \in \mathbb{Z}_d^m} p_{{\boldsymbol{\alpha}}} \sum_{\lambda\in P_{m,d}} \frac{R_\lambda}{d^m} \ket{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \bra{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \right)\nonumber \\ &\le S\left(\sum_{{\boldsymbol{\alpha}} \in \mathbb{Z}_d^m} \frac 1 {d^m} \sum_{\lambda\in P_{m,d}} \frac{R_\lambda}{d^m} \ket{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \bra{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \right)\nonumber \\ &= H\left(\left\{ \frac{R_\lambda}{d^m}\right\}\right) + \sum_{\lambda} \frac{R_\lambda}{d^m} S\left(\sum_{{{\boldsymbol{\alpha}}}} \frac 1 {d^m} \ket{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \bra{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \right).\label{ortho2}\end{aligned}$$ The inequality above holds because applying a channel that randomizes over ${\boldsymbol{\alpha}}$, by applying a random power of $\widehat L$ to each particle, symmetrizes the probability distribution $p_{{\boldsymbol{\alpha}}}$ to the uniform distribution but cannot decrease entropy. The second term of eq. (\[ortho2\]) obeys the identity $$\begin{aligned} &\frac{1}{d^m}\sum_{{{\boldsymbol{\alpha}}} \in \mathbb{Z}_d^m}\ket{\Psi_\lambda^{{\boldsymbol{\alpha}}}} \bra{\Psi_\lambda^{{\boldsymbol{\alpha}}}} =\frac{1}{R_\lambda} \sum_{\substack{{\boldsymbol{\beta}}: {\rm wt}_{j}({{\boldsymbol{\beta}}}) =\lambda_j\\ j=0,\ldots, d-1}} \ket{{\boldsymbol{\beta}}} {\bra{{\boldsymbol{\beta}}}} \ ,\label{part}\end{aligned}$$ and is hence a maximally mixed state on the partition labeled by $\lambda$ with rank $R_\lambda$, so its entropy contribution is at most $\max_\lambda\log_2 R_\lambda \leq \log_2 m!$. Using these facts and putting eqs. (\[rank1\])-(\[part\]) together, we obtain a bound on the Holevo quantity of $$\begin{aligned} \label{ub} \chi(\left \{\hat{\rho}_{{\boldsymbol{\alpha}}},p_{{\boldsymbol{\alpha}}}\right \}) &\leq \log_2 m! \,\end{aligned}$$ which in turn bounds the accessible information. When $m$ is large, $$\begin{aligned} \chi(\left \{\hat{\rho}_{{\boldsymbol{\alpha}}},p_{{\boldsymbol{\alpha}}}\right \}) &\leq m\log_2 m -\frac{1}{\log 2} m +\mathcal{O}(\log (m)) \ ,\end{aligned}$$ and the gap between the encoded information and the information accessible to an adversary is at least $$\begin{aligned} \Gamma &= m\log_2 d-\chi(\{\hat{\rho}_{{\boldsymbol{\alpha}}},p_{{\boldsymbol{\alpha}}}\})\\ &\approx m\log_2 (d/m)+m(\log 2)^{-1}\ .\end{aligned}$$ Thus if $d=m$ and $m\log_2 m$ bits are encoded, this gap scales at least linearly in $m$. Moreover, if $d = m^{1/r}$ for $r$ in the open unit interval, the gap asymptotically approaches a constant fraction $1-r$ of the $m\log_2 d$ encoded bits. This is a significantly stronger security guarantee than that offered by [@PhysRevLett.109.150501], while at the same time significantly extending the functionality by allowing computations beyond BosonSampling to be performed on the encrypted data, thus bringing us closer to the goal of achieving a quantum fully homomorphic encryption scheme. As our bound in eq. (\[ub\]) is independent of the probability distribution used for the encoding, the bound on the accessible information holds even if the *a priori* distribution on the plaintext is not uniform. *Acknowledgements —* We thank H. de Guise for helpful discussions, and Y. Li for useful comments. This material is based on research supported in part by the Singapore National Research Foundation under NRF Award No. NRF-NRFF2013-01. LC was partially supported by the Fundamental Research Funds for the Central Universities.
In response to students’ changing literacy practices within the digital age, in contrast to the traditional expectations of academic print literacy, many first-year writing programs have rejected expressivist approaches to teaching academic reading and writing. Instead, these programs tend to emphasize rhetorical analyses of written and visual texts, especially in the first course of an academic writing sequence. As economist Robert Reich pointed out, our global knowledge economy requires this focus on analysis. He identified the need for symbolic analysts who “wield equations, formulae, analogies, models, and construct categories and metaphors in order to create possibilities for reinterpreting and rearranging” the deluge of textual and visual data (quoted in Johnson-Eilola, 2004, p. 229). Yet too often conventional rhetorical analysis relies on having students consume academic texts (or public criticism in the form of op-ed pages) and merely reproduce their discourse and generic forms. Rarely do these approaches aim to mediate the culture and languages from students’ communities as a major pedagogical goal. As a result, students most often remain alienated from an academic identity and purpose in these courses. As a graduate professor on the periphery of our official writing program, I hear from frustrated new graduate student teachers who wisely come to identify this problem with the program’s suggested assignments. The first course in our program focuses more on analyzing advertisements and commentary pieces. Yet the program’s most inexperienced teachers, unequipped with a more expansive pedagogical toolkit, inevitably revert to teaching conventional academic forms rather than creative critical inquiry. As an alternative to these conventions of textual analysis, another, smaller group of teacher-scholars has stressed rhetorical approaches through multigenre projects. As Tom Romano, Nancy Mack, Cheryl Johnson and Jayne Moneysmith, and Robert Davis and Mark Shadle have shown, multigenre pedagogy can definitely foster students’ creative inquiry. While I admire much in these multigenre approaches, particularly the work of Romano and Mack, they tend to use genres to help students understand the complexities of research writing (Romano, Mack, Davis and Shadle) or argumentation (Johnson and Moneysmith). In contrast, I wanted to draw on genre pedagogy to focus on analysis, to meet our writing program’s outcomes for the first-semester writing course in ways that might be more internally persuasive to our students. In my upper-level undergraduate rhetoric course, students learned to analyze discourse by rewriting political commentaries in other genres and then analyzing the rhetorical effects of their choices (see Seitz, 2011, “Mocking Discourse”). Now I wanted to create a similar approach to analysis that could motivate and engage most first-year writing students. In this chapter, I will show how the genre writings project in my first-year writing course, supported by principles of place-based education and theories of genre as textual sites of social action, helps create a more inductive approach to rhetorical analysis focused on students’ languages and values. In contrast to conventional rhetorical analysis of a text, the students analyze the rhetorical choices they make when they compose in diverse genres that respond to the rhetorical situations of local place and community.
I believe this approach can help open up a dialectical space through a process of “purposeful mediation” between academic rhetoric and collective rhetorics of local place. Through this approach, students often invest more in the process of their analysis, analyzing what they have accomplished rhetorically through their genre writings.

Goals, Assignments, and Interviews from a Place-Based Writing Course

To better show my motives for the rhetorical moves within this project, what follows are the key goals of this course, which I designed in accordance with a place-based genre writing pedagogy, an overview of the assignment sequences, and a look at genre connections drawn from interviews.

Course Goals

Students were expected to foster and articulate critical analyses of everyday rhetoric within social and historical contexts. They were also expected to gain awareness of how any place could be analyzed in relation to three conditions: community bonds, local history, and global influences. And I wanted students to understand how written, oral, and visual genres help enact, respond to, and complicate these three connections.

Sequence of Assignments

Throughout the course students were required to research and write an “Interview Analysis Paper.” I wanted them to identify connections between place and community, and develop a genre writing for each of these connections (i.e., community bonds, local history, and global influences). Finally, they were to analyze the rhetorical situations of their genre writings and their connections. In my course, students conducted ethnographic interviews about how a place or community has responded to change. The students’ choice of place could be a neighborhood, town, or workplace. I borrowed this emphasis on change from Julie Lindquist’s writing course on place, which helped inspire my own. By emphasizing change, the interviews tended to focus on how the interviewee drew upon the collective rhetorics of the place and community to respond to physical and historical forces as well as the changing rhetorical influences on the place and people. These forces and influences might come from outside groups and institutions, such as the decision to move NCR (National Cash Register, a home industry in Dayton, Ohio) to Atlanta. Or they might have come from smaller groups inside the larger community, such as efforts of rural towns to revitalize their downtowns during the recession in a global economy. But the project also allowed students to demonstrate when the place and community had not changed, and how, why, and to what effect. In this manner the project left open the possibility of social affirmation and critique (see Seitz, 2004). We cannot assume before ethnographic research how the interviewee and others in the community view change and stability within this place. Through the work of the interview analysis paper, students then locate three connections from their interviews that respectively address community bonds, local history, and global influences related to this place or community.

Genre Connections Drawn from Interviews

With regard to community bonds, some of the possible connections could be specific actions people conducted in order to create ties or social networks; specific common traditions, values, and beliefs that brought individuals together; or issues that related directly to the well-being of the local place and its residents.
As for the local history of the place and community, these might be major events that took place in the community during a specific period and resulted in some change. These could be political, economic, newsworthy (at least, in the eyes of the community members), or historical—that is, referencing the history of particular groups within the community. And where global influence was concerned (whether considered from state, national, or international perspectives), students were encouraged to explore the political, economic, technological, or cultural influences on the place and community. For instance, Chelsea Presson interviewed her uncle, one of 15 remaining employees at NCR (which he now describes as a ghost office). From her interviews and analysis paper, she identified the community bonds of strong employee relationships that NCR once nurtured through company programs and abandoned over ten years before the decision to move the company. For the local history, she emphasized the deterioration of NCR’s long-standing support of Dayton’s communities and small businesses. And for the global influences, she focused on the impact of the national economy that acted as the backdrop for NCR’s decision to move. This analysis encourages a historical and global perspective toward the local place. Moreover, rather than the course providing pre-packaged issues, most students come to see that any place or institution is both sustained and impacted by these three connections. Then for each connection they have identified from their analysis, the students write a text in a non-academic genre that responds to a local rhetorical situation they learned about in their interview research. This approach helps develop greater rhetorical facility (one of the main Writing Program Administrators’ outcomes), expanding beyond academic genres in the larger knowledge economy. I provide the students with an extensive list of possible genres to choose from, but also suggest they consider what genres community members would more likely write, read, and watch, as well as what genres outsiders (state, national, international) whose actions affect this place would write, read, and watch. Through in-class activities, I get them to consider how their genre choices can help show something about each of their three connections. In this way, the activity gets students thinking about how genres enact the social roles and situated action tied to their three genre connections. Students also need to consider the rhetorical situation (considerations of audience, purpose, stance, genre, and medium/design), as defined by Richard Bullock’s Norton Field Guide to Writing (2009), for each genre connection. When they must consider the fit of the genre choice to the rhetorical situation, they begin to analyze the affordances of each possible genre choice. So for community bonds, Chelsea composed an email dialogue between a surviving Dayton NCR employee and one who moved to the new Atlanta office, elaborating in detail on their past exploits in better company times. For the local history, she took on the voice of a Dayton restaurant owner in the city paper, addressing concerns of small business bankruptcies in Dayton since the departure of NCR and General Motors (supported by data drawn from secondary sources).
And for the global influences connection, she took on the sunny, authoritative tone of NCR CEO Bill Nuti in a slickly designed company newsletter assuring employees that the economy was turning around compared to previous recessions. The students also had to incorporate secondary sources in the text and footnotes of their genre writings to help them relate the local situations they enacted to similar concerns of other communities (or workplaces) and larger issues at the state, national, or international level. For teaching strategies of incorporating research from secondary sources in genre writings, I have learned much from Nancy Mack’s scholarship and pedagogy. Finally, as a metacognitive reflection, the students analyze and articulate all these rhetorical choices in an extensive cover letter. When I designed this course, I knew I wanted students to address place as a generative theme, but I hadn’t read much on theories of place-based pedagogy, which is mostly a rural K-12 movement. Now I look back at the students’ projects over four years of classes and see how much these theories support my approach.

Premises of a Critical Place-Based Writing Pedagogy

Illuminate the concept of intradependence (of place, community, and self). —Paul Theobald

Support sustainability of civic life at local levels (not migratory culture and rhetoric). —Robert Brooke

Examine, celebrate, and critique the literacy practices that create local knowledge, culture, and public memory. —Charlotte Hogg

Foreground connections to global, national, and regional development trends that impact local places. —David Gruenewald

Robert Brooke has asserted that pedagogical approaches of place-based education share common ground with the tradition of expressivist pedagogies that explore self and society (2003). As defined and articulated by Paul Theobald, place-based education should illuminate the concept of intradependence, the connected relationship of place, community, and self. To seek intradependence means to “exist by virtue of necessary relations ‘within a place’” (quoted in Brooke, 2003, p. 7). Brooke claims “Theobald wants an education that immerses learners into the life of human communities while they are still in school, thereby teaching the practice of civic involvement” (2003, p. 6). Most of the students who work on this project in my class begin to practice forms of intradependence when they choose to interview their grandparents about the losses of a viable, walkable downtown life; their parents about the relationship of their workplaces to their home communities; people with institutional roles in the town, such as teachers, coaches, or ministers, about the local effects of demographic shifts; or people in professions that motivated some students, such as law enforcement and nursing, where they learn about the positive and negative impact of new technologies on employee interaction in these workplaces. Brooke rightly maintains that writing classes which emphasize rhetorical forms and argumentative strategies regardless of local cultures and community issues encourage a migratory culture that disconnects the self from place and does not support the sustainability of civic life at local levels. “As educators,” Brooke writes, “all of us are implicated in the destruction of small communities” (2006, p. 147).
Most American education now serves to create an “identity not linked to a specific place, community, or region but instead to the identity of the skilled laborer, equipped with the general cultural and disciplinary knowledge that will enable the person to work wherever those skills are required”; paraphrasing the naturalist writer Wallace Stegner, Brooke stresses how this kind of migratory living can lead to “harsh exploitation of natural and cultural resources—if you don’t plan to live somewhere more than a decade, it doesn’t matter in what condition you leave it in” (2003, p. 2). Instead, Brooke, along with other place-based educators, calls for imagining an education that fosters a regional identity of “civic leadership, knowledge of heritage, and stewardship” (2006, p. 153). “It is at the local level where we are most able to act, and at the local level where we are most able to affect and improve community” (Brooke, 2003, p. 4). While the place-based genre writing project in my class doesn’t lead to immediate civic action, it does make students think more about establishing a regional, rather than solely migratory, identity within their acts of writing. But as Charlotte Hogg’s scholarship on rural literacies suggests, along with that of her colleagues Kim Donehower and Eileen Schell, place-based education needs to critique as well as celebrate local narratives of place. Hogg’s research on Nebraskan women’s roles as informal town historians highlights alternative narratives in contrast to the more patriarchal models of the agrarian movement, which emphasize the self and the land and tend to neglect the everyday practices of towns that sustain local community. Hogg reminds us the goal is better models of cultural sustainability rather than preservation of a particular version of the past: “local narratives are not static artifacts for preservation, but openings for delving into questions of power and representation” (2007, p. 131). Moreover, the project in my course supports David Gruenewald’s call for a teaching approach that is “attuned to the particularities of where people actually live, and that is connected to global development trends that impact local places” (quoted in Hogg, 2007, p. 129). In the course of this project, the interview analysis activities help most students move toward the kind of analytical complexity suggested by Hogg, Gruenewald, and other scholars of critical pedagogies of place. The scaffolding of the interview analysis activities, along with other analysis activities using readings and movie clips, encourages students to discern social patterns and tensions from their interviews related to a community’s cultural values and responses to change. For example, Zachary Rapp comes from a working-class town in southern Ohio. As a proud high school athlete, he wanted to interview his basketball coach. In the course of his analysis, Zach zeroed in on an unexpected tension within the school and town community. Zach’s coach explained specific ways this working-class community deeply supported the athletics programs as a source of community pride. But he also referred to the teachers’ frustration over poor funding and repeated failed levies. In his interview analysis paper and then his genre writings, Zach had to wrestle with another side of this multifaceted story that he had not encountered before.
As he began to question the commitment of his neighbors to the full education of the town’s children, he certainly considered issues of the town’s greater sustainability and the larger national issue of funding for education. But he also recognized, and wanted to explain, the daily sacrifices that families made for the children’s athletics, and he wanted to celebrate that story, especially in contrast to the attitude of outsiders that his town was a wasted, dangerous place, a reputation he claimed was part of its local history from the viewpoint of wealthier neighboring towns. In this regard, Zach took up the dialectical positions that Charlotte Hogg encourages—to both celebrate and critique the literacy practices that make up the public memory of small town life. While the interview analysis paper gave Zach a genre form to address the significance of both perspectives within an academic frame, the genre writings gave him the opportunity to isolate and emphasize the voices and genres that both supported and challenged the cultural values that made up these aspects of the town’s civic life. So Zach writes in the voice of an injured local college athlete in a college application essay to show the community bonds forged at the town football games. He addresses the local history of rumors perpetuated by neighboring towns through a series of email exchanges in which a prospective resident asks a longtime volunteer booster about the town’s darker reputation. The booster’s replies speak to the town’s working-class pride. But Zach also writes in the voice of a newspaper editor from a neighboring city paper who urges this local community to put as much emphasis on academic funding in their public schools as they do on athletics. So when students’ rhetorical choices of genres (and their purposes and audiences) derive from the ethnographic analysis of these three connections to a local place or community, the students tend to better understand genre as situated social action. As with the place-based pedagogy, I had not read deeply into rhetorical theories of genre when I designed the project. Now I see how these theories support a view of students inhabiting roles and situations they have researched first hand from their interviews.

Premises of Rhetorically-Based Theories of Genre

Genres serve as keys to understanding how to participate in the actions of a community. —Carolyn R. Miller

The work of Carolyn Miller, Charles Bazerman, Catherine Schryer, Amy Devitt, and Anis Bawarshi, among others, reminds us that genres work to perform situated social actions and relations, enact social roles, frame social realities, and mediate textual and social ways of knowing and being. When we learn genres, we learn to inhabit “interactionally produced worlds” and social relationships, recognize situations in particular ways, and orient ourselves to particular goals, values, and assumptions. Apart from the genre pedagogy created by Devitt, Bawarshi, and Reiff (2004), many teachers emphasize genre as forms, rather than situating the writing of various non-academic genres within the study of place and community. Rhetorical genre theorists instead view genres, such as a community newsletter or a company brochure, as “sites of social and ideological action” (Schryer, 1993, p. 208).
As Bawarshi sums up the importance of genres, “they embody and help us enact social motives, which we negotiate in relation to our individual motives; they are dynamically tied to the situations of their use; and they help coordinate the performance of social realities, interactions and identities” (2004, p. 77). Devitt, Bawarshi, and Reiff have stated that the term “discourse community” and the relationship of subjectivity to discourse community remain too vague. Instead, along with Miller, Bazerman, and others, they argue that it is the process of genres (within various modalities) that “organize and generate discourse communities” (2003, p. 550) and shape strategies of social action within these rhetorical situations. In my course project, the interview process and the three connections help to physically situate the cognition required to know what genres might be appropriate at what points in time and space within the local rhetorical situation. Because students encounter the use of various written genres in their interviews and in actual community contexts, they are exposed to genres not only as individual forms but as what rhetorical genre theorists call systems of genre sets. As a result, they must consider what affordances particular genres might offer within the range of appropriate genres in a given system that can best demonstrate the perspective of each chosen connection. As Anne Freadman and other rhetorical genre theorists have argued, the acquisition of genre knowledge includes “uptake”—knowing which genre to use based upon the rhetorical moves of earlier genres in a given system. While my first-year writing students do not explicitly study this genre knowledge or truly embed themselves in the practices of a community’s genre systems in ways that lead to full acquisition of genre knowledge, through this project they are more likely to see genres as more than just forms and conventions, and as the “lived textualities” that enact relationships and power relations within community bonds, local histories, and global influences. Katie Shroyer came to understand these intersections of power relations and genre knowledge over the course of her project. Katie interviewed her mother, a pastor of a local branch of the Christian Family Fellowship Ministry. To show the connection of local history, Katie composed a eulogy for John Shroyer, her grandfather, the founder of the local ministry. In this text, the speaker recounts the specific ways John Shroyer helped build the social environment of the congregation over forty years. What strikes me here is how much her purpose resembles the rhetorical view of epideictic rhetoric—that is, the speech itself is meant to develop identification and persuasion to the values of the larger congregation. To address the connection of community bonds, she took on the voice of her mother in the Ministry newsletter, which is distributed to numerous communities. The article addresses the growing movement advocating for home fellowships in small groups compared to the greater anonymity of megachurch models. In her cover letter, Katie claims that this particular genre of the newsletter serves “as a bonding agent” to these different communities, developing a series of “mini support systems.” To examine global influences, Katie refers to a conflict between her mother and the leader of the Fellowship within a semi-formal business letter. As the church has expanded since the days of her grandfather, it has pursued international outreach.
To encourage this national and global outreach, the leader has encouraged the production and distribution of service teachings on CDs. Katie’s mother repeatedly challenges what she sees as the impersonality of this approach and instead argues for the necessity of physical interpersonal relations in fellowship. Taking on the role of a Congregationalist in Bristol, England, Katie writes a letter to persuade Pastor Shroyer, her mother, to visit their fellowship, so they can gain much more than they can from her CDs. Now, to some composition scholars, this may not seem a strong critical rhetorical move, but to me it does suggest efforts to consider the sustainability of the fellowship in the midst of global and technological change. I would also suggest that because the project allowed Katie to demonstrate the strengths of this fellowship community, she was probably more willing to reveal dissent in the church with regard to change as well. Moreover, Katie clearly chooses these genres, in her words, “to serve as keys to participate in the actions of a community,” and she analyzes these rhetorical choices very well in her cover letter. Finally, I believe this teaching approach follows in an expressivist tradition because it’s about mediating identity and addressing places as communities, however flawed, and recognizing a range of agency within these communities. This pedagogy also draws on assumptions of critical teaching in that students must examine power relations within local communities and their relations to larger global influences. Genre writings can mediate academic and public rhetorics tied to place and community, thereby creating a dialectical space. The students’ interview papers mediated an academic analysis with the interviewee’s voice, which spoke from a collective rhetoric of place and community often tied to the student’s sense of self. The students’ genre writings translated academic insights of cultural, historical, and socio-economic analysis into the genres and voices of public rhetorics, often situated in place and community. And finally, their cover letters translated the implicit rhetorical analysis behind the creation of their genre writings into explicit demonstrations of analytical choices and use of secondary sources. In these ways, genre writings can act as a mediating force between the cultures and communities outside and within academe as students analyze place and change from academic perspectives, and then re-integrate those perspectives into the language and genres of public communities. In this sense, my use of the term “translate” is only partially accurate because when we move between these public and academic rhetorics, there is no direct correspondence of meanings—just as when I plug a French phrase into a digital translator, I will not receive an exact English equivalent. So while I do see the process as a kind of partially accurate set of translations, the term mediation suggests the more dynamic fluidity that often takes place. In the process of this project, students gained experience mediating identities, communities, genres, and rhetorical assumptions and strategies—rhetorical experience that can hopefully serve them well in their communications outside the classroom, in their dealings with academic writing, and possibly well into their future lives.

References

Bawarshi, A., & Reiff, M. J. (2010). Genre: An introduction to history, theory, research, and pedagogy. West Lafayette, IN: Parlor Press and Fort Collins, CO: The WAC Clearinghouse. Bazerman, C.
(1997). The life of genre, the life in the classroom. In W. Bishop & H. Ostrom (Eds.), Genre and writing: Issues, arguments, alternatives (pp. 19-26). Portsmouth, NH: Boynton/Cook. Brooke, R. (2006). Migratory and regional identity. In B. Williams (Ed.), Identity papers: Literacy and power in higher education (pp. 141-153). Logan, UT: Utah State University Press. Brooke, R. (2003). Introduction. In R. Brooke (Ed.), Rural voices: Place-conscious education and the teaching of writing (pp. 1-20). New York: Teachers College Press. Bullock, R. (2009). The Norton field guide to writing. New York: W. W. Norton. Davis, R. L., & Shadle, M. (2007). Teaching multiwriting: Researching and composing with multiple genres, media, disciplines, and cultures. Carbondale, IL: Southern Illinois University Press. Devitt, A., Bawarshi, A., & Reiff, M. J. (2004). Scenes of writing: Strategies for composing with genres. New York: Longman. Devitt, A., Bawarshi, A., & Reiff, M. J. (2003). Materiality and genre in the study of discourse communities. College English, 65(5), 541-558. Freadman, A. (2002). Uptake. In R. Coe, L. Lingard, & T. Teslenko (Eds.), The rhetoric and ideology of genre: Strategies of stability and change (pp. 39-53). Cresskill, NJ: Hampton Press. Gruenewald, D. (2003). The best of both worlds: A critical pedagogy of place. Educational Researcher, 32(4), 3-12. Hogg, C. (2007). Beyond agrarianism: Toward a critical pedagogy of place. In K. Donehower, C. Hogg, & E. Schell (Eds.), Rural literacies (pp. 120-154). Carbondale, IL: Southern Illinois University Press. Johnson-Eilola, J. (2004). The database and the essay: Understanding composition as articulation. In A. Wysocki (Ed.), Writing new media: Theory and applications for expanding the teaching of composition (pp. 199-236). Logan, UT: Utah State University Press. Johnson, C., & Moneysmith, J. (2005). Multiple genres, multiple voices: Teaching argument in composition and literature. Portsmouth, NH: Boynton/Cook. Lindquist, J. (2006). ATL 150: Mapping community spaces. Retrieved from Michigan State University Web site: https://www.koofers.com/michigan-state-university-msu/atl/150/ Mack, N. (2006). Ethical representation of working-class lives: Multiple genres, voices, and identities. Pedagogy, 6, 53-78. Mack, N. (2002). The ins, outs, and in-betweens of multigenre writing. English Journal, 92(2), 91-98. Miller, C. R. (1994). Genre as social action. In A. Freedman & P. Medway (Eds.), Genre and the new rhetoric (pp. 23-42). Bristol, PA: Taylor and Francis. Presson, C. (2009). Genre writing project. Unpublished manuscript, Department of English, Wright State University, Fairborn, Ohio. Rapp, Z. (2010). Genre writing project. Unpublished manuscript, Department of English, Wright State University, Fairborn, Ohio. Romano, T. (2007). Blending genre, altering style: Writing multigenre papers. Portsmouth, NH: Boynton/Cook. Schryer, C. (1993). Records as genre. Written Communication, 10, 200-234. Seitz, D. (2011). Mocking discourse: Parody as pedagogy. Pedagogy, 11(2), 371-394. Seitz, D. (2004). Who can afford critical consciousness?: Practicing a pedagogy of humility. Cresskill, NJ: Hampton Press. Shroyer, K. (2010). Genre writing project. Unpublished manuscript, Department of English, Wright State University, Fairborn, Ohio. Theobald, P. (1997). Teaching the commons: Place, pride, and the renewal of community. Boulder, CO: Westview Press.
Before the Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Committee on Homeland Security and Governmental Affairs, U.S. Senate, Washington, D.C. Chairman Akaka, Ranking Member Voinovich, and members of the Subcommittee, I appreciate the opportunity to be here today to discuss what the Federal Reserve is doing to help Americans make informed financial decisions. This topic is particularly relevant in light of current economic conditions and the impact that depreciating home and stock values, tightened credit markets, and increased unemployment have had on consumers' finances. In my remarks today, I will discuss the Federal Reserve's continued commitment to financial education and consumer outreach. We believe that these approaches are necessary, but as our recent rules on mortgages and credit cards attest, we also believe they are not a substitute for strong consumer protection safeguards in an increasingly complex financial marketplace. I will also discuss the challenges and opportunities for policymakers, regulators, and educators in designing and delivering a well-rounded and effective program to help consumers evaluate their options and make good choices given the array of products and services available to them in the financial marketplace. The Federal Reserve has a long history of providing useful consumer information. We believe that a major line of defense in consumer protection is self-defense--in other words, a well-informed consumer. Educated consumers can serve as their own advocates and better protect themselves from unnecessarily expensive and abusive financial products, practices, and scams by asking good questions about products and practices, especially those that "seem too good to be true." Consumers look to the Federal Reserve for unbiased, research-based financial information--and we intend to keep it that way. Over the years, the Federal Reserve Board has worked with other federal regulatory agencies, many of whom are now our partners in the Financial Literacy and Education Commission (FLEC), on consumer information resources, both in print and on the Internet. Nonetheless, financial education is not a panacea. Providing consumer information is one of several necessary and complementary consumer protection strategies, which range from raising awareness, providing accurate information, and building capacity among educators and practitioners to developing effective consumer-tested disclosures and, when necessary, protecting consumers through regulations that ban or restrict unfair and abusive products. We believe that all of these approaches are essential for ensuring that consumers can successfully navigate an increasingly complex financial marketplace. Since I last testified on the subject of financial education, conditions in the credit market have changed significantly. Last year, I reported that the financial services industry was extremely diverse and complex and that new technologies, policies, and financial innovations had contributed to the development of a robust and highly competitive consumer financial marketplace. I discussed our financial education efforts in the context of a market where consumers had relatively easy access to credit, but where many credit products had complex terms and conditions and were being marketed aggressively. Much has changed in this last year, and the need for reliable financial information for consumers is even greater.
Aside from the damaging effects of foreclosure on individual homeowners who lose their homes, home values have declined, on average, about 17 percent during 2007 and 2008. As a result, American families have experienced a substantial loss of wealth and financial security. The Federal Reserve's 2007 Survey of Consumer Finances reported that households lost all the gains in wealth they made between 2004 and 2007. In fact, relative to values in the 2004 survey, adjusted median net worth was 3.2 percent lower in 2008. In other words, consumers are financially less well off now than they were four to five years ago. Even financially savvy consumers are challenged in this difficult economic environment. Consumers with lower levels of financial capability--whether because of lower income, assets, or understanding--clearly need help to maintain and improve their finances. For financial educators, these circumstances represent a "teachable moment." For the Federal Reserve, this is an opportunity to reach consumers with important messages regarding the financial decisions they face, and we have significantly expanded our outreach efforts in response to these economic conditions. Given the current circumstances, consumers require reliable financial information, clear and meaningful disclosures, and regulations to protect them from potential financial harm. One of the most important roles the Federal Reserve plays is to make consumers aware of emerging issues and trends in the financial marketplace and to help them understand how those trends will affect them personally. An example of this type of financial education is our contract with a distributor of brief consumer news stories, in print and radio format, to daily and weekly media subscribers. We have used this approach for several years and have found it to be an effective means of directing consumers to our website (www.federalreserve.gov) for more information and resources. For example, a recent article on tips for protecting homeowners from foreclosure appeared in 398 newspapers in 26 states. Another article on refinancing mortgages appeared in 444 newspapers in 22 states. Audience penetration for these articles is estimated at 44.6 million and 45.4 million, respectively. The Board also has a history of identifying strategic partnerships to enhance our consumer outreach. For example, we are working to expand consumer awareness of foreclosure scams through a partnership with NeighborWorks America and the Conference of State Bank Supervisors (CSBS). The Federal Reserve Board and the Federal Reserve Banks also continue to partner with the "America Saves" program, the American Savings Education Council, Operation Hope, the "Bank On" program, and the Jump$tart Coalition for Personal Financial Literacy to promote financial education and asset-building strategies. In addition to the consumer information that Congress has mandated the Federal Reserve to provide to consumers, such as the Consumer Handbook on Adjustable Rate Mortgages and What You Should Know about Home Equity Lines of Credit, the Board has also developed calculators to help consumers explore mortgage choices and mortgage refinancing. Two weeks ago, we launched English and Spanish versions of our credit card repayment calculator, which allows consumers to estimate how long it will take to pay off their credit card bills if they only make minimum payments.
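The arithmetic behind such a repayment calculator is straightforward to sketch. The following illustration is ours, not the Board's actual tool; the 2 percent minimum-payment rate and the $15 payment floor are assumptions chosen for the example.

```python
def months_to_payoff(balance, apr, min_rate=0.02, min_floor=15.0):
    """Months needed to clear a balance making only minimum payments.

    Assumes monthly interest of apr/12 and a minimum payment of
    max(min_rate * balance, min_floor); both parameters are illustrative.
    """
    months = 0
    while balance > 0:
        interest = balance * apr / 12
        payment = min(balance + interest, max(min_rate * balance, min_floor))
        if payment <= interest:
            raise ValueError("minimum payment never retires the balance")
        balance = balance + interest - payment
        months += 1
    return months

# A $3,000 balance at 18% APR, paying only the minimum each month:
print(months_to_payoff(3000.0, apr=0.18))
```

Under these assumed terms, the balance takes roughly three decades to retire, which is precisely the kind of fact such a calculator is designed to make vivid.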
Consumers can also estimate the monthly payments needed to pay off a balance in a specific number of years or the amount of time it will take to pay off their balance if they pay a specific amount each month. The calculators are available via a toll-free number, (888) 445-4801, and on the Board's website. While we work diligently to enhance consumer awareness and provide useful financial tools and information, the Federal Reserve is aware that some consumers would benefit from a more structured approach to learning how to make sound and informed choices in the financial marketplace. And sometimes they need coaching, advice, or counseling to help them develop and implement a personal financial plan. The Federal Reserve is committed to empowering consumers and increasing their financial capability by building the capacity of financial educators in schools and community-based organizations. Across the Federal Reserve System, we host teacher-education workshops for kindergarten through grade 12 teachers. These efforts focus on activity-based constructivist learning approaches, such as computer games, in contrast to more traditional information-transfer education models. Our goals are to incorporate more experiential learning and to foster the development of critical thinking and problem-solving skills. We also host training workshops and conferences for community-based educators working with young adult and adult learners. These events provide updates on emerging issues and resources, as well as ideas for outreach via social media. While many of these are face-to-face sessions, we have also used webinars, online training, and other distance-learning strategies to reach audiences that may not be able to travel to conferences or meetings. Our support of education and capacity building goes beyond these train-the-trainer efforts. Many Reserve Bank staff members serve as key members of local Jump$tart coalitions that encourage states and localities to set standards of learning that include financial decisionmaking skills. Board staff also serves on the advisory council for NeighborWorks America's Center for Homeownership Education and Counseling (NCHEC), which has developed industry standards for quality homeownership education and counseling, including foreclosure mitigation counseling. NCHEC certifies nonprofit organizations as well as individual counselors. The Appendix to my statement provides a sampling of the numerous other Federal Reserve activities related to financial education and capacity building. Clear and well-organized disclosures can help consumers, including the best informed, to make good choices among financial products. The Federal Reserve Board has a number of statutory responsibilities with respect to writing rules for consumer disclosures. We take these responsibilities very seriously. In the past year or so, we have developed extensive new disclosures for a variety of financial products, most notably credit cards, and we are currently in the midst of a major overhaul of mortgage disclosures. To ensure that new disclosures are useful to consumers, we have increased our use of consumer testing. Exploring how consumers process information and come to understand--or sometimes misunderstand--important features of financial products has proven eye-opening. We have used what we learned from consumer testing to improve our required disclosures.
For example, our recently released rules on credit card disclosures require certain key terms to be included in a conspicuous table provided at account opening because our field testing indicated that consumers were often already familiar with and able to interpret such tables on applications and solicitations, but were unlikely to read densely written account agreements. We continue to use qualitative cognitive testing with individuals to help us develop clear disclosures, and quantitative validation testing to assure that the new disclosures represent an improvement over those currently existing in the marketplace. We are also learning from the field of behavioral economics as we continue to explore ways to provide disclosures that consumers will pay attention to, comprehend, and use in their decisionmaking.

As I indicated earlier, we believe that financially educated consumers are an important line of defense in well-functioning markets. At the same time, because of the complexity of certain products and terms, it may be difficult for consumers to weigh their costs and benefits or make informed choices. Some products posing a high degree of risk to consumers, especially those targeted at vulnerable populations, are often offered through aggressive or misleading marketing. Thus, there remains a need for effective regulation and enforcement that are responsive to market changes and that protect consumers from unscrupulous players.

Our consumer testing efforts have taught us that even the best disclosures do not offer the best protection to consumers in all cases. Our testing shows that some aspects of increasingly complex products simply cannot be fully understood or evaluated by consumers, no matter how well-educated the consumer or how clear the disclosure. In those cases, direct regulation, including the prohibition of certain practices, is necessary. An example from our recent rulemaking was the allocation of payments by credit card issuers. When creditors charge different interest rates for purchases, cash advances, and balance transfers, they can increase their revenues through their payment allocation policies. For example, a consumer might be charged 12 percent on purchases but 20 percent on cash advances. Under the old rules, if the consumer made a payment greater than the minimum required payment, most creditors would apply the payment to the purchase balance (the balance with the lower rate), thus extending the period during which the consumer would be paying the higher rate. Under these circumstances, the consumer is effectively prevented from paying off the cash advance balance unless the purchase balance is first paid in full.

Similar issues have arisen in the mortgage arena, where many of the poor underwriting practices in the subprime market had potentially unfair and deceptive features.15 For example, the failure to require escrow accounts for homeowners' insurance and property taxes in many cases led borrowers to underestimate the costs of homeownership. The Federal Reserve restricted this practice and others through new rules released in July 2008.

By using a variety of strategies to address the continuum of consumer needs--from making consumers aware of an issue, to providing reliable information and clear disclosures that allow a meaningful evaluation of financial choices, to prohibiting certain egregious products and practices--the Federal Reserve can empower consumers to make informed financial decisions.
In so doing, we aim to promote the economic well-being of consumers and their families.

In addition to the Federal Reserve's efforts to promote consumer education and protection, we have supported the Financial Literacy and Education Commission (FLEC) in meeting its mandates and implementing its national strategy. Since its inception in 2004, Board staff has served on the MyMoney.gov website working group and the national strategy working group. The Federal Reserve Board and the Reserve Banks have been engaged in many of the action items identified in the national strategy, including working with unbanked and underbanked audiences, improving access to financial services, developing a national financial capability research agenda, and encouraging global partnerships. We look forward to working with the new leadership at the Treasury Department and intend to continue to provide support for the national strategy.

Beyond work with FLEC, Federal Reserve Board staff has also been engaged with colleagues internationally. In particular, we have represented U.S. financial education efforts with the International Network for Financial Education sponsored by the Organization for Economic Cooperation and Development (OECD).16 Federal Reserve Board staff serves on a subcommittee to create evaluation criteria that will allow cross-cultural comparisons of the impacts of financial education programs. Since 2002, the Federal Reserve Board has met with other international financial regulators to share best practices with respect to financial consumer protection and education issues. In these international settings, we have learned that while we are on the forefront of many consumer education and protection efforts, there is much that we can learn from others.

In summary, we believe that a comprehensive approach best enables consumers to function effectively in the financial services marketplace. By enhancing consumer awareness, by providing reliable information to help consumers understand financial products and services, by requiring meaningful and consumer-tested disclosures, and by prohibiting unfair and deceptive financial products and practices, we believe we can both protect consumers and help them to make informed financial decisions.

The Federal Reserve Bank of Boston produced the video Lessons from a Storm, based on case studies from Hurricane Katrina. The video depicts the advantages of having a bank account for families trying to reestablish a firm financial footing. The Community Affairs group has also raised awareness in their region of successful "Bank On" efforts and initiatives targeted at immigrant families. Reserve Bank staff works with Harvard University faculty on a project during income tax season to apply behavioral economics principles to help low-to-moderate-income families save. Bank staff created two new interactive education programs, Consumer Savvy and Teens and Credit, for community and school groups; they also conduct teacher workshops. The Bank hosts the LifeSmarts competition for the state of Massachusetts and supports this competition in other New England states, including the successful launch of JV LifeSmarts for middle school students in cooperation with the Citizen Schools partnership. The Bank created and hosts the Reserve Cup Challenge, which brings together high school teams from across the New England states to compete. The Bank offers a 10-unit financial capacity apprenticeship, Let's Talk about College, and hosts Kids Invest! WOW as part of an after-school and extended-day program.
They collaborate with the Massachusetts Council for Economic Education in the provision of the Economics Challenge and the Financial Challenge for the New England states. The Bank created and hosts the New England Youth Financial Education Forum, bringing together financial education stakeholders from each of the six New England states. The advisory group includes representatives from the states' Departments of Education, Consumer Affairs, and Treasury offices as well as directors of the state affiliates of the Jump$tart Coalition and the Council for Economic Education, university professors, and other state officials. In collaboration with the Museum of African American History, the Bank has created the Black Entrepreneurs of the 18th and 19th Centuries exhibit, which is being hosted both at the Bank and at the Museum. The exhibit focuses on the role of entrepreneurs in the economy and highlights the lives, challenges, and contributions of over 60 black entrepreneurs in selected industries.

The New York Fed has been proactively involved in economic and financial capability programs for students and teachers, from elementary school through college, for many years. The elementary school program, It's All about Your $, is a part of the Jr. Fed Club that includes hands-on activities for students and a savings pledge. The High School Fed Challenge has more than 100 high school teams competing for the district championship; the winner competes in the national finals at the Board of Governors. Staff is developing the Fed Challenge Online, an e-learning site for high school students and teachers to learn about macroeconomics and monetary policy. Reserve Bank staff works with Sagrada Corazon University in Puerto Rico on the Economic and Financial Educational Alliance of Puerto Rico, providing teacher training for high school economics teachers and a local economics contest for high school students focusing on local economic issues. Staff provides two continuing education programs for high school teachers: In the Shoes of a Fed Policy Maker and Global Economic Forum. These three-day summer programs are designed to help teachers more effectively incorporate economics into their classrooms. The Bank also hosts a symposium for two- and four-year college professors; The Federal Reserve in the 21st Century features a day on Fed Basics and another on current topics presented by Federal Reserve Bank of New York research economists. This year, the second day focused entirely on the recent financial market crisis and the Fed's response.

To facilitate understanding and monitoring of consumer credit issues, the Bank's public website provides timely, detailed geographic information on delinquencies of bank credit cards and mortgages through a set of dynamic maps. Staff continues to develop features for the site, including adding data on auto loans and more detailed information on all mortgages at the state, county, and zip code level. Staff is creating a simple communications tool to aid low-income potential homebuyers. To retain homebuyers as homeowners, staff has created a series of flyers, customized by region, to enable borrowers to find free and reliable foreclosure prevention resources, available on the website. In addition, staff created a "spot a scam" checklist to supplement the Foreclosure Prevention Flyers.
To build and expand educational capacity in the region, staff works with key stakeholders, including the Community Bankers, the HopeNow Alliance, other federal and state banking regulatory agencies, and a range of community leaders and helping professionals across the region. In addition to conferences and training sessions, staff participated in borrower fairs and events that offered assistance to distressed homeowners looking for solutions to their mortgage-related troubles. Last year, home borrower fairs in Brooklyn, New York; Newark, New Jersey; and Westbury, Long Island helped about 700 home borrowers. This year, the HopeNow Home Preservation Forum, held in Newark with the New Jersey Department of Banking and Insurance, attracted 904 families to the half-day event, compared with 193 families in 2008. To develop capacity in the legal community, the Reserve Bank's Legal group formed the Lawyers' Foreclosure Intervention Network (LFIN), a pro bono pilot program cosponsored by the City Bar Justice Center. LFIN marshals the resources of New York City's legal community to assist New Yorkers facing the prospect of foreclosure.

The Federal Reserve Bank of Philadelphia is conducting a long-term experimental-design study of the effectiveness of pre-purchase homeownership counseling on consumer credit behavior and homeownership outcomes with the assistance of the Consumer Credit Counseling Service of Delaware Valley (CCCSDV) and Abt Associates, Inc. A major emphasis is on the financial behavior of participants after they become homeowners. Many homebuyers experience the greatest frequency of problems in the third or fourth year of their mortgage loan, including falling prey to unscrupulous lenders when refinancing, and mortgage default. The study recently completed the process of recruiting program participants and is completing the first-year follow-up interviews.

The Community Affairs department holds three meetings annually and participates on the steering committee for the Financial Education Network of Southeastern Pennsylvania. This group provides training and professional development to nonprofits, housing counselors, community banking lenders, credit unions, and others who work to enhance the financial capacity of the low- and moderate-income population and others. Recent topics have included state, local, and national foreclosure prevention programs such as FHA Secure, Hope for Homeowners, Philadelphia's Residential Mortgage Foreclosure Diversion Program, and the HopeNow Alliance. The Reserve Bank has been active in cautioning consumers about foreclosure avoidance scams. Bank staff also participated in the Consumer Information Fair sponsored by the Pennsylvania Office of the Attorney General during National Consumer Protection Week.

The Federal Reserve Bank of Philadelphia continues to train and provide materials to high school teachers to teach the Keys to Financial Success personal finance course. About 70 schools offer the course to between 2,000 and 3,000 students per semester. An additional 15 to 20 teachers will be trained this summer. In March, Bank staff trained 34 elementary school teachers in the Money Matters for Kids program, a personal finance curriculum. This program incorporates active- and collaborative-learning teaching methods in response to economic education research showing that, to be most effective, personal finance needs to be taught from kindergarten through grade 12, as in other disciplines such as reading and mathematics.
This program will be offered as a one-day program along with the Personal Finance for the Middle School Classroom course aimed at teachers in grades 6 to 8. Staff also hosted a delegation from the Russian Federation on a study tour to learn more about how personal financial education is implemented in the United States.

Reserve Bank staff members have been active supporters of the America Saves program since the launch of the pilot program in Cleveland almost ten years ago. Since then, they have conducted surveys of practitioners, convened regional consortia, hosted conferences, and currently advise the Northeast Ohio Consortia for Financial Success. Bank staff provided advice to help launch Pittsburgh Saves in Southwestern PA as a program of the Southwestern Pennsylvania Financial Education Consortium. Staff also advises the Northeast Ohio Coalition for Financial Success, an organization comprising local government, universities, banks, and financial education providers to promote greater awareness of and access to existing resources. The group's tagline is "Build your financial knowledge one step at a time."

With the goal of improving financial stability for individuals and families in Greater Cincinnati, the Cincinnati Branch of the Federal Reserve Bank of Cleveland is working in partnership with the mayors of Cincinnati, Covington, and Newport to launch Bank On Greater Cincinnati in order to connect Greater Cincinnati residents with mainstream financial services. The Federal Reserve Bank of Cleveland has collaborative relationships with the United Way of Greater Cincinnati, the Southwestern PA Financial Education Consortium, Treasury Retail Securities, NeighborWorks Western Pennsylvania, and the Internal Revenue Service (IRS) Volunteer Income Tax Assistance program.

In 2008, the Office of Community Development hosted a research seminar on financial education. Researchers and practitioners shared their perspectives and knowledge regarding financial literacy, impact measures of various programs, and the role of financial education as part of a broader set of policies aimed at enabling financial capability in low- and moderate-income communities. As a result of this seminar, the Bank commissioned a white paper on financial education programs and hosted a policy summit on the effectiveness of financial education efforts.

The Cleveland Reserve Bank recently launched a new exhibit in their Learning Center and Money Museum, "Power to the People: Regulation and Change." The exhibit introduces community audiences to the regulatory process and the consumer's role in regulatory reform. The exhibit is intended to encourage community dialogue to foster greater public understanding of the Federal Reserve. The exhibit and the program messages regarding consumer education, financial education, and regulatory reform are available to community groups and to other Reserve Banks for use in their outreach efforts. Other Learning Center programs include lessons on critical thinking that are aligned with Ohio Board of Education standards. In addition to traditional student audiences, the Bank offers tailored financial education programs for organizations including the Girl Scouts, the NAACP, and the library system. Great Minds Think: A Kid's Guide to Money is a self-directed financial education workbook, available in English and Spanish. It was developed in response to requests for resources to introduce financial education and critical thinking concepts to middle school audiences.
After less than two years, the Reserve Bank has received requests for more than 89,000 copies from other Reserve Banks, government agencies, financial institutions, education and community groups, and families. Now in its sixth year, the Cleveland Fed's writing contest--Money, Money, Money--encourages high school students to think critically and creatively about financial decisions. Ongoing feedback from educators informs the structure of the annual contest.

Federal Reserve Bank of Richmond staff has collaborated with the New Visions New Ventures Center for Asset Development and their individual development account program. They have also worked with the Jump$tart affiliates in Virginia to present sessions on credit and mortgage markets in Virginia. Ongoing activities include partnerships with and serving on the board for Jump$tart, Junior Achievement, state Councils on Economic Education, and local organizations that promote and support financial literacy. Outreach and education activities aimed at promoting financial capability include providing professional development and training opportunities for educators, developing and distributing curriculum and informational resources, building capacity for community-based educators and organizations, and raising awareness about the importance of economic and financial education through partnerships.

Staff hosted a conference series, "Widespread Impact of Mortgage Foreclosures: From Credit Markets to Local Communities," in conjunction with universities to provide information on mortgage foreclosures nationally and locally. Economists and analysts presented insight into drivers that contributed to the problem, the subsequent disruption of the mortgage market, and the effects on communities. This conference series reaches faculty, students, the general public, and local media. The Bank hosted a professional development webinar series for teachers. The program included topics on housing finance, structured finance, and financial regulations, followed by highlights of Federal Reserve resources for teaching personal finance and economics. With the Virginia and North Carolina Bankers' Associations, staff conducted four sessions of Back to School, a one-day workshop to prepare bankers to conduct classroom visits. The program is designed to provide bankers with instructional techniques, resources, and content information on relevant topics in the K-12 curriculum; 100 bankers conducted multiple classroom visits across Virginia and North Carolina.

The Reserve Bank hosted foreclosure prevention training for housing counselors in Virginia and assisted in building the capacity of the housing counseling network that serves North Carolina and South Carolina. Staff serves on the Maryland Fraud Prevention Task Force, helping to determine what information to disseminate to help consumers avoid foreclosure scams in Maryland and to identify the enforcement remedies available. The Baltimore Branch staff co-sponsored Maryland's Personal Finance Challenge in partnership with the Consumer Credit Counseling Service of Maryland, the Maryland Coalition of Financial Literacy, and the Council on Economic Education in Maryland. Charlotte Branch staff participated in Financial Literacy Day, held at the General Assembly in Raleigh, North Carolina, to kick off Financial Literacy Month.
The Federal Reserve Bank of Atlanta and the Federal Deposit Insurance Corporation's (FDIC) Atlanta Regional Office have partnered to develop a financial planning curriculum that serves as an enhancement to the FDIC's existing MoneySmart Financial Education curriculum. The topics include financial services and products that promote lifelong financial stability, such as the fundamentals of financial planning, saving for education, insurance planning, retirement planning, estate planning, income taxes, and investment planning. Staff also conducts train-the-trainer events using the MoneySmart curriculum. Staff from the Atlanta Reserve Bank serves on the taskforce for the Bank On Savannah initiative to increase financial access for unbanked individuals. Additionally, Reserve Bank staff participated in the HopeNow Foreclosure Workout event hosted in Atlanta, Georgia. The Atlanta Reserve Bank provided information on foreclosure prevention options, on foreclosure prevention taskforce partners and resources, and on how to recognize foreclosure rescue scams.

Staff supports very active Jump$tart Coalitions in Georgia, Louisiana, Florida, Alabama, and Tennessee. The Tennessee coalition, with leadership from the Nashville Branch staff, was recognized by National Jump$tart as the 2009 State Coalition of the Year. In Tennessee, largely because of the work of the state Jump$tart Coalition, financial education will be a mandatory class for high school students in order to graduate, beginning in 2010. The Nashville Branch has been designated as one of eight organizations certified to provide the required 14-hour training necessary for educators to teach the course. Staff in the New Orleans Branch provides an intensive summer teacher training program that incorporates personal financial literacy concepts into the school system's free enterprise class, which is required for graduation. The Atlanta Fed is closely involved with the Georgia Consortium for Personal Financial Literacy, which is the Georgia affiliate of Jump$tart. In Florida, the Coalition has received funding to provide mini-grants for teacher training and participates in the American Bankers Association's (ABA) Teach Children to Save Day.

The Community Affairs staff is active in the many regional asset-building programs throughout the District to promote asset building and preservation through free tax preparation, financial education, savings programs, and foreclosure mitigation. In Tennessee, asset-building programs are available through the free tax sites, including access to Department of Human Services benefits screening, access to free "second chance" savings or checking accounts through the SavingsPoint initiative, and access to free one-on-one financial coaching. Community Affairs is active throughout the District in foreclosure prevention and mitigation, including partnering with the HopeNow Alliance, working with local congressional staff to put together foreclosure mitigation events, and convening local and regional task forces for foreclosure mitigation and prevention. In Louisiana and Mississippi, Community Affairs participates in programs to educate and inform communities about rising mortgage defaults and high-risk markets and in efforts to combat foreclosure rescue scams. In Florida, a series of MoneySmart Train-the-Trainer workshops has been conducted for students at St. Thomas University School of Law.
As part of this program, third-year law students were also trained in foreclosure counseling and will do pro bono financial education and foreclosure counseling in the community. In South Florida, Community Affairs has been working very closely with the U.S. Southern Command Office of Family Support to provide mortgage financing information and foreclosure mitigation alternatives specifically for military personnel and federal employees. In this effort, partnerships have been developed with the Departments of Housing and Urban Development and Veterans Affairs to discuss alternatives and services available specifically to active military and veterans. Also, Community Affairs is working with the Mexican Institute for Mexicans Abroad to train representatives from the Mexican Consulate and other community-based organizations that serve the Mexican immigrant population.

Community Affairs is also working with the Bank On Cities program in Georgia, Tennessee, Louisiana, and Florida. In Georgia, Bank On Savannah has just launched its initiative. In Louisiana, Community Affairs is cooperating with government leadership to launch a Bank On initiative in Baton Rouge and Houma/Terrebonne Parish, modeled on Bank On San Francisco. In Tennessee, Community Affairs staff is working with the City of Nashville in the very early stages of exploring a Bank On campaign. In Florida, Community Affairs staff is working with the City of St. Petersburg to launch Bank On St. Petersburg. This city has been selected as one of eight cities in the nation to be awarded a technical assistance grant from the National League of Cities for its Bank On program. Additionally, the Jacksonville area is part of the Treasury's Community Financial Access Pilot Program, and Community Affairs has played a key role in convening the community around this initiative.

Additionally, Community Affairs has developed a Disaster Preparedness Center website to assist in financial preparedness and recovery. Small business financial education is also being developed. Branch staff continues to use the convening power of the Federal Reserve to reach out to diverse groups in order to develop partnerships and collaborations around the area of financial stability. Staff interacted with over 7,300 middle school and high school teachers through intensive workshop presentations focusing on economic education and financial literacy, and an additional 5,000 teachers through presentations at conferences.

As part of a major initiative to assess the effectiveness of economic education programs, the Atlanta Fed is partnering with the St. Louis Fed to establish standards for economic education and personal finance programs. These standards will cover a wide range of key financial knowledge targets for middle and high school students and will provide a basis for ensuring consistency and measuring the effectiveness of our economic and financial education programs. The Miami Branch education staff delivered a Building Wealth teacher workshop for 50 teachers, featuring the Dallas Fed's personal finance curriculum. They also delivered a "Fed Boot Camp for Academy of Finance (AOF) Teachers" workshop. This workshop sought to familiarize AOF teachers with the Fed's history, functions, and educational resources. Personal finance publications were provided as part of the educational resources featured. In Alabama, Community Affairs plays a lead role in the Alabama Asset Building Coalition.
Birmingham Branch staffers serve on the steering committee for the College Access Challenge Grant Program (CACGP), awarded to the Alabama Department of Education by the U.S. Department of Education. This grant program aims to significantly increase the percentage of Alabama's qualified underrepresented students who complete the student aid application, enroll in college, and receive a degree. The steering committee has incorporated a student loan awareness component that educates participants on the types of loans and the benefits or drawbacks of each.

Since its inception in 2002, the Federal Reserve Bank of Chicago's Money Smart Week has grown to include more than 20 cities and all five states within the Chicago Fed's district of Illinois, Indiana, Iowa, Michigan, and Wisconsin, as well as several cities outside of the district. Each year, hundreds of educational classes and activities are offered through these campaigns, and tens of thousands of consumers participate. Local financial institutions, nonprofits, government, schools, and libraries work together to promote and offer the many educational resources available to the community. During the designated week, participating organizations are asked to "do what they do already," whether that means their monthly home buying 101 class or an annual financial literacy fair. It is the Chicago Fed's belief that well-informed consumers are more likely to make better financial decisions, to the benefit of the consumer and possibly the economy overall. Examples of Money Smart Week events in the past year include activities at Northern Michigan University helping students learn more about avoiding debt, saving and investing, and the economic stimulus package. Part of the Money Smart Week activities in Detroit included the launch of Bank On Detroit in partnership with the AARP Foundation and their Michigan state affiliate office. Additionally, the Reserve Bank hosted their third annual financial literacy summit. The Reserve Bank hosted a conference in the Quad Cities area of Iowa that explored the foreclosure situation. Speakers focused on counseling and mitigation programs, and financial literacy and education initiatives.

Reserve Bank staff initiated a collaboration with the Securities Division of Missouri's Secretary of State, the Missouri Jump$tart Coalition, United Way, and the Southern Indiana Asset Building Coalition to explore having a Money Smart Week in the district. Many of these same partner agencies will also be collaborating on an asset-building and financial education statewide conference in the fall. Staff also met with the Missouri Community Betterment (MCB) Educational Fund, Inc. to explore working together on projects such as Coming Up with the Money, Growing Entrepreneurs from the Ground Up, Get Checking, Money Smart, and It's Your Paycheck. It's Your Paycheck is a new nine-lesson curriculum that involves students in learning about wages and taxes, credit cards, payday loans, rent-to-own contracts, and check-cashing schemes. Other curricular resources include Cards, Cars and Currency, a five-lesson curriculum that focuses on purchasing a car, how small purchases can add up to one big problem, and the costs and benefits of debit and credit cards. For the elementary schools (grades one through three), Reserve Bank staff produced Piggy Bank Primer: Saving and Budgeting. The Bank also provides basic economics and personal finance lessons, an eleven-lesson series for kindergarten through grade five.
Staff has trained more than 1,700 teachers, who in turn reach about 72,000 students. For adult learners, the publication Kids and Money helps parents teach their school-age children how to manage money. You've Earned It explains how the Earned Income Tax Credit (EITC) works and how to find out if a family qualifies. Staff across the District (St. Louis, Memphis, Little Rock, and Louisville) provides advisory services to local individual development account (IDA) coalitions and collaboratives to link EITC benefits to IDAs. Learn Before You Leap promotes homebuyer counseling organizations within the district to encourage potential homebuyers to learn about the process and pitfalls before signing on the bottom line.

Across the district, the Reserve Bank hosted the kick-off for Exploring Innovation in Community Development Week in Louisville and sponsored United Housing's 7th Annual Housing Fair in Memphis. This fair was unique because it targeted not only the general market but also Memphis' growing immigrant community. The event showcased local lenders, real estate professionals, and other housing-related service providers and gave participants the opportunity to explore affordable lending products available in Memphis and Shelby County. The event's objective was to provide easy access to educational information and materials for homeownership to first-time homebuyers and existing homeowners. Staff across the district is also involved in foreclosure prevention, mitigation, and neighborhood stabilization efforts.

Reserve Bank staff is involved in the Community Financial Access Pilot (CFAP) program in St. Louis and the Mississippi Delta, two of the eight communities involved in the U.S. Department of the Treasury, Office of Financial Education's CFAP initiative. CFAP is designed to increase access to financial services and financial education for low- and moderate-income families and individuals. In addition, staff at the Louisville Branch provides technical assistance and advisory services to the Bank On Evansville, Indiana initiative regarding product development and regulatory issues. The branch hosted the first meeting to introduce the Bank On initiative in Louisville.

Staff at the Federal Reserve Bank of Minneapolis continues to provide leadership to the Minnesota, Montana, and North Dakota Jump$tart Coalitions. They have worked with the Montana Financial Education Coalition, a Jump$tart affiliate, to conduct a series of foreclosure workshops throughout the state. These workshops included presentations on the benefits of developing and adhering to a budget, including stories of how families with and without budget discipline are experiencing the current housing situation. In North Dakota, Reserve Bank staff presented at the North Dakota Jump$tart's annual conference and continues to provide technical assistance to the organization. The partnership with and support of the Minnesota Council on Economic Education's Personal Finance Decathlon continues. The Decathlon challenges students to demonstrate their knowledge of personal finance and sound money management. Students in grades 7 through 12 compete in teams by taking an online test that covers 10 areas of personal finance. This preliminary round is followed by a face-to-face competition for the finalists. The Bank recently contracted with an instructor to develop teaching resources for high school economics and personal finance utilizing articles that have been published in Reserve Bank publications.
The articles that are relevant to EconomicsAmerica's 20 national standards are accompanied by a class supplement. Additional information is available on the Bank's Community and Education webpage. Financial education resources for teachers are also available; an example is Our Money Curriculum Unit, which provides the history of money and a teacher's guide.

The Bank supports the development of original research and data tools that can be used by financial educators in program development, implementation, and evaluation. In the area of Financial Education in the Workplace, the Bank has published interim findings from a longitudinal study of the efficacy of financial education services provided within the workplace setting. A final evaluation and review is under way using data gathered from a broader set of employers. As part of the Oklahoma Asset Building Coalition, the Oklahoma City Branch is supporting the development of the Oklahoma Self-Sufficiency Standard. This tool provides county-by-county data on the income needed by different family compositions to be financially self-sufficient, which can be used to educate and counsel students and clients facing financial and career decisions.

The Reserve Bank regularly hosts conferences and trainings for educators and key stakeholders. The Denver office co-hosted the Third Annual Lt. Governor's Summit on Financial Education in Albuquerque, New Mexico. Over 400 people attended 17 diverse breakout sessions on various financial topics. The Bank is co-sponsoring the Financial Education Instructor Training for Native Communities, a comprehensive financial education instructor training for Native American communities, with the Oweesta Corporation on May 5-7, 2009, in Santa Ana Pueblo, New Mexico. This instructor training and certification program will help Native American organizations establish and sustain financial education programs in their communities. The Bank also hosts an annual statewide financial education conference for practitioners in Oklahoma in partnership with the Oklahoma Jump$tart Coalition.

The Bank supports the formation and development of financial education coalitions throughout the District. In addition to providing organizational development and logistical support to coalitions, the Bank has replicated Money Smart Weeks in Kansas City, Colorado, Oklahoma, and Nebraska. Developing resources for educators and students continues to be a focus for the District as well. Many new curriculum resources focus on elementary-level educators and students, with the goal of filling a gap in available resources for this audience and reaching students with financial and economic concepts at a younger age. Bank staff developed Fifty Nifty Econ Concept Cards that can be used in a variety of ways in the elementary/middle school classroom, as well as role plays and games that reinforce personal finance concepts in a fun and interesting way. In addition, each of the four district offices has a traveling educational trunk at its disposal to share with classrooms across their zones, making Federal Reserve resources and education more accessible in the farther reaches of the District. Staff also developed resources for high school age students to meet local needs, including teaching tips that tie to research published by economists in the District. All of these resources are shared by staff members with local educators and students through workshops, seminars, and conferences.
The Bank is actively involved in the development of local Bank On campaigns in Omaha and Denver. The Omaha office hosted a meeting of partners that resulted in the announcement of the Bank On the Metro campaign and the creation of a steering group.

The Dallas Fed's Community Affairs personal financial initiatives center on the Bank's publication Building Wealth: A Beginner's Guide to Securing Your Financial Future, the Bank's most popular publication and the most frequently downloaded page from the Bank's website. Staff provided training at the University of Texas at El Paso for students and adult members of Las Comadres Para Las Americas, a social network of Latina professionals; the New Mexico Lt. Governor's Financial Education Summit in Albuquerque; the Houston Urban League's Young Professionals; the U.S. Department of Housing and Urban Development's Neighborhood Networks regional workshop; and the Texas Department of Banking in Houston. Staff recently hosted the official kick-off for Bank On Houston, a collaborative effort to bring the city's unbanked individuals into the financial mainstream. Staff at the Houston Branch participated on a panel for the Children's Defense Fund's Financial Literacy Workshop. Staff highlighted Building Wealth and made the CD version available for the 120 attendees. Reserve Bank staff is involved in foreclosure prevention activities throughout the District, providing information and resources to consumers, industry professionals, and nonprofit housing counseling staffs that are assisting consumers in mitigating foreclosure, in particular in the Dallas-Ft. Worth, Houston, and San Antonio areas.

Reserve Bank staff coordinated employee events for National Consumer Protection Week. The purpose of these events was to raise awareness among Dallas Fed employees about financial education, identity theft, and fraud protection. Staff contacted the Texas State Securities Board for a speaker for a Lunch and Learn event that was held in Dallas, video-conferenced to the branches, and attended by 150 employees across the District. Staff obtained publications from the Federal Trade Commission for distribution at the event, set up a meeting room with computer workstations and printers, and organized volunteers to assist employees in ordering their free annual credit reports.

Reserve Bank staff also provided training and professional development opportunities for teachers, including state and national professional development conferences for educators and Advanced Placement Summer Institutes for teachers, as well as student programs targeting diverse groups from at-risk students to those from suburban school districts. In response to teachers' feedback, the economic education staff developed Building Wealth in the Classroom, a collection of lesson plans specifically designed for adolescents. These lessons correlate with national and state personal finance standards. Economic education staff worked successfully with the Texas Education Association to have both Building Wealth: A Beginner's Guide to Securing Your Financial Future and Building Wealth in the Classroom approved as recommended personal finance publications that can be used to satisfy the Texas mandate requiring personal finance instruction in the high school economics class. In addition to the Building Wealth initiatives, the El Paso Branch offers a program, Let's Talk About College: A Financial Perspective.
The program was originally developed by the Boston Fed and Citizen Schools to help urban middle school children learn how to plan financially for college and, in so doing, develop personal finance skills. The curriculum has been customized for the El Paso Branch. El Paso economic education staff is offering a series of train-the-trainer workshops for teachers this summer, which will allow the program to be implemented across the area school district in the fall of 2009.

The Federal Reserve Bank of San Francisco provided leadership for the first of what will become an annual week-long financial education campaign. Over 125 financial literacy events were held by over 20 different organizations throughout the metro area. Staff convened a group of key stakeholders to initiate discussions for establishing a Hawaii Jump$tart coalition. The 20 participants included representatives of local organizations involved in financial education and financial institutions. Staff also worked with college-bound juniors and seniors in East L.A. on the subject of personal financial literacy and the benefits of having banking relationships. The session was a function of the Youth Committee in the Alliance for Economic Inclusion.

Staff at the Los Angeles Branch led the "Four First Fridays" quarterly convening of the Los Angeles Asset Building Coalition. Guest speakers from an adult multi-language financial literacy program and financial institutions presented on what literacy is, how it is tracked, and examples of best practices. The discussion was part of a continuum of training for nonprofits in advance of the Bank On LA program roll-out in the spring of 2009. Reserve Bank staff also participated in the forum "Immigrants in Our Midst: Cultural Understanding in Diagnostic and Immigration Issues," discussing challenges faced by immigrants within the financial services system and their need for financial literacy resources.
https://www.federalreserve.gov/newsevents/testimony/braunstein20090429a.htm
Dear Ms. Prichard,

I am writing to you as a concerned citizen of California about neonicotinoid pesticides. I earned a BA in biology from UC Riverside and an MA in biology from CSU Fresno in the 1960s and '70s. I was a biological research tech for about fifteen years; then I switched gears and graduated from the University of Oregon with a second bachelor's degree in landscape architecture. I have been a CA registered landscape architect in private practice since 1989.

I am writing to ask you to take action to bring to a close the studies, which have been dragging on for five years, of California's use of neonicotinoid pesticides and their impact on honeybees. Please require very speedy publication of their findings. If the study is unbiased and comprehensive, I expect the Department of Pesticide Regulation to conclude that these pesticides should be restricted greatly or banned outright. The following explains why I think that should be the outcome.

I have been following the honeybee colony collapse syndrome issue since it surfaced around 2007. Most recently, studies published in 2014 have found conclusive correlations between bird population declines and the buildup of neonicotinoid pesticides in watersheds. The relationship arises because the pesticides persist long enough to kill many species of insects. The destruction of insects harms all species of birds (even fruit- and seed-eating birds) because terrestrial bird nestlings require the protein found in insects (or other sources) to reach maturity.

I know that other factors are involved in honeybee colony mortality: poor feeding of hives over winter, as well as starvation in the very agricultural fields where the bees are brought to pollinate crops, when there are too many bees for the crop to feed and there are no supplemental flowering plants in fallow land that they could feed on, due to excessive use of glyphosate and other herbicides. I know about a couple of diseases that are almost epidemic in some areas, and parasites, especially the Varroa mite.

Neonicotinoid manufacturers misdirect our attention to Australia, where the Varroa mite is not found, where neonicotinoids are in use, and where colony collapse disorder has not been found. However, I have not seen this argument made in a serious scientific paper. I wonder: are Australian bees treated the same as US bees (transported long distances to pollinate thousands of acres of a single species of plant, with no other sources of nectar or pollen, for instance)? Are hedgerows or wildlands left for supplemental pollen sources to carry hives over when crops aren't flowering sufficiently? Have neonicotinoids been used as long there as in Europe and the US, with the subsequent buildup in the environment? Are Australian crops also treated with glyphosate as extensively as American crops? Is it possible that some plants in Australia give honeybees more resistance to stress, so that they can recover from nerve damage caused by neonics? What is the level of homeowner use of neonics? In short, is Australia's apiculture and neonicotinoid exposure really comparable to the US and Europe, to the extent that the Varroa mite is truly the only factor in their better survival?

Leaving those questions aside, I think evidence is available to conclude that neonicotinoids are a key factor in bee declines in the US, and in California in particular.

1. Research has proven that neonicotinoid pesticides are killing large numbers of many other species of insects, including bumblebees, insects that are NOT infested by Varroa mites.
2. Studies found that colonies of honeybees with Varroa mite infestations had much greater mortality if the bees also tested positive for neonicotinoid exposure, at levels far below the agency's listed threshold for acute toxicity.

3. Please also consider this. I may not remember this correctly, but if I do, it is significant. In 2007, when I initially started to study this, I read the 1980s-era lab studies conducted for Bayer's product (the original neonicotinoid) on bee mortality. In every research cycle for acute exposure, they reduced the exposure level being tested, and at each lower level bees died: not 50 percent, as for the LD50, but enough to count. These were lab studies, not conditions where bees are working as part of a hive with all of those stresses. I recall that the studies did not reach a point where mortality didn't occur. Researchers never found a level at which bees were not harmed. They just stopped studying it, and the product was approved and in use by around 1990.

I recommended Bayer's product imidacloprid in the 1990s, when the lerp psyllid was swarming and harming Syzygium (= Eugenia) hedges in San Diego. I recommended it in part because the label said it was not harmful to bees, versus Sevin and some other potent insecticides that were known bee killers. Now I am sad and angry because I am pretty damn sure that I was responsible for killing a lot of bees and wild birds, as well as other insects that are critical to the functioning of the entire California natural ecosystem and to pollinating our crops.

I strongly urge you to complete the long-delayed study of neonicotinoids being conducted by your Department. In the process, I hope the Department will conclude that it is time to severely reduce their use, so that, by reducing their harm, honeybees and other insect pollinators stand a better chance of surviving and performing all their ecosystem services while we devise a way to protect them from Varroa mites.

The corporations that sell these pesticides have made a lot of money, so they will undoubtedly protest, using all their wealth as a weapon against this. Farmers whose crops have been protected by use of neonicotinoids may have lower yields, and that may translate into higher prices, so consumers may be affected. Big tobacco corporations, which have provided the feedstock for neonicotinoids, will lose money. But just as people finally reduced tobacco consumption in cigarettes because tobacco was killing people, we need to stop killing Earth's other creatures – and hurting ourselves in the process – even if we have to pay some more to make it happen. Maybe there is a silver lining: abandoned tobacco cropland could make great restoration sites for the wide variety of plants that would help restore insects and birds.

I hope for a reply from your office.

Sincerely,
https://grownatives.cnps.org/2014/07/24/kay-stewarts-appeal/
In June 2020, we hosted a webinar, "How I Design Bridge: Load Rating of Steel U-Through Bridge," by Abdullah Zaid, Senior Engineer at SMEC.

Many bridges have been in service for 50 to 100 years or more, and these structures now carry heavier loads than they once did as development has increased traffic. There have been cases of bridge collapse due to overloading, so structures need to be managed efficiently from an asset management perspective. Load rating is therefore necessary.

In this session, Abdullah Zaid from SMEC shared a presentation on the load rating of a steel U-through bridge. He walked through load rating and its application based on his previous project experience. We highly recommend this session to all bridge designers who want to understand levels of bridge investigation, load rating, and fatigue assessment around the world.

Key Points

1. Assessment levels for bridges
Bridge maintenance is an important element of the bridge life cycle, and there are various levels of bridge assessment. Dr. Abdullah discussed the types and levels of bridge assessments based on Australian practice.

2. Loads and load rating
New bridges are generally designed for conservative idealized vehicle arrangements such as SM1600 and 300LA for road and rail bridges, respectively. The load rating of existing structures might include the idealized vehicles, but more importantly it should include the actual types of loads the bridge carries. The speaker shared a steel U-girder bridge assessment project that applied the actual loads for the load rating.

3. Fatigue assessment
Structural elements are generally designed for the ultimate limit state (ULS). However, when a load is repeated thousands or millions of times, fatigue failure might control the design despite sufficient ULS capacity. The speaker discussed the importance of fatigue assessment and demonstrated an example from practice.

Assessment levels for bridges

The flowchart for bridge assessment and load rating is provided in each national design code, but assessment generally consists of three levels:
- Level 1: Checking the general serviceability of the structure and identifying any emerging problems.
- Level 2: Comprehensive visual inspection, to rate the condition of the bridge components.
- Level 3: Detailed engineering investigation, including field investigation and structural analysis. Load rating is included in this level, as is fatigue assessment.

Load Rating

Load rating is undertaken in several steps, shown below. (A sketch of the rating-factor arithmetic follows this list.)

1. As-new load rating. This checks the structure using the design load and the as-built drawings, based on the design condition.
Fig. Design Condition of Example bridge
2. Structural mapping and condition assessment. This checks the structure in its existing condition. Checklists cover section state, loads, cracks, new members, and so on.
Fig. Structural Mapping of Example bridge
3. As-is load rating. This checks the structure considering the existing condition.
Fig. As-is Load Rating of Example bridge
4. Fatigue assessment. This checks the fatigue life of each bridge component as well as the residual life of the bridge.
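The step from member capacity and load effects to a pass/fail rating is often summarized by a rating factor. The sketch below shows a generic form used in many codes; it is not the speaker's method, and the load factors, dynamic load allowance, and member forces are placeholder assumptions rather than AS 5100 (or any code's) values.

```python
# Generic load-rating sketch (placeholder factors, not code values).
# RF = (capacity - factored dead load effect) /
#      (factored live load effect incl. dynamic allowance).
# RF >= 1.0 means the member can carry the rating vehicle.

def rating_factor(capacity, dead_effect, live_effect,
                  gamma_d=1.2, gamma_l=1.8, dla=0.3):
    """Rating factor for one member and one load effect (e.g., midspan moment)."""
    return (capacity - gamma_d * dead_effect) / (gamma_l * live_effect * (1.0 + dla))

# Example: hypothetical girder midspan moments in kN*m.
rf = rating_factor(capacity=5200.0, dead_effect=1800.0, live_effect=1100.0)
print(f"RF = {rf:.2f}")  # > 1.0 passes for this vehicle; < 1.0 means restrict or strengthen
```

Loosely speaking, computing RF with as-new section properties corresponds to step 1 above, while repeating the calculation with the reduced capacity found in the condition assessment gives the as-is rating of step 3.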
Fatigue Assessment

A given loading may be repeated many times, provided that the stresses remain in the elastic range. Such a conclusion is correct for loadings repeated only a limited number of times, but it is not correct when loadings are repeated thousands or millions of times. In such cases, rupture will occur at a stress much lower than the static breaking strength. This phenomenon is known as fatigue. Repetitive loading cycles and/or over-stressing of steel members can eventually lead to fatigue cracking and potentially to brittle failure.

Fig. Fatigue Strength Curves for Direct Stress Ranges

Moving load analysis in midas Civil uses the influence line method, and the results show the force envelope. This is useful for design, as the maximum forces govern. For fatigue assessment, however, the calculation requires not only the maximum forces but also the history of stresses at the element resulting from the moving load. The fatigue analysis looks at the stress fluctuation due to the moving load at a specific point; therefore, a time history analysis is undertaken. (A minimal damage-summation sketch follows.)
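To illustrate how a stress history turns into a fatigue life estimate, here is a minimal sketch using a single-slope S-N curve and Miner's rule. The detail constant, slope, and traffic spectrum are placeholder assumptions, not values from the webinar or any code; in practice the stress ranges and cycle counts would come from cycle counting (e.g., rainflow) of the time history results.

```python
# Minimal fatigue-damage sketch, assuming a single-slope S-N curve
# N = C / S^m (m = 3 is typical for welded steel details; C here is a
# placeholder detail constant, not a code value).

def miner_damage(stress_ranges_and_counts, C=2.0e12, m=3.0):
    """Miner's rule: damage D = sum(n_i / N_i); D >= 1.0 implies fatigue failure."""
    damage = 0.0
    for s_range, n_cycles in stress_ranges_and_counts:
        n_allow = C / s_range ** m  # cycles to failure at this stress range (MPa)
        damage += n_cycles / n_allow
    return damage

# Hypothetical annual traffic spectrum: (stress range in MPa, cycles per year).
spectrum = [(80.0, 50_000), (50.0, 400_000), (30.0, 2_000_000)]
d_per_year = miner_damage(spectrum)
print(f"Damage per year = {d_per_year:.4f}; remaining life ~ {1.0 / d_per_year:.0f} years")
```

The point of the time history analysis is exactly to supply the spectrum in this sketch: the envelope alone gives one maximum value, while the history gives every fluctuation that contributes damage.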
https://www.midasbridge.com/en/blog/casestudy/load-rating-of-steel-u-through-bridge
Fogg was a gentle giant. He lived near a village. He was always polite to the villagers, and he did his very best not to upset them. But being huge, it was not easy.

Take Fogg's feet, for instance. Even if he tiptoed past their homes, he still made the ground tremble. The startled folk fell out of bed, and kitchen crockery rattled and bounced about. The villagers complained to Fogg. They dared to because he was so nice and kind. He even said sorry and promised to creep around more carefully than before.

Then there was his sneezing. When he sneezed, he sent such a blast of air howling across the valley that the villagers had to rush indoors for fear of being blown away! They complained to Fogg about that as well. The giant promised he would sneeze into his hanky. But sometimes a sneeze came upon him all of a sudden, before he could do anything about it.

One complaint followed another. Eventually, the villagers decided life would be much more comfortable without a giant living on their doorstep. So they sent for Spellbound, the wizard, and asked him if he could shrink Fogg down to a normal human size. Fogg agreed to the plan at once, proving what a big-hearted giant he was! All at once, a silvery mist appeared, hiding Fogg from sight. When it cleared, the delighted villagers saw that Fogg was just the same size as them! Everyone started living peacefully. Fogg moved in with a kind family who looked after him very well, and he began to enjoy life at his new size.

Life is full of surprises. The surprise that arrived for the villagers was another giant, Tyson, the walking, talking type, just like Fogg used to be! Tyson was short-tempered and always wanted to get his own way. The villagers did not know this at first. But they soon found out. When Tyson lay down for a snooze in a lush green meadow, the sheep scattered in fear. The villagers complained, just as they had to Fogg. "Clear off!" he bellowed. "I like this valley and I'm here to stay!"

From that day on, Tyson stomped about wherever he pleased, flattening crops and knocking down trees. When he lay down for a rest, he always slipped off his boots and used them as a pillow. Tyson had horrible smelly feet, and the rotten pong wafted through the valley, sending everyone indoors, rushing to shut their doors and windows. And when he slept, he snored louder than thunder. The villagers huddled in their homes, holding their heads and wishing Tyson would go away. But he would not.

It was not long before they began to wish something else. "If only Fogg was still big, he'd soon see off Tyson!" sighed one. "It's our own silly fault," agreed a second. "We shouldn't have been so selfish," said a third. "Fogg was such a thoughtful, kind and good-tempered giant. He never did anyone any harm!" "We've learned our lesson. Let's get Fogg the giant back!" they all said together.

So Spellbound was sent for again. "Hmm!" he said, stroking his long, grey beard. "I'll have to look up a growing spell. It might take quite a while to get it right!" "We can't wait," replied Fogg. "Tyson is causing too much trouble. I've an idea! Listen carefully…"

As Tyson lay snoring in the shade, he felt something tickle his nose. It did not stop until, snuffling and snorting, he opened his eyes. It was Fogg. "I tickled you!" he called cheekily. "You can't catch me!" Fogg made a funny face and ran off. In a split second, a huge hand snatched at him. Fogg jumped onto a horse he had left nearby and galloped towards the mountains, while a furious Tyson reached for his boots.
But some of the villagers had tied the laces together, and by the time Tyson had unknotted them, Fogg had reached the mouth of an enormous cave. He did not try to hide, but waited until Tyson had seen him before disappearing inside, with the furious giant in hot pursuit! Fogg had found the cave long ago, during his days as a giant. He knew another way out that you had to be small enough to squeeze through, and now, of course, he was! Fogg scrambled out into the fresh air. As he rode clear, the villagers pushed against a rock high above the cave mouth, heaving and shoving until they sent it rolling down the mountainside, loosening others as it went, until an avalanche fell across the front of the cave. Tyson had no time to escape. "He's trapped inside!" cried the villagers. But not for long! Now the mountain trembled as Tyson raged and cursed and began to dig himself out. He worked all day and night. So did Spellbound, until at last his spell was ready. He whirled his wand and muttered strange magic words. A dazzling arc of stars appeared around Fogg, who began to grow and grow, just as Tyson came bursting from the cave. He stared at Fogg in astonishment. "Go and find your own valley! This one is mine!" roared Fogg, raising his voice for the first time in his life. None of the villagers minded one bit. They were only too pleased to see good old Fogg back to normal. Tyson took off nervously across the mountains without looking back. "We promise never to complain again, Fogg," the thankful villagers told him. "We know we made a big mistake before!" "More like a giant one!" someone joked, and everyone laughed, though Fogg took care not to laugh too loudly!
http://bukisa.com/articles/749356_good-giant-versus-bad-giant
ECO ART EDUCATION: SUSTAINING OUR COMMUNITY

By REBECCA GILMARTIN

A PROJECT IN LIEU OF THESIS PRESENTED TO THE COLLEGE OF FINE ARTS OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS

UNIVERSITY OF FLORIDA
April 2014

© 2014 Rebecca Gilmartin

Acknowledgements

I am grateful to the many people involved throughout my effort during this project. I would like to give a special thanks to my committee chair, Dr. Michelle Tillander. Her resoluteness challenged me to deliver a thorough investigation during my research. I would also like to thank my committee member, Dr. Craig Roland, for his realistic viewpoints and dedication to advances in art education. He has been an inspiring educator to follow. I would like to thank Lisa Iglesias for her creative inspirations, propelling me forward in the project preparations and website. Likewise, my ceramics professors Anna Holcombe and Charlie Cummings were essential in fostering the development of meaningful expression in my artwork. I would like to give a special thanks to Bonnie Bernau, education curator for the Samuel P. Harn Museum. She was a major contributor to the organization and implementation of the research project "Personal Adornment". I would like to thank my classmate Carrie Grunnet for her outstanding photography skills and encouragement throughout the research process. I would like to thank Mike Myers, founder of the Repurpose Project, a local center promoting sustainable practices in the community. In addition, I am grateful for the enthusiasm and detailed input from the teachers that I interviewed for this study. Lastly, I would like to thank my family for my life experiences and the inspiration to create a better world for future generations.

Abstract

Chair: Michelle Tillander. Committee Member: Craig Roland. Major: Art Education.

This research project was based on a desire to examine connections between environmental literacy, sustainability and art education. My research explored current ecological art education methodologies, sustainable classroom practices, and art making promoting ecological stewardship. Based on action-oriented research, I discovered that effectively introducing environmental education into art classroom practice requires thoughtful consideration of how it is implemented. Based on my findings, I created an online website resource (http://rebeccagilmartin.com/green art room) that promotes eco art education categorized into Eco Literacy, Sustainable Classroom Practices, Projects, Artists, and Resource Links. This curriculum resource is housed in my personal website at rebeccagilmartin.com.

Table of Contents: Title Page; UF Copyright Page; Acknowledgements; Abstract; Table of Contents; Introduction; Statement of Problem; Purpose and Goals of the Study; Research Questions; Rationale and Significance of the Study; Assumptions; Limitations; Definitions of Terms; Literature Review; State of the Environment;
Environmentalism and Sustainability in Art Education; Place-Based Pedagogy; Methodology; Data Analysis Procedures; Findings; Critical Place-Based Community Education; Sustainable Community and Classroom Practices; Artists Promoting Ecology; Summary Across All Findings; Discussion and Conclusion; Discussion and Interpretation of Findings; Significance, Implications, and Recommendations; Conclusion; References; Appendices; List of Figures and Figure Captions; Author Biography.

Introduction

You may be aware of the growing social movement of people concerned about the environment. Companies use labels such as eco-friendly, nature-friendly, and green to make environmental marketing claims when promoting their products. However, many recyclable products require appropriate action by the consumer to fulfill reprocessing claims. For example, during my past teaching experience as an art educator, my school location did not have a recycling program. Plastic bottles, paper products and other materials were discarded each day in the classroom. I can still recall the time at the end of the school year when I was left with many recyclable paint bottles that needed to be cleaned and placed into a recycling bin. Because of the lack of accessibility to a recycling company and knowledge of recycling requirements, sadly, they ended up in a nearby trash container filled with paper and plastic drinking bottles. I believe that educational institutions need a paradigm of responsible behavior toward environmental concerns that engages responsible material practices and stewardship of our planet. My research explores eco art education as it integrates art education with environmental education as a means of developing awareness of environmental concepts and issues, such as conservation, preservation, restoration and sustainability (Inwood, 2008, para. 3). According to Wallen (2014), at a time when the world is beset by ecological crises, art that aims at addressing environmental issues is worth examining to find solutions to the many problems facing the planet. Wallen argues specifically that the artistic and scientific roots of the practice can demonstrate the significant role that art can play in the initiation, development, and endorsement of a culture of sustainability. For example, seeing images of birds with plastic contents in their stomachs and understanding the data on quantities of materials being consumed daily can be very revealing. There is a great necessity for educators across disciplines to examine the human connection to the environment and examine educational strategies that will foster a greater possibility of promoting a change of consciousness toward environmental issues (Wallen, 2014).

Statement of the Problem

Recent reports about climate change and environmental destruction show the urgent relevance of attention to the environment. On a global scale, in a report covered by The Guardian (2013), the IPCC states that climate change is human-induced and that world leaders must now respond with policies to cut greenhouse gas emissions. If action is not taken, the consequences will be rising sea levels, heatwaves, and changes to rainfall.
Prof David MacKay, chief scientific adviser to the Department of Energy and Climate Change, said, "We need to take action now, to maximize our chances of being faced with impacts that we, and our children, can deal with" (Harvey, 2013). Another concern is plastic waste. According to the Campaign for Recycling (2013), plastic litter is the fastest growing component of the waste stream because plastic never biodegrades. Locally, communities are seeking to take action. New policies are currently being created to deal with the increasing amount of plastic polluting the earth. For example, the city of Los Angeles adopted an ordinance to ban plastic bags beginning in 2014 (Dpw.lacounty.gov, 2014). Now more than ever there is a need to integrate environmental literacy into education. Yet our current educational system promotes a disconnection from the current issues in our world. According to Smith (2002), the disconnection between children's lived experience and school learning has been exacerbated by our national preoccupation with standardized test scores (p. 586). Additionally, Gruenewald (2003) describes the standards and testing dominating today's educational discourse as curricula that discourage empathy and exploration of local places. He states, "classroom based research is inadequate to the larger tasks of cultural and ecological analysis" (Gruenewald, 2003, p. 4). Education programs do not prepare future teachers to create curricula designed to allow actual experience with the phenomenal world, so teachers largely accept and follow the mandates of standardization (Gruenewald, 2003). Historically, artists have supported community efforts by promoting awareness of environmental issues, transforming their concept over time. According to Krug & Siegenthaler (2006, para. 1), during the 1950s and 1960s, artists helped connect art with life-centered issues. A change in views happened from 1960 to 1990. Environmental artists in the 1960s to 1980s were less concerned with environmental issues and more concerned with land being a resource to create earthworks or land art (para. 3). Since the late 1970s, and continuing today, artists have been creating ecologically sound art intended to heal the environment (e.g., Alan Sonfist and Joseph Beuys). In the 1990s artists attempted to heighten people's awareness of the need for ecological sustainability through problem solving, shock, humor, and educational documentation (e.g., Chris Jordan and Lynne Hull). Today, contemporary ecological artists are actively involved in local and global advocacy (para. 5). Additionally, art educators have promoted eco art education but have struggled to embrace the practice. Art educators such as McFee (1961) stressed the connections that needed to be made between art, culture, meaning, and the environment. Later, a modern environmental movement called Earth Day began on April 22, 1970. According to Blandy & Hoffman (1993), during this movement scholars proposed an art education response to the problem and proposed that individuals who value aesthetic experience can also be sensitive to the environment. It was not until 1992 that the National Art Education Association addressed environmentalism at a national convention, "the land, the people, the ecology of art education" (Blandy & Hoffman, 1993, p. 31).
However, according to Blandy & Hoffman, it was disappointing that the conference organizers did not provide ecological alternatives to the usual convention practice (p. 31).

Purpose and Goals of the Study

My research aims to reveal ways of successfully implementing eco art education focused on methods of teaching, sustainable practice, and art making that promote ecological stewardship. My goal is to propose a new model for art education that aims to build meaningful, empathic connections between humans and the environment. As a tool for professional development, my website resource promotes eco art education categorized into sustainable classroom practices, eco literacy, projects, artists, and resource links.

Research Questions

The following questions direct my research toward environmental art education, sustainable classroom practice and ecological stewardship. 1. How can art educators effectively teach eco art education? 2. How can art educators implement a sustainable classroom practice? 3. How can art practices promote ecological stewardship?

Rationale and Significance of the Study

As environmental issues rise, developing values associated with taking responsible action is a suggested way of teaching, and needs to be further explored. Educational textbooks of the 19th century present the view that humans should dominate nature. For example, students were instructed to re-imagine and re-design nature through their art (Krug, 2003, para. 18). In contrast, as a contemporary approach, Graham (2007, p. 375) suggests a critical place-based pedagogy. A critical place-based pedagogy recognizes the experiences of a community grounded in shared understandings. A critical pedagogy of place strives to critically rethink our relationship to the environment. Research is needed in exploring effective art teaching methods that promote understanding of the complexity of environmental problems.

Assumptions

This project assumes that art teachers are interested in a human connection to environmental ethics, sustainability, and stewardship through art education. According to Lankford (1997), because of the complexity of ecological topics, teachers must be willing to plan interdisciplinary lessons which connect art to science, social studies, economics and community-related topics. I assume that teachers will undertake the challenges of introducing art that integrates a study of other subject areas, which may involve collaboration with other teachers and experts in the community. Thirdly, I assume that art teachers are open to pedagogical practices that encourage art students' investigation of places and engagement in critical thinking skills.

Limitations

This project included collecting a small set of data through personal journaling of responses from interviews with a limited number of participants in a localized geographic area. The questions for the interview focused on recycling. Questions did not specifically address energy conservation; however, interviewees volunteered this information. The time allowance for completion of the project limited the analysis of its impact in promoting environmental stewardship, but conclusions may prompt further research.

Definitions of Terms

Environmental education. According to the United States Environmental Protection Agency (n.d.), environmental education is a process that allows individuals to explore environmental issues, engage in problem solving, and take action to improve the environment.
As a result, individuals develop a deeper understanding of environmental issues and make responsible decisions (United States Environmental Protection Agency, What is Environmental Education?, retrieved from http://www2.epa.gov/education/what-environmental-education, 2014).

Environmental art. According to Sam Bower, executive director of greenmuseum.org, environmental art is an umbrella term encompassing the most common terms: "ecological art" (shorter version "eco art"), "land art", "earth art", "earthworks" and "art in nature" (A Profusion of Terms, greenmuseum.org, para. 3, 2010).

Ecological art. According to Sam Bower, executive director of greenmuseum.org, ecological art or "eco art" is a contemporary art movement that addresses environmental issues and often involves collaboration, restoration and eco-friendly methodology (A Profusion of Terms, greenmuseum.org, para. 8, 2010).

Eco art education. Hilary Inwood (2010), a university-based art educator, defines environmental art education (or eco art education) as education integrating "art education with environmental education as a means of developing awareness of and engagement with concepts such as interdependence, biodiversity, conservation, restoration, and sustainability" (para. 4).

Sustainability. According to the United States Environmental Protection Agency (n.d.), everything that we need for our survival and well-being depends, either directly or indirectly, on our natural environment. Sustainability creates and maintains the conditions under which humans and nature can exist in productive harmony, and that permit fulfilling the social, economic and other requirements of present and future generations (What is Sustainability?, EPA website, para. 1).

Environmental stewardship. According to the United States Environmental Protection Agency (n.d.), environmental stewardship is the responsibility for environmental quality shared by all those whose actions affect the environment (Environmental Stewardship, EPA website, para. 1).

Recycle. According to Oxford Dictionaries (2014), to recycle means to convert waste into usable material.

Repurpose. According to Oxford Dictionaries (2014), to repurpose means to adapt material for use in a different way.

Critical place-based education. Gruenewald (2003) describes critical place-based education as an educational approach encouraging teachers and students to reinhabit their places and to pursue the kind of social action that improves the social and ecological life of places, near and far, now and in the future (p. 7).

Literature Review

Research for this project began with a scholarly literature review on the current state of the environment. The literature review examined art education journals using key phrases such as environmental art education, sustainable classroom practice and environmental stewardship. There are scholarly articles written about the importance of environmental art education and its connections to art practice and stewardship. However, I found that more research could be done on effective ecological art teaching methods promoting sustainability.

The State of the Environment

Literature sources for this research ranged from science articles and government agencies such as the Intergovernmental Panel on Climate Change to recent news reports.
According to Kerry (2013), the top ten environmental concerns are related to climate change. Looking into the future, however, a recent report from The Guardian expert C. Tickell (2011) states that one of the top environmental concerns for the next forty years is the proliferation of our own species. The report further states that the health of humans is highly connected to our ocean, which is being polluted with toxic contamination from industrial runoff, plastic pollution, and acidification, all of which pose threats to the health of the world's population. Plastic pollution is now affecting every waterway, sea, and ocean in the world (Natural Resources Defense Council, 2014). According to the Environmental Protection Agency (2012), the amount of waste produced continues to rise. Between 1960 and 2007, the amount of trash generated in the U.S. nearly doubled, from 2.6 to 4.6 pounds per person per day. This waste has found its way to the oceans. Plastic does not biodegrade in the ocean; it breaks up into small pieces. According to the California Department of Toxic Substances Control, this is what we have to stop. In essence, the world population, eating fish that have eaten other fish, which have eaten toxin-saturated plastics, is eating its own waste (Lytle, 2014). Ocean pollution can be managed on a local level by changing human behavior. The Environmental Protection Agency (EPA) in 2012 reported that three of the top five types of marine litter are plastic bottles, plastic bags, and cans. These items are recyclable; however, lack of knowledge and infrastructure in municipalities may be limiting the best efforts to reduce, reuse, or recycle. Lytle (2014) further adds that the undeniable behavioral propensity toward increasingly over-consuming, discarding, littering and thus polluting is a major cause of the problem.

Environmentalism and Sustainability in Art Education

How can we connect environmental issues and sustainable actions to art education? According to Capra (2004), teachers can nurture the knowledge, skills, and values essential to sustainable living. According to Inwood (2008), environmental education is traditionally linked to science-based approaches; however, the sensory, subjective orientation typically found in art education may prove to be more effective in changing behaviors towards the environment, because art offers a dynamic way to increase the power and relevancy of learning about the environment. Art education has the ability to stimulate learners' minds and also touch their hearts, which can be a powerful approach to fostering ecological literacy (Inwood, 2010). Likewise, Anderson & Guyas (2012) add that we need a paradigm shift in our relation to the Earth, from a consumptive, dominating model to one embracing the idea that life has intrinsic value. This shift can be made through art and art education, which may be the most meaningful tool for influencing beliefs and values (Anderson & Guyas, 2012). Scholarly research on sustainable classroom practices, found using key phrases such as "environmental sustainable art practice", led to topics about classroom materials. According to Taylor (1997), trash continues to be a problem in the art room; as a result, the issue of sustainability will be central. There are also some incentives for recycling materials, such as Crayola markers (Perry, 2014).
Schoolsrecycle.planetark.org (2014) provides a guide for setting up a recycling system in schools. In addition, my research revealed ways in which sustainable practices can be encouraged in the classroom. For example, Elliott & Bartley (1998) describe ecologically based art activities designed for a high school curriculum. The course content is about exploration of materials, creation, and re-creation as part of the ongoing human ecosystem. Building on this research, I interviewed local teachers to find out what sustainable methods are practiced in the art room.

Place-Based Pedagogy

In the past, traditional environmental art experiences, such as having learners make nature drawings or use found materials, have fallen short of fully developing ecological literacy (Inwood, 2010). Recent scholars have begun to address place-based methods in promoting environmental education. Over a decade ago, Blandy and Hoffman (1993) encouraged "an art education of place" (p. 23), acknowledging that art can significantly contribute to how people live by influencing their perceptions and actions. More recently, Gruenewald (2003) suggests a critical pedagogy of place to challenge the assumptions, practices and outcomes taken for granted in dominant culture and conventional education. Scholars promote diverse ways of implementing a place-based approach in education. Because our current education system promotes a detachment from our world, Smith (2002) encourages place-based learning as an investigation of local natural phenomena. Additionally, Sanger (1997) describes the importance of the history of place: if students see themselves as part of a continuous line from the past, they can visualize their role in the future. Furthermore, Kushins & Brisman (2005) explore place by describing how teachers can foster awareness and respect for the environment by using the classroom as a learning space. The scholarship of Gruenewald (2003) and Kushins & Brisman (2005) is particularly important to this Project in Lieu of Thesis, as their research captures the importance of building art curricula that involve critical place-based education, promoting action specific to the classroom and intrinsically connected to the local community outside. These scholars explore critical place-based methods promoting environmental education; however, I found that more experimentation with, and documentation of, practical approaches and their effectiveness could be explored.

Methodology

I conducted two studies for my Project in Lieu of Thesis. Both studies were conducted within a local community, within an approximate twenty-mile radius. I conducted the first study with fifty-three participants involved in an art project. Participants in the first study ranged in age from two to adult, within a three-hour time frame. As Graham (2007) stated, education that ignores issues of ecology and community becomes complicit in their erosion. The first strategy in my research was to explore sustainable practices in the local community. I visited the Repurpose Project in Gainesville, Florida, a local non-profit organization that gathers items from the community that are headed for landfills. I discovered that they were amassing large numbers of plastic bottle caps, because the caps are not recyclable by the local services. I thought that it would be an interesting challenge to repurpose the bottle caps as material for an ecological art project.
I also envisioned the project being a vehicle for promoting the Repurpose Project's efforts, as well as an opportunity to engage families in a critical dialogue that promoted action about recycling. At the same time, I discovered that a compelling project needed to be developed and implemented for the February 2014 Family Day event, Kongo Across the Waters, at the Samuel P. Harn Museum in Gainesville. Combining my research on sustainability with inspiration from the museum's needs, I developed a workshop (see Appendix A) for the Family Day event at the Harn Museum. The preparation for the event included coordination with the museum education curator, interns, and staff. There were several requirements for material preparations. The materials for the event needed to accommodate up to two hundred participants. I estimated that nine bottle caps would complete one personal adornment. Sixteen hundred bottle caps, as well as two hundred larger caps, were needed for the workshop. Because of the three-hour time allowance for the workshop, an art process that did not require drying time was necessary so that participants could take their artwork with them. Additionally, primer and black paint needed to be applied to the larger caps and dried prior to the day of the event so that participants could decorate them with oil pastels. Yarn was pre-cut and used for connecting the materials. I wanted the final art product to be recyclable as well: if materials were connected with yarn, without any gluing processes, the final art piece could be disassembled for reuse or recycling. The research for this project included observations of the participants at the event and a survey (see Appendix B) filled out by the participants as an indicator of the achievement of the project. In the second study, I collected input from government agencies and art teachers. The second study occurred within a four-week time span, in which I gathered information through forty-minute interviews consisting of nine questions. I interviewed a total of seven art educators and five government agents about the effectiveness of environmental sustainable practices in the community and classroom. I submitted and received approval from the Institutional Review Board (IRB) for the protocol covering documentation of the community workshop and interviews (see Appendix C). Participants' identities are protected through the use of pseudonyms. The workshop and interview insights, along with continued research, informed my website resource. In addition, my research focused on environmental education in the art classroom. I used a combination of research methods, including historical and philosophical approaches, in reviewing science and educational resources. According to Koroscik & Kowalchuk (1997), historical inquiry involves collecting, evaluating, and interpreting data related to past events. For this project, I not only reviewed the historical connection between environmental education and art, I also collected information on current approaches to eco art education, sustainable classroom practices, and contemporary ecological artists.

Data Analysis Procedures

During both studies, I gathered data by a method of action-oriented research. According to May (1993), action-oriented research is the study and enhancement of one's own practice.
The primary purpose of action research is to gain a better understanding of ones beliefs and practice and pay closer attention to what students say and do in class so as to understand what sense students are making of their learning. After collecting the interviews, images, and journal notes, the data was analyzed by looking for patterns, similarities, disparities, trends, and other relationships in interpreting their meaning. Findings Thi s section addresses t he findings and is divided into three sections. T he first section articulates my observations from a community workshop using a c ritically place based method of teaching. The second section compares and contrasts findings from governme nt agencies in the local community and teachers in the classroom setting. The t hird section highlights artists focused on ecology of place and a description of the website I created for disseminating the results of this project. Critical Place B ased Community Education My findings show that a critic al place based method of teaching promote s ecological awareness of the local community. During my academic studies, I enrolled in a sketchb ook course emphasizing a place based approach to learning. I was required to create sketchbook entries from observations of various places in the local community. This local investigation of the community informed me about the local ecology and resou rces promoting sustainability. Additionally, t his investigatio n led to my creation of the eco art project for a community workshop that I will later describe. The workshop created a unique opportunity to join several community entities, a local business, the university and the community at large. Within this diverse group, an opportunity was created to engage in a critical dialogue about recycling.
http://ufdc.ufl.edu/AA00025518/00001
Fianna Fáil outlines how it would tackle the beef crisis Fianna Fáil has put forward a number of measures that should be taken by the Government and new EU Commissioner Designate Phil Hogan to address the ongoing beef crisis, which is adversely affecting farmers all over the country. Party Agriculture Spokesperson Éamon Ó Cuív TD commented: “Over the past 12 months farmers all over the country have been at the receiving end of predatory exploitation by large supermarkets and processors. Farmers have been forced to accept losses of up to €300 per head, which is driving many to the brink of poverty. Some supermarkets are selling fruit and vegetables at below cost price, which is unsustainable and putting huge pressure on already struggling farmers. Liquid milk farmers are also receiving a poor return for their produce due to pressure from supermarkets. “As the largest indigenous industry in the country, the agri-food sector must be protected against these unfair and exploitative practices. So far the response from the Government and Minister Simon Coveney has been extremely poor. “Fianna Fáil is proposing a number of measures that can be taken to ease the pressure on farmers, who are not receiving a fair price for their produce, despite this being outlined explicitly in the Lisbon Treaty. “Fianna Fáil is calling on Minister Simon Coveney to immediately establish a €200 per head beef genomics scheme to boost farm incomes in the short term. An independent beef regulator should also be appointed to ensure that the exploitation of farmers by large multiples is stopped. “We are also calling on the new EU Commissioner Designate for Agriculture Phil Hogan to ensure article 39b of the Lisbon Treaty, which states that farmers should get a fair price for their product, is enforced.”
https://www.agriland.ie/farming-news/fianna-fail-outline-tackle-beef-crisis/
Cite this item: Carty, S. A. (2012). The Out of the Box intervention: The complexity of family food cultures (Thesis, Master of Science). University of Otago. Retrieved from http://hdl.handle.net/10523/2631

Permanent link to OUR Archive version: http://hdl.handle.net/10523/2631

Abstract:

Background: The aim of this theory-guided constructivist research was to explore factors that influence fruit and vegetable consumption in low- and high-income households (a “household” was defined as one or more individuals living together in a dwelling). This research was designed to control for availability, accessibility and affordability of fruit and vegetables in order to identify other resources households need to consume more fruit and vegetables.

Primary methods: An adapted ethnographic approach was used to observe twenty households in their home environment for three months. An even number of low- and high-income households representing a range of household types were recruited from across New Zealand. Each household received a free box of fresh fruit and vegetables each week, delivered to their home, and were home-visited on two occasions each week by a researcher. Observations, discussions and interventions were documented using field notes and digital technology. The researcher responsible for data collection manually coded the expanded field notes in light of the Sustainable Livelihoods Framework to develop individual household reports.

Secondary methods: For the research presented in this thesis, five low- and five high-income households with children were purposively selected from the households participating in the primary research. An inductive thematic analysis of the expanded field notes using MAXQDA software was conducted. Eating behaviour is extremely complex and a myriad of factors have been shown to affect it. This approach enabled an analysis of the everyday experiences of participating households.

Results: The data was organised under five major themes to describe factors influencing fruit and vegetable consumption at a household-level: early life exposures, individualised drivers to consume fruit and vegetables, adaptations to the household’s evolving socio-cultural food environment, social connectedness, and external organisations and the built environment. This research suggests that households with children required a range of resources to consume fruit and vegetables, and financial resource was not the only resource contributing to the social gradient in healthy eating. The socio-cultural context of the home environment was central to families’ eating behaviours. Family food cultures were dynamic and resources changed over time. Even when free fruit and vegetables were delivered to the home, families required human resource (personal drivers influenced by early life exposure and household dynamics) and external social networks to make use of them. When resources were limited within a household, there was a greater dependence on external organisations.

Implications: Future researchers, policy makers and practitioners attempting to improve the eating habits of low-income households need to consider the breadth of resources households need to achieve this outcome. In addition, the complexity of resource access and utilisation in an evolving home environment must be considered.
https://ourarchive.otago.ac.nz/handle/10523/2631
Introduction: This article reflects on data that emanated from a programme evaluation and focuses on a concept we label ‘distributed-efficacy’. We argue that the process of developing and sustaining ‘distributed-efficacy’ is complex and indeterminate, and thus difficult to manage or predict. We situate the discussion within the context of UNAIDS’ recent strategy, Vision 95:95:95, to ‘end AIDS’ by 2030, which the South African National Department of Health is currently rolling out across the country. Method: A qualitative method was applied. It included a Value Network Analysis, the Most Significant Change technique and a thematic content analysis of factors associated with a ‘competent community’ model. During the analysis it was noticed that there were unexpected references to a shift in social relations. This prompted a re-analysis of the narrative findings using a second thematic content analysis that focused on factors associated with complexity science, the environmental sciences and shifts in social relations. Findings: The efficacy associated with new social practices relating to HIV risk-reduction was distributed amongst networks, including mother-son networks and participant-facilitator networks, and included a shift in social relations within these networks. Discussion: It is suggested that the emergence of new social practices requires the establishment of ‘distributed-efficacy’, which facilitates localised social sanctioning, sometimes including shifts in social relations, and that this process is a ‘complex’, dialectical interplay between ‘agency’ and ‘structure’. Conclusion: The ambition of ‘ending AIDS’ by 2030 represents a compressed timeframe that will require the uptake of multiple new bio-social practices. This will involve many nonlinear, complex challenges, and the process of developing ‘distributed-efficacy’ could play a role in this process. Further research into the factors we identified as being associated with ‘distributed-efficacy’ (relationships, modes of agency and shifts in social relations) could add value to achieving Vision 95:95:95.
https://repository.nwu.ac.za/handle/10394/14274
Nature-based Solutions (NbS) that enhance climate resilience are pragmatic solutions building on the services and resources provided by ecosystems and biodiversity, which are more sustainable, robust and often more cost-efficient than conventional man-made solutions alone. Here are the main findings and key policy recommendations from the policy paper "Outsmart climate change: work with nature! Enhancing the Mediterranean’s climate resilience through Nature-based Solutions."

MAIN FINDINGS

1. The Mediterranean region has been identified as a climate change ‘hotspot’. Average temperatures in the region have already risen to 1.6°C above pre-industrial levels, while a temperature rise of 2-3°C by 2050, and a rise of 3-5°C by 2100, has been forecast for the region (IPCC, 2013). This will lead to an increased frequency of extreme weather events, such as droughts, heat waves, storms and floods.

2. Conventional infrastructure alone will not be able to cope with this new, highly dynamic and challenging context, which implies a significant level of uncertainty. Robust but flexible solutions are needed to help societies adapt.

3. Biodiversity and healthy ecosystems provide a broad range of services, through Nature-based Solutions (NbS), in terms of adaptation to and mitigation of climate change, and can increase society’s overall resilience to stresses and shocks (FAO, 2019).

4. NbS are generally robust, flexible, cost-efficient, inclusive and long-term-oriented solutions. Standing alone or combined with man-made solutions, they also offer co-benefits related to food security, livelihoods, improved health and well-being, water regulation and disaster risk reduction, while contributing to nature conservation and restoration.

5. To facilitate the deployment and implementation of NbS and fully reap their benefits, shifts in mindsets, public policies (including legal and regulatory frameworks), and sound investment opportunities are needed. This will help to overcome current barriers and allow NbS to reach their maximum potential.

KEY POLICY RECOMMENDATIONS

1. Within the framework of the Barcelona Convention, develop a strategy to fully integrate NbS into national policies across all sectors so as to significantly enhance countries’ climate resilience by 2030.

2. In particular, mainstream NbS into national plans for climate mitigation and adaptation, such as the NDCs (Nationally Determined Contributions) and NAPs (National Adaptation Plans) required under the Paris Agreement, and DRR (Disaster Risk Reduction) plans under the Sendai Framework.

3. Foster “Green City” schemes throughout the region to improve citizens’ resilience to heat waves, flood surges, coastal erosion, and possible water and food shortages.

4. Promote sustainable and biodiversity-friendly practices and initiatives in the fields of agriculture and aquaculture, such as agroecology, local integrated nature-based production systems and sustainable fisheries, to secure food security, rural and coastal livelihoods and employment opportunities.

5. Manage coastal and marine ecosystems, including wetlands, in a sustainable manner to enhance their capacity as carbon sinks and buffers, restore depleted fish stocks and protect marine biodiversity.
6. Overall, implement adequate institutional structures, economic incentives and land tenure instruments to facilitate the uptake and implementation of NbS and overcome existing obstacles, with a view to moving towards a blue-green and circular economy and ensuring society’s long-term resilience.

Read the full policy paper here.
https://buildersproject.eu/news/3/nature-based-solutions
GPON, FTTx, Project Management, Fiber Optic Technology, Internet Troubleshooting.

Manages GPON infrastructure and operations. Supports the team manager and performs management duties when the manager is absent or out of office. Manages inventories and stock, including keeping detailed records of inventory use and sales, and advising management on ordering where necessary. Provides encouragement to team members, including communicating team goals and identifying areas for new training or skill checks. Assists management with hiring processes and new team member training.

Working as a Field Support Engineer at Wi-Tribe Pakistan, responsible for:
Providing guidance and training to customer personnel, and educating customers about product operation and maintenance procedures.
Coordinating onsite contractors for facility support as necessary.
Supervising all product installations and the related repair and maintenance activities when required.
Completing service orders and service reports in a timely manner.
Monitoring issue resolution status and time closely.
Acting as the main point of contact for customers for any complaints, inquiries and issues.
Updating technical manuals and reference guides with recent product updates and developments.
Developing training programs to help support engineers acquire the necessary product expertise.
Taking ownership of customer issues and requirements, ensuring they are resolved in a timely manner.
Determining and scheduling appropriate support requirements for a region or area, and participating in negotiating service level agreements.
Identifying and addressing the training and development needs of project managers and team staff.
https://www.rozee.pk/UR/people/1587030/muhammad.mohtashim.khan
“Pierre Cardin: Future Fashion traces the legendary career of one of the fashion world’s most innovative designers, one whose futuristic designs and trailblazing efforts to democratize high fashion for the masses pushed the boundaries of the industry for more than seven decades. The retrospective exhibition features over 170 objects that date from the 1950s to the present, including haute couture and ready-to-wear garments, accessories, photographs, film, and other materials drawn primarily from the Pierre Cardin archive. Pierre Cardin: Future Fashion, curated by Matthew Yokobosky, Senior Curator of Fashion and Material Culture, Brooklyn Museum, will reveal how the designer’s bold, futuristic aesthetic had a pervasive influence not only on fashion, but on other forms of design that extended beyond clothing to furniture, industrial design, and more. Pierre Cardin (French, b. 1922) is best known for his avant-garde Space Age designs and pioneering advances in ready-to-wear and unisex fashion. Cardin’s fascination with new technologies and the international fervor of the 1960s Space Race visibly influenced his couture apparel, which subsequently became emblematic of the era. His clothing designs, which featured geometric silhouettes and were often made from unconventional materials, were worn by international models and film stars from Brigitte Bardot and Lauren Bacall to Alain Delon, Jacqueline Kennedy, and Raquel Welch. Fueled by an appetite for experimentation and ‘breaking the mold,’ he was one of the first European designers to show in Japan, China, and Vietnam and license his name, using it to brand an expansive line of products on a global scale.” — Brooklyn Museum Photographs by Corrado Serra.
https://artssummary.com/2019/07/21/pierre-cardin-future-fashion-at-brooklyn-museum-july-20-2019-january-5-2020/?shared=email&msg=fail
Let q = g - -84245/20826. Find the common denominator of q and 53/10. 90 Calculate the common denominator of 79/105 and 1716/(-3520) - 2/(-15). 1680 Let m be (-4)/2 - 1798 - -2. Let c = 10765/6 + m. Let h = -7379/12 - -614. What is the common denominator of h and c? 12 Let c = -81 + -33. Let u = c - -124. Calculate the smallest common multiple of 17 and u. 170 Suppose 4*i - 240 = -0*i + 2*w, i + 3*w = 60. Let p = i + -40. Suppose 0 = 3*y - 12. What is the smallest common multiple of p and y? 20 Suppose 19 + 17 = -6*r. Let s = -26 + 38. Let f = r + s. Calculate the least common multiple of f and 5. 30 What is the lowest common multiple of ((-80)/12)/(30/(-108)) and 144? 144 Suppose 0 = 2*c + 948 - 1488. What is the least common multiple of 27 and c? 270 Let g(o) = -15*o + 466. What is the lowest common multiple of 1 and g(31)? 1 Let w = -29 - -5. What is the common denominator of (-2)/9*(-354)/8 and ((-2)/(-1))/(w/(-226))? 6 Let g = -97 + 106. What is the least common multiple of g and 8? 72 Suppose -39*j + 37*j + 108 = 0. Calculate the smallest common multiple of j and 4. 108 Let c be (30/9)/((-6404)/(-222)). Let p = 3419903/29554460 - c. Let l = p - 784583/203060. Calculate the common denominator of -21/5 and l. 110 Suppose 2*q + 3 = -q. Let c(n) = -89*n - 2 + 177*n - 92*n. Calculate the smallest common multiple of 1 and c(q). 2 Let w(n) = -n**3 - 8*n**2 + 8*n + 5. Let h be w(-9). Let p = h - 5. Suppose -3*g = 5*c - p - 3, 3*c = 4*g - 16. What is the lowest common multiple of 20 and g? 20 Suppose 43 = 5*t - t - 5*m, 2*t + m - 25 = 0. Suppose 0 = t*l - l - 88. Calculate the least common multiple of 16 and l. 16 Let g(x) = 22 + x - 3 + 17. Calculate the least common multiple of g(0) and 24. 72 Let p = -16229/6 + 2721. Suppose 5*i + 26 + 1 = 2*a, 2*a + 4*i = 0. Find the common denominator of p and (a/45)/((-4)/745). 6 Let x(y) = 10*y - 155. What is the least common multiple of 21 and x(17)? 105 Let n = 174043/14604 - 1/1217. Find the common denominator of n and 2/(-6)*189/684. 228 Let c = -78 + 66. Find the common denominator of c*((-660)/128)/11 and 97/62. 248 Let i(d) = -d**2 + 14*d + 8. Let r be i(14). Suppose 180 = -3*v - 3*n, -r*v + 3*n = -4*v + 240. Let w = -42 - v. What is the lowest common multiple of 2 and w? 18 Suppose -2*f - 8 = -4*f. Suppose 0 = -4*b + 5*g + 11 - 4, -f*b + 11 = -g. Calculate the smallest common multiple of 10 and b. 30 What is the lowest common multiple of ((-19)/3 + 2)*-3 and 4 + -5*(-36)/30? 130 Calculate the common denominator of ((-1010)/(-12))/(-24 + 29) and 21/1076. 3228 Let l(k) = 15*k - 18. Let v(u) = -u + 1. Let c(d) = -5*l(d) - 90*v(d). Suppose s - 41 + 36 = 0. Calculate the smallest common multiple of s and c(1). 15 Calculate the common denominator of -89/48 and (81/(-6))/(360/135). 48 Let q be ((-35)/(-45))/(1/995629). Let i = -772395 + q. Let l = 1981 - i. Calculate the common denominator of 17/9 and l. 9 Let y be (5 - 8) + 111093528952125298/(-44728944). Let i = 106799341325/43 + y. Let h = 2/21671 + i. Calculate the common denominator of -19/4 and h. 24 Let b be (-202999)/(-24) - (-27)/(-72). Let o = b + -8449. Find the common denominator of o and -73/18. 36 Suppose -2*z - d = -47, -d + 1 = -2. Suppose 0 = 9*p - 7*p - z. Calculate the least common multiple of p and 9. 99 Suppose -12 = 4*z - 52. What is the least common multiple of z and 75/7 + (-4)/(-14)? 110 Let l = 9 + -9. Suppose l = 7*a - 36 + 8. What is the common denominator of a/(-1) - (-3 + -35) and -101/14? 14 Let f = -3 + 7. Suppose 5*y - 3*t = y + 168, -4*y = f*t - 140.
What is the common denominator of y - (3 - 3)/(-2) and 83/4? 4 Suppose -4*y + 90 = -y - 3*v, -y + 38 = 3*v. Calculate the lowest common multiple of 8 and y. 32 Let d = -1073/3 - -6049/6. Let n = 181140 - 1454389/8. Let m = n + d. What is the common denominator of m and 93/4? 8 Calculate the common denominator of (-53)/(-4)*(-18)/90 and -7/16. 80 Let h be 209380/(-6200) + 1*-5. Let y = -42/155 - h. Find the common denominator of 61/14 and y. 14 Let p be (850/15)/((-2)/(-3)). Suppose -p = -13*f + 45. What is the lowest common multiple of f and 11? 110 Let s = 1536 - 1461. What is the smallest common multiple of s and 150? 150 Find the common denominator of 1778/(-4410) - 12/42 and 133/6. 90 Let m = -659 + 31619/48. Find the common denominator of 77/36 and m. 144 Let b(n) = n**2 - 8*n - 4. Let x be b(4). Find the common denominator of -61/2 and x/(-45)*(-426)/16. 6 Calculate the common denominator of (-96)/(-176) - 876/154 and 18/7. 7 Let y be 529665/160*(-2)/(-1428). Let m = y + 7/1088. Calculate the common denominator of m and -5 + 3 - (-142)/24. 84 Let i = -1 + 3. Let v be (0 - 19)*(2 - -2)/(-4). Suppose 35 = 2*k + v. Calculate the least common multiple of i and k. 8 Find the common denominator of (-5)/110*3/2*-42 and (-402)/15*2/8*-1. 110 Let g(d) = d**3 + d**2 - 8*d + 64. What is the lowest common multiple of 12 and g(0)? 192 Let x(t) = -2*t - 3. Let c be x(-3). Suppose 5*m + 14 = 4, -c*m - 1 = z. What is the smallest common multiple of 5 and z? 5 Let p be (-6)/(-51) - (-7375)/(-56185). Let c = p - 54719/10576. Find the common denominator of c and 115/54. 432 Suppose 2*b - 27 = -3*w, -5*b + 2*w + w = -99. Suppose -o + 144 = 3*o - 2*p, -o = 2*p - 26. Calculate the smallest common multiple of o and b. 306 Suppose 3*p + 47 = 62. Let k = 22 - 13. Calculate the lowest common multiple of k and p. 45 Let f = -24 + 35. Let w be (-120)/(-48)*(-4)/(-2). Suppose -c - 3*g - 3 = -0*g, 4*c + w*g = 2. What is the smallest common multiple of f and c? 33 Let w(m) = m**3 + 6*m**2 - 3*m + 4. Let f = 10 - 16. What is the smallest common multiple of w(f) and 8? 88 Let u = 1103/8 + -138. What is the common denominator of u and 63/310? 1240 Let m(x) = -2*x**2 - 17*x + 14. What is the smallest common multiple of 154 and m(-8)? 154 Let t = -6742 - -6752. Let s be (-3)/2*(-2)/3. Calculate the smallest common multiple of (1 + s)*(-11)/(-2) and t. 110 Let f be 3*2 + -2 + 0. Let x(i) = i**3 - 5*i**2 + 2*i - 1. Let b be x(2). What is the common denominator of (-112)/b*(-1)/f and -89/2? 18 Let l(z) = -z**2 - 11*z + 28. What is the least common multiple of 10 and l(-13)? 10 Let m(f) = 2*f - 27. Let t be m(15). Calculate the common denominator of t + (786/20 - 7/(-35)) and -87/14. 14 Suppose w - 2*a = 2*a + 48, -267 = -4*w + a. Calculate the least common multiple of 4 and w. 68 Let q = 6535941/323 - 20235. What is the common denominator of 89/4 and q? 1292 Let k = 18472031557 - 461469413838/25. Let b = -13253174 + k. Let f = b - 1829. Calculate the common denominator of -133/6 and f. 150 Calculate the common denominator of -45/26 and (-1 - -1) + (-5 - (-205)/110). 286 What is the common denominator of 77/8 and (-3)/168*321 - -4? 56 Calculate the common denominator of -99/76 and 0 + ((-4)/20)/(4/250). 76 Let a = -29 + 37. Calculate the common denominator of (2 + (-38)/a)/((-15)/3) and 95/4. 20 Calculate the common denominator of (0 - 0/3) + 1356/504 and 19/30. 210 Suppose -4*x + 4 = 0, -3*w + 0*x - 5*x + 23 = 0. Suppose 38 - w = 8*i. What is the least common multiple of 8 and (-204)/(-12) - (1 - i)? 
40 Let t = -24/617 + -71853/8638. Find the common denominator of t and 63/68. 476 Let m be -5*((-376)/(-20) + 1). Let c = m + 101. Calculate the least common multiple of 18 and c. 18 Suppose 0 = -z + 5*o + 538, 4*o = 4*z - 7*z + 1614. Calculate the smallest common multiple of z and 4. 1076 Suppose 22*q - 88 = 18*q. Find the common denominator of (54/4)/(q/6) and -31/14. 154 Calculate the common denominator of -4*1 - (-287)/(-656) and 21/68. 272 Calculate the lowest common multiple of (-18)/45*5 + 70 and 187. 748 Let f = -291021/5 - -58459. Let y = 9073/35 - f. Calculate the common denominator of ((-144)/110)/(4/(-10)) and y. 77 Let s(r) be the third derivative of -r**4/12 + r**3/6 + 9*r**2. Calculate the least common multiple of 3 and s(-5). 33 Suppose 5*s - 4 = 2*t, -3*s = -2*t - 0 - 4. Let o be (96/18)/(t/108). Find the common denominator of 3/10*o/(-27) and -75/8. 40 Suppose 0 = 4*t - 22*t + 360. Suppose -5 = -3*a + 7. Calculate the smallest common multiple of a and t. 20 Let r be -3 - (3 - 4)*-4. Let s = r + 9. Suppose -2*y - s*y + 12 = 0. What is the lowest common multiple of 10 and y? 30 Suppose -10*w
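The exercises above reduce to two computations: the least common multiple (LCM) of two integers, and the "common denominator" of two fractions, which is the LCM of the two denominators once each fraction is in lowest terms. The following is a minimal Python sketch for checking answers of this kind; it is illustrative only, and the helper names are mine, not part of the exercise set.

from fractions import Fraction
from math import gcd

def lcm(a: int, b: int) -> int:
    # Least common multiple of two nonzero integers.
    return abs(a * b) // gcd(a, b)

def common_denominator(x: Fraction, y: Fraction) -> int:
    # Fraction(...) reduces to lowest terms automatically, so the smallest
    # shared denominator is the LCM of the two reduced denominators.
    return lcm(x.denominator, y.denominator)

# "Let s = 1536 - 1461. What is the smallest common multiple of s and 150?"
print(lcm(1536 - 1461, 150))  # 150

# "Calculate the common denominator of -89/48 and (81/(-6))/(360/135)."
print(common_denominator(Fraction(-89, 48),
                         Fraction(81, -6) / Fraction(360, 135)))  # 48

Both printed values match the answers given in the exercises above (150 and 48).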
Mindfulness-Based Relationship Enhancement

BEHAVIOR THERAPY 35, 2004

James W. Carson, Kimberly M. Carson, Karen M. Gil, and Donald H. Baucom
University of North Carolina at Chapel Hill

Using a randomized wait-list controlled design, this study evaluated the effects of a novel intervention, mindfulness-based relationship enhancement, designed to enrich the relationships of relatively happy, nondistressed couples. Results suggested the intervention was efficacious in (a) favorably impacting couples' levels of relationship satisfaction, autonomy, relatedness, closeness, acceptance of one another, and relationship distress; (b) beneficially affecting individuals' optimism, spirituality, relaxation, and psychological distress; and (c) maintaining benefits at 3-month follow-up. Those who practiced mindfulness more had better outcomes, and within-person analyses of diary measures showed greater mindfulness practice on a given day was associated on several consecutive days with improved levels of relationship happiness, relationship stress, stress coping efficacy, and overall stress.

Author note: This work is the result of a dissertation completed by James W. Carson in fulfillment of Ph.D. requirements at the University of North Carolina at Chapel Hill under the direction of Karen M. Gil, Ph.D., and was partially supported by a grant from the University Research Council at the University of North Carolina. We acknowledge our gratitude to all those who contributed to this study, and especially to Jon Kabat-Zinn, Saki Santorelli, and their colleagues at the Center for Mindfulness at the University of Massachusetts Medical School, for their inspiring example and many years of work in applying mindfulness to people's needs. Correspondence concerning this article, including information about the treatment manual, should be addressed to James W. Carson, who is now at Duke University Medical Center, Department of Psychiatry, Box 90399, Durham, NC 27708.

The field of intimate relationships, while largely characterized by a focus on distressed or at-risk couples, has long harbored a prominent precursor to the current positive psychology movement (Seligman & Csikszentmihalyi, 2000). Couples researchers have elucidated the origins of love and intimacy (e.g., Berscheid & Walster, 1978) and, more recently, have focused on the dynamics of well-functioning relationships (Wenzel & Harvey, 2001). Strengthening the relationships of even well-functioning couples may lead to important benefits, such as improvement in abilities to overcome life challenges and enhancements in parenting and child outcomes (Sayers, Kohn, & Heavey, 1998). However, there have been few if any controlled trials of interventions specifically aimed at enhancing the relationships of well-functioning couples. Although a few studies have combined distressed and nondistressed couples (e.g., Ross, Baker, & Guerney, 1985), nearly all clinical researchers have focused on developing effective therapies for distressed couples (e.g., Baucom, Hahlweg, & Kuschel, 2003; Greenberg & Johnson, 1988; Snyder & Wills, 1989) or early prevention interventions for premarital or at-risk couples (Guerney, 1977; Markman, Floyd, Stanley, & Storaasli, 1988).
The aim of the present randomized controlled study was to test the efficacy of a novel couples program, Mindfulness-Based Relationship Enhancement. Mindfulness meditation methods foster greater awareness, ease, and fresh discovery in all of life's experiences, with the ultimate purpose of enhancing access to innate resources of joy, compassion, and connectedness. Mindfulness has been described as the ability to remain focused on the reality of the present moment, accepting and opening to it, without getting caught up in elaborative thoughts or emotional reactions to situations (Kabat-Zinn, 1990). Mindfulness techniques are used to develop a perspective on thoughts and feelings that cultivates recognition of them as passing events in the mind, rather than identifying with them or treating them as necessarily accurate reflections of reality. By practicing the skills of moment-to-moment awareness, people seek to gain insight into patterns in their thoughts, feelings, and interactions with others, and to skillfully choose helpful responses rather than automatically reacting in habitual, overlearned ways (Teasdale et al., 2000). In recent years mindfulness has been applied efficaciously in several interventions. Applications of the Mindfulness-Based Stress Reduction program (Kabat-Zinn, 1982) have been empirically supported across a variety of nonclinical (e.g., Shapiro, Schwartz, & Bonner, 1998) and clinical populations (depression: Teasdale et al., 2000; cancer: Speca, Carlson, Goodey, & Angen, 2000; psoriasis: Kabat-Zinn et al., 1998). Promotion of a mindful perspective is also integral to Dialectical Behavior Therapy (Linehan, 1993) and Acceptance and Commitment Therapy (Bach & Hayes, 2002), albeit without training in mindfulness meditation per se. For the present study, we adapted the Mindfulness-Based Stress Reduction program to enhance the relationships of nondistressed couples. Building on the notion that healthy individual functioning is important to successful marriages, current reviewers of the couples literature (e.g., Sayers et al., 1998) have advocated the development of programs aimed in part at boosting individual partners' stress coping skills. One application of a stress coping approach has demonstrated promising results in quasi-experimental studies (Bodenmann, Charvoz, Cina, & Widmer, 2001). The theoretical foundation for testing a mindfulness approach to boosting partners' stress coping skills and enhancing their relationships was based on three salient aspects of this type of intervention, as follows: First, mindfulness meditation, like other meditation techniques (Benson, Beary, & Carol, 1974), is likely to promote the well-known relaxation response, resulting in psychophysiological changes that are the opposite of those of stress-induced hyperarousal. Researchers have suggested that psychophysiologically soothing techniques are likely to translate into a calmer approach to shared difficulties and challenges (Gottman, 1993). Second, in mindfulness a fundamental emphasis is placed on the acceptance of one's experiences without judgment. Through acceptance, participants often report an increase in the compassion they feel for themselves and greater empathy for others (Shapiro et al., 1998). Notably, theorists in the area of enhancement of healthy relationships endorse the importance of acceptance (Wenzel & Harvey, 2001), as do numerous marital therapy researchers (e.g., Christensen & Jacobson, 2000).
Third, mindfulness appears to have wide generality in its effects. In keeping with the tenets of positive psychology, mindfulness is highlighted as a way of being in all of life experience, rather than a way to cope with specific troublesome aspects of life (Kabat-Zinn, 1990). This global approach of incorporating all experiences, whether enjoyable or difficult, into mindful, nonjudging awareness appears to be particularly applicable to optimal interpersonal functioning. We hypothesized that the mindfulness condition would be superior to the wait-list condition on both summary and daily measures of relationship and individual functioning. Specifically, we hypothesized that those in the intervention would demonstrate benefits on (a) measures of relationship satisfaction, autonomy, relatedness, closeness, acceptance of partner, daily relationship happiness, and daily relationship stress, as well as on (b) measures of individual well-being including optimism, spirituality, individual relaxation, psychological distress, daily coping efficacy, and daily overall stress. Moreover, we tested whether mindfulness couples would demonstrate greater resilience to the impact of daily stress, and whether day-to-day time spent in mindfulness practice would predict same-day or following-days levels of relationship happiness, relationship stress, stress coping efficacy, and overall stress.

Method

Participants

The participants were 44 nondistressed heterosexual couples (22 intervention, 22 wait-list) recruited principally from employees and their partners at a major hospital via advertisements placed in employee newsletters and gathering places. To qualify for the investigation, a couple had to be married or cohabitating for at least 12 months, surpass relationship distress and psychological distress cutoff criteria (T score of 58 on the Global Distress Scale, Snyder, 1997; T score of 65 on the General Severity Index of the Brief Symptom Inventory, Derogatis & Melisaratos, 1983), and could not be practicing meditation or yoga exercises on a regular basis. The mean age of the participating women was 37 years (SD = 10.9, range 23 to 69) and of the men was 39 years (SD = 12.4, range 24 to 69). Both the women and men were mostly very well-educated (82% of women and 63% of men had done graduate-level studies), had at least one child, and all were Caucasian except for one African American woman. Thirty-seven couples were married, and 7 were cohabitating. The mean duration of their relationships was 11 years.

Overall Design and General Procedures

Structured screening interviews were held approximately 1 month before the beginning of intervention cycles. The program was described as a challenging opportunity to develop inner resources for growth and change. Interviewees were informed of immediate-entry versus wait-list randomization procedures, and emphasis was placed on commitment to attend sessions and complete homework assignments during their assigned intervention. Participating couples were assigned to one of two conditions. The Mindfulness-Based Relationship Enhancement condition (6 to 8 couples per group), consisting of 8 weekly 150-minute group sessions plus a full-day retreat, provided training in mindfulness meditation methods. The wait-list control condition, in which couples tracked their daily stress levels at specific intervals, controlled for the effects of measurement reactivity in couples not currently receiving the intervention.
After the completion of follow-up measures, wait-list couples were invited to participate in the intervention program; however, data from their participation in the program were not used in the study.

Measures

Summary measures were administered before and after the intervention and 3 months later, and daily measures were recorded for 2 preintervention weeks (baseline period, collected just prior to the intervention) and the final 3 weeks of the 8-week program (treatment period, collected immediately after the intervention ended). Summary measures were selected to tap two distinct (though related) outcome domains that the intervention might affect: relationship functioning (relationship satisfaction, autonomy, relatedness, closeness, acceptance of partner, and relationship distress) and individual well-being (optimism, spirituality, individual relaxation, and psychological distress). Diary measures also assessed these two domains (daily relationship satisfaction and relationship stress, and individual stress coping efficacy and overall stress).

Summary Relationship Measures

Quality of Marriage Index (QMI). The QMI (Norton, 1983) utilizes 6 Likert-type items to assess global relationship satisfaction (e.g., "We have a good relationship"). This measure has demonstrated high internal consistency (alpha = .97 for both women and men) and excellent convergent and discriminant validity (Heyman, Sayers, & Bellack, 1994). Internal consistency in the current study was also good (alpha = .95 for women, .86 for men). The QMI correlates very highly (r = .85 for women, .87 for men) with the most commonly used measure of marital functioning, the 32-item Dyadic Adjustment Scale (DAS; Spanier, 1976), and has been deemed equivalent to the DAS for many purposes (Heyman et al., 1994). In this study formulas for deriving DAS scores from QMI scores were applied to facilitate comparisons with the many studies that have used the DAS.

Autonomy and Relatedness Inventory (ARI). The ARI (Schaefer & Burnett, 1987) is a 48-item self-report inventory with twelve scales assessing perceived partner behavior along major dimensions of independence/dependence and love/hostility. Scales of interest in the current investigation included the Relatedness Scale, assessing the extent to which each partner believes his or her partner contributes to a sense of the respondent's togetherness, and the Autonomy Scale, assessing the degree to which each partner believes his or her partner contributes to a sense of the respondent's independence within the relationship. Rankin-Esquer, Burnett, Baucom, and Epstein (1997) reported that alpha coefficients for the Relatedness and Autonomy scales were good (Relatedness: .72 for females, .78 for males; Autonomy: .70 for females, .80 for males). Reliability coefficients in the current study were good (Relatedness: .89 for females, .88 for males; Autonomy: .85 for females, .74 for males). ARI scales have been demonstrated to have significant stability, as well as good predictive validity to measures of demoralization, across a 3-year period (Schaefer & Burnett, 1987).

Inclusion of Other in the Self Scale (IOS). The IOS (Aron, Aron, & Smollan, 1992) is a single-item pictorial instrument that measures interpersonal closeness. From a series of overlapping circles, respondents select the pair that best describes their relationship with an individual.
The IOS has demonstrated test-retest reliability, discriminant validity, predictive validity for whether romantic relationships are intact 3 months later, and convergent validity with other measures of closeness (Aron et al., 1992) and also with marital satisfaction (r = .62 with the DAS satisfaction subscale).

Acceptance of Partner Index (API). The API was devised for this study as an index of relational processes that were expected to change as a result of participation in the mindfulness intervention (i.e., perception of ability to accept difficult characteristics in the partner or relationship). This process was measured by two items (e.g., "Considering characteristics of your partner, or your relationship, which you find difficult to deal with, over the last 2 months [3 months at follow-up] how easy has it been for you to stop struggling and just allow such things to be?"), with responses indicated by marking 100-mm VAS scales. The alpha coefficients for the API were good (.81 for women, .87 for men).

Global Distress Scale (GDS) from the Marital Satisfaction Inventory Revised (MSI-R). The GDS (Snyder, 1997), a widely used scale of relationship distress in couples, contains 22 true/false items, with responses summarized into normalized T-scores in which higher scores reflect greater discontent with the relationship. Snyder (1997) has reported high internal consistency for the GDS (alpha = .91 for both women and men), and provided data supporting its criterion, discriminant, and construct validity. Internal reliability in the current study was good (alpha = .75 for women, .76 for men). Analyses have validated use of the GDS with nonclinical samples (Snyder, 1997).

Summary Individual Measures of Psychological Well-Being

Revised Life Orientation Test (LOT-R). Dispositional optimism versus pessimism was assessed by the LOT-R (Scheier, Carver, & Bridges, 1994), a 6-item Likert scale (plus 4 fillers) that yields a continuous distribution of scores. The authors report a Cronbach's alpha of .78 (in the current study, .81) and a 28-month test-retest reliability of .79 (Scheier et al., 1994).

Index of Core Spiritual Experiences (INSPIRIT). Spirituality was assessed by the 7-item INSPIRIT (Kass, Friedman, Leserman, Zuttermeister, & Benson, 1991), designed to assess core elements of spiritual experiences such as the perception of a highly internalized relationship between God and the person. The Cronbach's alpha for this scale was reported as .90 (.85 was found in the present sample), and higher scores have been demonstrated to predict enhanced physical and psychological health (Kass et al., 1991).

Individual Relaxation Index (IRI). The IRI was devised for this study to assess each individual's perception of his or her ability to relax. This was measured by two items (e.g., "Over the past 2 months [3 months at follow-up], how easy has it been for you to wind down and relax at the end of the day?") marked on 100-mm VAS scales. The alpha coefficients for the IRI were good (.81 for women, .76 for men).

Brief Symptom Inventory (BSI). The BSI was used to assess psychological distress because of its brevity, sensitivity to change, and well-documented reliability and validity (Derogatis & Melisaratos, 1983). Each of its 53 items is rated on a 5-point Likert-type scale. The General Severity Index, a weighted frequency score based on the sum of the ratings of all items, was used as a measure of current distress. This index has a reported alpha of .85 (Derogatis & Melisaratos, 1983); in the current study, the coefficient was .89.
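Since internal-consistency coefficients like the ones quoted above recur throughout these measure descriptions, a worked example may help. The following is a minimal sketch, not taken from the paper, of how Cronbach's alpha is computed from an item-response matrix; the data values are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 5 people to a 6-item Likert-type scale
# (e.g., QMI-style items scored 1 to 7); these numbers are made up.
responses = np.array([
    [6, 5, 6, 6, 5, 6],
    [4, 4, 5, 4, 4, 4],
    [7, 6, 7, 7, 6, 7],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 6, 5, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high, since items track together
```

Values near .9, like those reported for the QMI and GDS, indicate that the items in a scale move together closely across respondents.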
Daily Measures of Relationship Functioning and Individual Psychological Well-Being

Daily Diary. Participants completed a daily diary sheet as a global prospective measure of (a) relationship happiness, (b) relationship stress, (c) stress coping efficacy (perceptions that their coping efforts were successful; Aldwin & Revenson, 1987), and (d) overall stress. In training participants to complete the diaries, investigators clarified the meaning of the word "stress" in the diary items as referring to subjective feelings of distress related to the day's events, as distinguished from the number of stressful events. All four variables were indicated by marking 100-mm visual analogue scales (VAS), in which higher scores reflected greater amounts (e.g., for stress coping efficacy, the item read: "Please indicate how successful you were in coping with all types of stresses today by marking the line below", with anchors set as "not at all successful" and "extremely successful"). Similar VAS measures are extensively used in clinical settings to measure subjective phenomena, and have been shown to be valid, reliable, rapid, and sensitive in measuring such variables as global affect, pain, and fatigue (e.g., Cella & Perry, 1986). For couples participating in the mindfulness intervention, diaries also asked participants how many minutes were spent in completing the day's formal mindfulness homework assignment.

Treatment Credibility

Prerandomization expectations regarding the intervention were measured by a credibility questionnaire completed by all study participants based on an overview of the program provided during the screenings. The measure was adapted from Borkovec and Nau's (1972) format, which has been frequently used for this purpose (e.g., Gil et al., 1996). The questionnaire asked subjects to rate, on 10-point Likert-type scales, how confident they were in the program, how logical the program seemed, how successful they thought it would be, how helpful the leaders would be, and whether they would recommend the program to a friend.

Intervention Description

A treatment manual was developed to specify the methods and techniques to be used in the intervention. The overall structure of the intervention bore some resemblance to standard cognitive-behavioral couples programs (e.g., the Prevention and Relationship Enhancement Program; Markman et al., 1988) in that sessions included such common elements as skills instruction, didactic presentations, couples exercises, and group discussions, and relied strongly on homework assignments for skills development. However, the contents of each of these elements differed in important ways in the mindfulness program (e.g., continual development of a single generic skill, that of mindful attention, versus various domain-specific skills such as problem-solving strategies; didactic focus on stress reactivity versus sexual functioning). The intervention was directly modeled on Kabat-Zinn's mindfulness program in terms of format, teaching style, sequence of techniques, composition of topics, and homework assignments (for a complete description, see Kabat-Zinn, 1990, and Kabat-Zinn & Santorelli, 1999). Modifications were incorporated to meet needs specific to working with nondistressed couples to enhance their relationships. Interventions consisted of 8 weekly 2.5-hour evening meetings plus a single full-day (7-hour) Saturday retreat session. The average attendance rate at group sessions was 80% (range 61% to 100%).
Couples were additionally assigned daily homework assignments.

Sessions. Table 1 provides a brief summary of intervention sessions.

TABLE 1. Main Topics of Intervention Sessions

Session 1: Welcome and guidelines, loving-kindness meditation with partner focus, brief personal introductions, introduction to mindfulness, body-scan meditation, homework assignments (body scan, and mindfulness of a shared activity)

Session 2: Body-scan meditation, group discussion of practices and homework, introduction to sitting meditation with awareness of breath, homework assignments (body scan plus sitting meditation, and pleasant events calendar including shared activities)

Session 3: Sitting meditation, group discussion of practices and homework with didactic focus on pleasant experiences, individual yoga, homework assignments (alternating body scan with yoga plus meditation, and unpleasant events calendar including shared events)

Session 4: Sitting meditation, group discussion of practices and homework with didactic focus on stress and coping, dyadic eye-gazing exercise and discussion, homework assignments (alternating body scan with yoga plus meditation, and stressful communications calendar including communications with partner)

Session 5: Sitting meditation, taking stock of the program at its halfway point, group discussion of practices and homework with didactic focus on communication styles, dyadic communication exercise, homework assignments (alternating sitting meditation with yoga, attention to broader areas of life [e.g., work] that impact the relationship, and exploration of options for responding with mindfulness under challenging conditions)

Session 6: Partner yoga, sitting meditation, group discussion of practices and homework with didactic focus on broader areas of life (e.g., work) that impact relationships, homework assignments (alternating sitting meditation with yoga, and attention to obstacles and aids to mindfulness)

Full-Day Session: Multiple sitting meditations and walking meditations, individual and partner yoga, mindful movement and touch exercise, dyadic and group discussions

Session 7: Sitting meditation, group discussion of experiences during the full-day session, discussion of obstacles and aids to mindfulness, loving-kindness meditation, mindful touch exercise and discussion, homework assignments (self-directed practice)

Session 8: Partner yoga, sitting meditation, group discussion/review of program focusing on lessons learned, personal and relationship-related changes, and wrap-up

First, couples were presented with the rationale that mindfulness training allows them to gain access to important information about their mutual interactions and their thoughts, feelings, behaviors, and environment, thereby helping them to understand themselves, their relationship, the nature of any problems, and potential solutions. Participants were not encouraged to target any particular, specific set of behaviors for change as their primary goal. Rather, a nonstriving attitude was advocated as most helpful to enhancing their relationships and reducing stress. The program actively involved participants in learning and practicing a range of formal (body-scan meditation, yoga exercises, and sitting meditation) and informal (e.g., mindfulness during routine activities) meditation-based methods. As in standard mindfulness programs, couples also were presented with didactic material on topics such as the impact of stress on mental, physical, and relationship health.
They participated in structured exercises based on these topics and discussions of their experiences of practicing mindfulness.

Home practice. Practice of the formal mindfulness techniques was guided by audiotapes (except during the final week, when participants transitioned to self-guided practice), and required a special time for each partner of about 30 to 45 minutes per day, 6 days per week. Informal mindfulness techniques to practice during the conduct of everyday living were also assigned each week. The details of certain informal mindfulness exercises were recorded daily by each partner on specialized forms (e.g., shared pleasant moments, challenging communications).

Couple-focused adaptations. Principal adaptations to the standard Kabat-Zinn protocol that were specifically targeted at enhancing couples' relationships included: (a) greater emphasis on loving-kindness meditations, with a particular focus on one's partner; (b) incorporation of partner versions of yoga exercises, in which partners physically supported and facilitated one another in the performance of therapeutic, often pleasurable postures; (c) mindful touch exercises, with each partner paying close attention to the giving and receiving of a gentle back rub, followed by dyadic discussion of the implications of this for sensual intimacy (i.e., sensate focus; Spence, 1997); (d) a dyadic eye-gazing exercise (adapted from S. Levine & Levine, 1995), with partners acknowledging and welcoming the deep-down goodness in one another; (e) application of mindfulness to both emotion-focused and problem-focused approaches to relationship difficulties; and (f) tailoring of the context for practicing various mindfulness skills, both in-session and at home, to bring couples' relationships into focus (e.g., partners were encouraged to be more aware during shared pleasant activities, unpleasant activities, and stressful interactions, and to discuss and keep daily records about new understandings arising from such interactions). In addition, group discussion and didactic components provided opportunities to consider the impact of these exercises on relationship functioning.

Intervention Leaders' Training and Treatment Integrity

All intervention sessions were jointly led by a married couple composed of a clinical psychology doctoral student (J.W.C.) and a health educator (K.M.C.) who is a certified yoga instructor. Both intervention leaders had extensive experience practicing and teaching mindfulness, and had attended multiple seminars for health professionals directed by Kabat-Zinn, which provided specific training in the conduct of mindfulness interventions. Sessions from this study were audiotaped, and a random selection was checked for treatment integrity as described by Waltz, Addis, Koerner, and Jacobson (1993). Leaders' adherence to the specific elements of the intervention (e.g., employing partner yoga exercises, assigning mindfulness home practice) was assessed by trained undergraduate honors students (100% interrater agreement was demonstrated across three sessions). Treatment competence (e.g., rapport with group members, clear directive comments) was assessed by two licensed clinical psychologists acquainted with mindfulness interventions. Adherence raters judged that therapist behaviors adhered to protocols on 100% of rated items, and the mean competence rating was 4.93 out of a maximum of …

Results

Intervention outcomes were evaluated by two distinct sets of analyses.
Standard regression models were employed for summary measures, and multilevel models were applied to daily diary measures. In all outcome tests, the fundamental unit of analysis was the couple dyad.

Equivalency of Conditions

A series of regression and chi-square analyses determined that randomization procedures resulted in roughly equal groups at baseline, with no significant differences in means of dependent variables, demographic characteristics, or treatment credibility. Attrition from the two conditions was also equivalent, resulting in 22 couples in each condition. For couples in the treatment group, dropout was defined as those who requested to withdraw at any point during the intervention (7 of 29; further evaluations were not collected from these), or couples in whom either partner was not present in five or more sessions (none were dropped for this reason). For wait-list couples, dropout was defined as those who declined to complete further evaluations (6 of 28). Analyses of differences between study completers and those who dropped out produced a significant main effect for number of children on treatment completion, F(1, 55) = 4.35, p = .04, which did not interact with gender. Those who dropped out were likely to have more children (for women, M = 0.8 for completers vs. M = 1.5 for dropouts; for men, M = 0.9 for completers vs. M = 1.6 for dropouts). Chi-square analyses by gender also revealed a significant difference in men for history of individual psychological therapy, χ²(1, N = 57) = 5.21, p = .02, with dropouts more likely to have been in individual therapy (77% of dropouts vs. 41% of completers). Outcome analyses were based on data from study completers only. Prior to testing for treatment effects, another set of regression and chi-square analyses was performed to determine whether postrandomization attrition may have resulted in important group differences in pretreatment means of dependent variables, demographic characteristics, or treatment credibility. However, no significant differences were found. Pre-, post-, and follow-up means for all measures are displayed in Tables 2 and 3.

Treatment Effects on Summary Measures

Separate Treatment Condition × Time × Gender multivariate analyses of variance (MANOVAs) with repeated measures were used to conduct comparisons between the mindfulness and wait-list conditions on pre-, post-, and follow-up summary measures of (a) relationship functioning and (b) individual psychological well-being. The interdependency of male and female scores was handled by treating gender as a repeated within-subjects factor (Markman et al., 1988). When MANOVAs were significant, univariate analyses of variance were then employed to reveal the locus of effects. Also, to determine whether preintervention treatment credibility scores might need to be controlled, post-hoc multivariate regression tests were conducted within the treatment group only to discover whether credibility was predictive of improvements (pre-to-post residualized scores; Baucom, Sayers, & Sher, 1990). However, these treatment credibility tests were nonsignificant.

Relationship Outcomes

Results of the multivariate test indicated a significant Treatment × Time interaction, F(12, 29) = 2.11, p = .050. Neither treatment, F(6, 35) = 1.40, p = .242, nor time, F(12, 29) = 1.24, p = .303, was significant. Gender showed a significant effect, F(6, 35) = 7.88, p = .001, which did not interact with treatment, F(6, 35) = 0.62, p = .712, or time, F(12, 29) = 0.58, p = .838.
Because only the significant Treatment × Time interaction was highly pertinent to our hypotheses, and gender did not interact with treatment in any subsequent test, the following univariate reports focus exclusively on Treatment × Time interactions. Pre-to-post univariate tests revealed significantly superior scores in mindfulness couples on measures of relationship satisfaction (QMI; F[1, 42] = 12.11, p = .001), autonomy (ARI; F[1, 42] = 11.80, p = .001), relatedness (ARI; F[1, 42] = 16.62, p = .001), closeness (IOS; F[1, 42] = 5.48, p = .024), acceptance of partner (API; F[1, 42] = 6.25, p = .016), and relationship distress (GDS; F[1, 42] = 4.95, p = .031). A supplementary test showed the mindfulness treatment was also significantly superior to the wait-list treatment at posttest in terms of estimated DAS relationship satisfaction scores derived from QMI scores, F(1, 42) = 12.11, p = .001. Univariate analyses conducted to test for significant changes between posttest and 3-month follow-up were nonsignificant, indicating that posttreatment effects were generally maintained at follow-up.

Individual Outcomes

The MANOVA of effects of the intervention on individual summary outcomes also demonstrated a significant Treatment × Time interaction, F(8, 33) = 3.04, p = .011, suggesting that any significant main effects needed to be interpreted with that in mind. Treatment showed a trend toward significance, F(4, 37) = 2.25, p = .083, and time was significant, F(8, 33) = 3.01, p = .012. Gender was not significant as a main effect, F(4, 37) = 0.56, p = .692, nor when interacting with treatment, F(4, 37) = 0.58, p = .555, or time, F(8, 33) = 0.74, p = .656. Because the effects of time and treatment were of no interest given the significant multivariate interaction, and gender did not interact with treatment in subsequent tests, the reports on univariate tests below focus exclusively on Treatment × Time effects. Univariate pre-to-post tests showed significantly superior outcomes in the mindfulness condition for optimism (LOT-R; F[1, 42] = 5.82, p = .020), spirituality (INSPIRIT; F[1, 42] = 10.12, p = .003), individual relaxation (IRI; F[1, 42] = 5.41, p = .025), and psychological distress (BSI; F[1, 42] = 20.46, p = .001). Univariate analyses were conducted to test for significant changes between posttest and 3-month follow-up. Again, all were nonsignificant, suggesting that posttreatment effects were generally maintained at follow-up.

Daily Diary Analyses

Diary Completion Rates

Daily diary measures were completed for 2 weeks before the intervention (baseline period), and again during the final 3 weeks of the intervention (treatment period). The diary completion rate was 97% (2,985 of 3,080 potentially reportable days across 88 participants, range 69% to 100%). Analyses revealed that individual completion rates were significantly related to participants' relationship status. Married partners were somewhat more likely to complete their diaries, F(1, 86) = 8.47, p = .005, although the mean difference was small (married M = 98% vs. cohabitating M = 92%).

Treatment Effects on Diary Variables

Model. Multilevel models integrate data from multiple levels of sampling, such as this study's two levels (a within-couples level, including variables such as daily recordings of relationship happiness, and a between-couples level, including variables such as treatment condition).
To examine treatment effects on diary variables, multilevel regression was used to test whether the average levels of these variables were significantly different in the two groups as they progressed from the baseline to the treatment period (Affleck, Zautra, Tennen, & Armeli, 1999; G. Affleck, personal communication, October 25, 2002). As recommended by Barnett, Raudenbush, Brennan, Pleck, and Marshall (1995), the interdependency of male and female scores in these models was handled by treating gender as a repeated within-couples factor. That is, the data involved a pair of parallel scores (e.g., relationship happiness) for each couple at each of the two diary periods. Thus, a couple with complete data had 4 observations: 1 for each partner for each diary period. The within-couple predictors in these models were therefore diary period (baseline vs. treatment periods) and gender (and their interaction if significant). The sole between-couples predictor was treatment condition, along with its interactions with within-couples variables. Couples' intercepts were allowed to vary freely (i.e., random effects components; Singer, 1999). Tests for autocorrelation were performed, but all results indicated that autocorrelation was […] improvements in the mindfulness versus the control group for both relationship variables (relationship happiness, relationship stress) and individual variables (stress coping efficacy, overall stress).

Process Relationships of Impact of Daily Stress

Model. To examine treatment effects on the day-to-day impact of relationship stress and overall stress on other diary variables, a series of multilevel analyses was planned to test for differential changes across days (baseline through treatment) in same-day associations between these stress variables and (a) daily relationship happiness, and also (b) daily stress coping efficacy. Both intra-individual variables (own relationship stress, own overall stress) and intra-couple variables (partner's relationship stress, partner's overall stress) were a potential focus of analyses. To control for Type I error and also potential redundancy between the various stress predictors, as recommended by Bryk and Raudenbush (1992), an omnibus multivariate test was first performed for each of the two outcomes. In these models, daily observations were nested within couples, with gender treated as a repeated within-couples factor. Thus, the analyses integrated female and male pairs of parallel scores for each couple on each day of diary collection, such that a couple with complete data had 70 observations: 1 for each partner for each diary day across 2 preintervention weeks (14 days) and the final 3 weeks of the treatment period (21 days). Within-couple predictors included time (day-to-day linear effect; no quadratic effect was found), gender, stress (relationship or overall, for self or partner), and all potential two-way interactions. Treatment condition was the principal between-couples predictor; also, the individual mean levels of relevant stress variables were included as control variables. Treatment condition was additionally combined in all potential interaction terms with within-couple predictors. Final models were gradually derived by dropping nonsignificant interaction terms (those indicating a trend toward significance, p < .10, were retained). To control for potentially spurious within-person associations, all stress predictors were person-centered (Affleck et al., 1999; Barnett et al., 1995).
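The authors do not publish their model code, and the software they used is not named in this transcription. Purely as an illustration of the general shape of such a model (random couple intercepts, a within-couples period contrast, a between-couples condition indicator, and person-centered predictors), here is a sketch in Python using statsmodels; all column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format diary data, one row per partner per diary period.
# Hypothetical columns: couple_id; gender coded -0.5/0.5; period coded
# 0 = baseline, 1 = treatment; condition coded 0 = wait-list, 1 = mindfulness;
# happiness and stress as 0-100 VAS scores.
diary = pd.read_csv("diary_periods.csv")

# Person-centering, as described for the stress predictors: subtract each
# partner's own mean so the predictor carries only within-person variation.
diary["stress_pc"] = diary.groupby(["couple_id", "gender"])["stress"] \
                          .transform(lambda s: s - s.mean())

# Random-intercept model; the period:condition term carries the hypothesis
# that baseline-to-treatment change differs between the two conditions.
model = smf.mixedlm("happiness ~ period * condition + gender",
                    data=diary, groups=diary["couple_id"])
print(model.fit().summary())
```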
Results. While controlling for other predictors, the omnibus test for stress coping efficacy showed independent relationships to exist with own relationship stress (b = …, t = 6.58, p < .0001), own overall stress (b = …, t = 14.23, p < .0001), and partner's relationship stress (b = …, t = 3.00, p = .0027). (Because of our focal interest in the significant interactions between stress variables, in the interest of brevity only these results are reported. Please contact the first author for a comprehensive report of these outcomes.) Post-hoc univariate tests on these relationships were then performed using the same additional predictors (time, condition, gender, mean of stress variable), along with significant interactions (as indicated above, nonsignificant interaction terms were dropped step by step), with intercepts and stress slopes treated as random effects. Two models with significant Time × Treatment × Stress interactions revealed that in mindfulness couples, stress coping efficacy showed a progressively decreasing association across time with levels of (a) own relationship stress (b = …, t = 2.02, p = .0432), and (b) own overall stress (b = …, t = 2.91, p = .0036). These findings indicate a process by which daily stress coping efficacy became increasingly resilient to, or less reactive to, the impact of daily stress factors.

Process Relationships Between Daily Mindfulness Practice and Daily Outcomes

Practice rates. Within the mindfulness condition, treatment-period diaries included a report of the number of minutes participants had spent doing their formal mindfulness homework assignments. Out of 924 potentially reportable treatment-period days, mindfulness participants completed diaries on 868 days (94% overall, range 62% to 100%), and on 631 of these days participants reported spending some time practicing their mindfulness skills (73% overall, range 10% to 100%). During this period, participants reported practicing their mindfulness homework for an average of 32 minutes per day (range 10 to 51). Mean practice rates were significantly related to duration of relationship, F(1, 42) = 5.59, p = .023, such that the longer the relationship, the more partners practiced.

Model. Analyses examined whether minutes spent in formal mindfulness exercises were predictive of same-day daily outcome variables (relationship satisfaction, relationship stress, stress coping efficacy, and overall stress). Also, to clarify whether increases in mindfulness practice preceded and may have had a causative influence on day-to-day fluctuations in these variables, tests were conducted for lags of 1, 2, and 3 days' practice. Daily observations were again nested within couples in these models, with gender treated as a repeated within-couples factor, such that a couple with complete data had 70 observations (1 for each partner for each of the 35 days of diary recordings). Mindfulness practice and gender were the within-couple predictors in these models, with mean levels of practice as between-couples control variables. In lagged models, the lagged day's level of the dependent variable was also included as a within-couples control variable. Intercepts and mindfulness practice slopes were treated as random effects. Practice rates were person-centered to control for potentially spurious within-person associations.
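To make the lag structure concrete, the sketch below shows one way to build person-centered and lagged practice predictors of the kind this Model paragraph describes. It is illustrative only; the column names are hypothetical, and the authors' actual data preparation may have differed in detail.

```python
import pandas as pd

# One row per partner per diary day. Hypothetical columns: couple_id,
# gender, day (1..35), practice_min (minutes of formal practice), happiness.
diary = pd.read_csv("diary_days.csv").sort_values(["couple_id", "gender", "day"])
person = ["couple_id", "gender"]

# Person-center practice minutes (within-person deviation from own mean).
diary["practice_pc"] = (diary.groupby(person)["practice_min"]
                             .transform(lambda s: s - s.mean()))

# Lagged predictors: practice 1, 2, and 3 days earlier, shifted within person
# so one partner's final days never leak into the next partner's first days.
for lag in (1, 2, 3):
    diary[f"practice_lag{lag}"] = diary.groupby(person)["practice_pc"].shift(lag)

# Lagged control variable: the prior day's level of the outcome itself.
diary["happiness_lag1"] = diary.groupby(person)["happiness"].shift(1)
```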
Results. All same-day tests indicated significant associations with mindfulness practice in the expected directions; that is, greater practice was associated with increased relationship happiness (b = …, t = 4.23, p < .0001), decreased relationship stress (b = …, t = 5.64, p < .0001), increased stress coping efficacy (b = …, t = 4.67, p < .0001), and decreased overall stress (b = …, t = 4.70, p < .0001). Lagged results showed that increased mindfulness practice was also significantly predictive of improved levels on several consecutive days of relationship happiness (for following day, b = …, t = 2.84, p = .0045; for 2nd day, b = …, t = 2.85, p = .0043), relationship stress (for following day, b = …, t = 3.55, p = .0004; for 2nd day, b = …, t = 2.71, p = .0058), and stress coping efficacy (for following day, b = …, t = 1.94, p = .0500; for 2nd day, b = …, t = 3.05, p = .0023; for 3rd day, b = …, t = 3.04, p = .0024). For overall stress there was a marginally significant improvement on the third day (b = …, t = 1.74, p = .0823).

Mean Mindfulness Practice Rates' Relationship to Summary Outcomes

Post-hoc regression tests were performed on data from the treatment group only to determine whether mean mindfulness practice rates (derived from diaries) were predictive of summary outcomes that had evidenced significant posttest between-group differences. Averaged residualized couple scores were the dependent variables in these tests. Results indicated mean mindfulness practice rates predicted improvements for the majority of outcomes, including autonomy (b = 0.051, p = .032), acceptance of partner (b = 0.656, p = .010), spirituality (b = 0.018, p = .008), individual relaxation (b = 0.749, p = .035), and psychological distress (b = 0.042, p = .002), with a trend toward significance for optimism (b = 0.075, p = .066).

Discussion

The results of this study provide empirical support for a mindfulness-based relationship enhancement program designed for relatively happy, nondistressed couples. Mindfulness was efficacious in enriching current relationship functioning and improving individual psychological well-being across a wide range of measures. Because the probability of encountering ceiling effects is high when intervening with relatively happy couples (Christensen & Heavey, 1999), these findings are very encouraging. The findings also lend support to those who have advocated couples programs designed to boost individual partners' stress coping skills (Bodenmann et al., 2001). Furthermore, we found empirical support for the rationale of adopting a mindful approach to enhancing stress coping skills and relational functioning, in that process-of-change measures showed improvements in individual relaxation, acceptance of partner, confidence in ability to cope, and overall functioning across a range of domains. The mean posttest effect size across all relationship measures in this study was 0.54. Because of the absence of studies aimed at strengthening relationships in relatively well-functioning couples, it is difficult to compare the results of this study with others. Nonetheless, this effect size compares favorably to Giblin, Sprenkle, and Sheehan's (1985) finding of an average 0.35 effect size for self-report instruments in prevention studies, and Hahlweg and Markman's (1988) meta-analysis finding of 0.52 for prevention studies.
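The transcription does not preserve which effect-size formula these summaries use; a standard between-groups Cohen's d at posttest, one common convention for comparisons like these, would be

```latex
% Assumed convention, not stated in the paper: posttest group-mean difference
% divided by the pooled standard deviation of the two groups.
d = \frac{\bar{X}_{\text{mindfulness}} - \bar{X}_{\text{wait-list}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\, s_1^2 + (n_2 - 1)\, s_2^2}{n_1 + n_2 - 2}}
```

On that convention, the reported 0.54 would mean posttest group means roughly half a pooled standard deviation apart.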
Moreover, since most prevention studies have not actually demonstrated enhancing effects, but rather have helped to stave off deterioration of relationship functioning, the mean effect size of 0.54 for relationship quality improvements in the present study is noteworthy. Regarding individual well-being outcomes, the average effect size in this study was 0.59, which is similar to Speca et al.'s (2000) average effect size of 0.54 for mindfulness with cancer patients. A novel feature of the present investigation relative to previous couples intervention studies was its focus on participants' adherence to intervention skills. Results were encouraging, showing that most couples applied themselves well to the daily practice of mindfulness exercises (an average of 32 minutes per day), and a clear dose/response relationship was observed. Future studies can seek to determine minimum amounts of effective mindfulness practice, and also focus on bolstering adherence in those who practice less. Beyond demonstrating the effects of the mindfulness intervention, this study's application of multilevel modeling makes a singular contribution to the wider body of couples research. The present multilevel results showed mindfulness brought about significant improvements in day-to-day relationship happiness, relationship stress, stress coping efficacy, and overall stress. Importantly, these findings were obtained by first calculating estimates for each couple in the sample, and then aggregating them to derive reliable results for the average couple, thus avoiding the problem of overlooking the impact of couple differences, as in standard regression approaches. Moreover, the advantages offered by this statistical approach were particularly well suited for the analysis of real-time processes in participants. We found that over the course of the intervention, couples' confidence in their ability to cope with stress became increasingly resilient to the effects of day-to-day stress. Furthermore, the tangible day-to-day influence of mindfulness was highlighted by the finding that greater mindfulness practice on a given day was associated, on the same day and for several consecutive days, with improved levels of relationship happiness, relationship stress, stress coping efficacy, and overall stress. Future studies could profit from using daily data collection to examine hypothesized therapeutic processes (e.g., relaxation, acceptance), as well as how partners' attitudes and behaviors interactively affect one another (e.g., would same-day or next-day relationship happiness become more resilient to the negative effects of arguments?). Several limitations of the present study should be noted. First, although the 3-month follow-up results offer encouragement, longer-term follow-up would be needed to examine the durability of enhancement changes. Limits also come from the fact that, like most research with couples, this study's sample was almost entirely White, well-educated, middle-class, and entirely heterosexual. Caution is in order, therefore, in generalizing these results to diverse populations. Additional limitations to our conclusions come from lack of control for nonspecific factors (e.g., attention from an intervention provider), the utilization of only one team of intervention leaders, reliance on self-report data, and diary collection methods (e.g., diaries are more reliable when date stamps can be confirmed; Gil, Carson, Sedway, & Porter, 2000).
These issues can only be addressed by future attention-placebo or alternative-treatment investigations which employ more diverse samples, multiple treatment teams, additional measures (observational, psychophysiological, and even physiological), and improved diary collection procedures. Future studies also can test more refined hypotheses of how mindfulness operates, analyze predictors of treatment outcome, and determine whether modifications might be called for to suit the needs of particular types of couples. For example, considering that attrition in this study was related to number of children, strategies to accommodate children's needs could make the program more accessible to parents. In conclusion, future studies might target couples dealing with specific stressors. Mindfulness could potentially be combined with parenting skills training (Kabat-Zinn & Kabat-Zinn, 1997), or be revised for couples undergoing infertility counseling (Stanton & Burns, 1999) or those adapting to a major illness in one of the partners (Halford, Scott, & Smyth, 2000). Finally, given that the methods of Mindfulness-Based Relationship Enhancement are largely derived from the Buddhist and yoga meditation traditions (Kabat-Zinn, 1982), further efforts are needed to transpose the wealth of information these Asian psychologies contain about methods for transforming ordinary living into a richer, more mature happiness (M. Levine, 2000).

Appendix

In a recent methodological paper, Affleck et al. (1999) suggested that researchers reporting multilevel results make available descriptions of the linear equations that were tested, one for each level of analysis. Using the test of treatment effects on relationship happiness as an example, the following paragraphs describe the two levels employed in these models. Table 4 presents the results for the model's fixed effects after nonsignificant interaction terms were dropped. To obtain linear equations for the other multilevel results reported in this article, please write to the first author.

Level 1 within-couples model. Variation within couples arises due to temporal variation within each partner, gender differences, and Gender × Time interactions. The Level 1 model was formulated as

Y_it = β_0i + β_1i(diary period)_it + β_2i(gender)_it + r_it,   (1)

where Y_it is the observed outcome (relationship happiness) t for couple i, with t = 1, 2 outcomes per couple and i = 1, ..., 44 couples; β_0i is the average daily relationship happiness for couple i across the study's two diary periods; (diary period)_it is a linear time contrast coded 0 for the baseline diary period and 1 for the treatment diary period, and β_1i is therefore the linear rate of change in relationship happiness across the two partners in couple i; (gender) is effect-coded (.5 and −.5 for women and men), so that β_2i is a couple's mean relationship happiness difference between female and male partners averaged across the two diary periods; and the final term, r_it, is the residual component of relationship happiness associated with couple i during diary period t, and is …
FAO charts a path for the agricultural sector to contribute to adaptation, mitigation and the Sustainable Development Goals

Now that the Paris Climate Accord has been agreed, national strategies to achieve pledged carbon mitigation and adaptation plans take center stage. FAO has developed supplementary guidelines to the UNFCCC NAP Guidelines, "Addressing Agriculture, Forestry and Fisheries in National Adaptation Plans" (the NAP-Ag Guidelines), aiming to support developing countries in making sure agriculture is both included in national adaptation plans and made more adaptive and resilient. They serve to help vulnerable countries access funding, in particular from the Green Climate Fund Readiness Programme, while at the same time promoting broad participation in the decision-making process and building needed technical capacities. Appropriate choices depend on specific context and accommodation of the views and needs of multiple stakeholders. That's no simple task. For instance, Lake Faguibine in northern Mali has been mostly dry since the mid-1970s, offering a test case for ecological, political and social changes driven by climate change. While larger-scale stakeholders clamour for refilling the lake, and with it infrastructure-based adaptation, local community members tend to prefer ecosystem-based approaches such as the sustainable management of forests, which have increased in importance as the lake receded. Such cases are common and underscore the importance of weighing multiple factors in preparing NAPs that seek both to spur development and to bolster resilience and food security. "Medium to long-term adaptation planning is crucial to build climate resilience and food security for future generations," said Julia Wolf, FAO Natural Resources Officer and co-author of the guidelines. "The agriculture sectors, often the economic backbone of developing countries, need to be a key driver and stakeholder. The guidelines are set out to address the key issues, entry points and steps to take."

Agriculture's special role

Agriculture, including crops and livestock, forestry and fisheries, holds a special place in the effort to keep global temperatures from rising more than two degrees Celsius above their pre-industrial levels. The sector is a major source of greenhouse gas emissions, making it both a prime target for mitigation efforts and a source of innovative solutions. At the same time, food production will need to be 60 percent higher in 2050 than it was in 2006 to meet the demand of a larger population. Indeed, four of the eight key climate change risks identified by the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) are linked to food security. Almost 90 percent of developing countries have included their agriculture sectors as key to their climate pledges. NAPs are considered a core vehicle to deliver on national adaptation priorities and to achieve the adaptation actions mentioned in countries' Nationally Determined Contributions while addressing the Sustainable Development Goals (SDGs). FAO's new guidelines, designed for national planners, agriculture, forestry and fisheries authorities and experts, as well as United Nations and bilateral donors, are geared to address the specific challenges that adaptation and mitigation efforts pose in the agricultural arena, steering change at a bearable pace for those who depend on related activities for incomes, livelihoods and food security.
For instance, sowing fast-maturing crop cultivars can work wonders amid drought conditions, but only if seeds are widely available. Likewise, while cassava is an important crop in tropical environments, national programmes that seek to expand its potential must consider that higher temperatures could affect the vectors of the viral diseases that attack the tuber.

FAO has considerable experience integrating local knowledge with scientific expertise. An FAO project in the Lao People’s Democratic Republic aims at helping farmers and fishers using the country’s wetlands – where climate change is projected to have a major effect on the quantity and quality of the water supply – to coordinate their actions in pursuit of more sustainable land-management practices.

Key points

Devising effective national plans begins with identifying responsible entities – often a special task force within a ministry that has a mandate to engage all relevant parties – as well as establishing a data-collection and storage process and defining indicators for documenting progress. Cost-benefit analyses are clearly called for, alongside recognition of the co-benefits and collateral effects that any adaptation option may present. Assessing the impact on the food security and nutrition of vulnerable populations is also a critical component.

Fostering sustainable agriculture practices is key to creating a low-carbon world and resilient economic growth, both essential components of a development agenda geared to achieving international agreements on hunger and poverty eradication and climate change.

The new guidelines are informed by the FAO-UNDP programme, Integrating Agriculture into National Adaptation Plans, and were developed thanks to funding from the German Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMUB) and from the Kingdoms of Belgium, the Netherlands and Sweden and from Switzerland.
http://agroinform.asia/en/2017/05/12/new-guidelines-to-help-member-states-achieve-their-climate-pledges/
Those who belong to a particular religion may buy/not buy and use/not use certain goods and services. Members of a particular religion constitute what we call a religious subculture. Religious beliefs and rituals may dictate the use of certain items and may discourage the consumption of others.

How does subculture influence consumer behaviour?

Such values would influence general and specific consumption patterns and buying behavior. … The members of a subculture possess such values and beliefs, as well as customs and traditions, that set them apart from people belonging to other subcultures.

How does religion influence consumption?

Religious values delineate which consumption behaviors are allowed and which are forbidden. For example, kosher laws in Judaism prohibit eating certain foods, and sharia laws in Islam prohibit certain haram (forbidden) products (e.g., pork, alcohol, interest-earning banking products).

How do culture and subculture impact consumer decision making?

Culture influences what feels right, normal and desirable. Retailers that ask consumers to swim against the social current are making it harder for the consumer to choose their services. It’s usually better practice to make it possible and easy for consumers to choose your product within their cultural comfort zone.

What is a consumption subculture?

Subcultures of consumption are distinct, homogeneous groups of people united by a common commitment to a particular set of consumption items or activities. … Subcultures are intriguing social units for market research and segmentation (Zaltman 1965) due to their relative homogeneity of norms, values, and behaviors.

What are two different consumer subcultures?

The poor, the affluent, and the white-collar middle class are examples of material subcultures. Social institutions: those who participate in a social institution may form a subculture.

Which is the most commonly described subculture?

Religious groups are the most commonly described subcultures.

By 2030, what percentage of the U.S. population will be made up of non-European ethnic groups?

______ subcultures are defined as those whose members’ unique shared behaviors are based on a common racial, language, or national background.

How does religion influence behavior?

Our experiences, environment and even genetics form our beliefs and attitudes. In turn, these beliefs influence our behaviour and determine our actions. Beliefs that are widely accepted become part of our culture and, in many ways, shape the society we live in.

How do beliefs influence consumer behavior?

Belief plays a vital role for consumers because it can be either positive or negative towards an object. For example, some may say tea is good and relieves tension, while others may say too much tea is not good for health. Human beliefs are not accurate and can change according to situations.

How does religion affect advertising?

Religion plays a significant role in the way consumers perceive the advertising of controversial products. … Briefly, the more religious consumers were largely more offended by the advertisement of the 17 products studied than nonreligious consumers.

What is the relationship between culture and subculture?

A subculture is a self-organized tradition of shared interests, lifestyles, beliefs, customs, norms, style or tastes. A culture is a shared social tradition that may include language, social norms, beliefs, art, literature, music, traditions, pastimes, values, knowledge, recreation, mythology, ritual and religion.
How does your culture influence your decisions?

The Influence of Culture on Health Care Decisions: culture may also affect the decision-making process. Cultural beliefs can affect how a patient will seek care and from whom, how he or she will manage self-care, how he will make health choices, and how she might respond to a specific therapy.

What is an example of a subculture?

Subcultures are part of society while keeping their specific characteristics intact. Examples of subcultures include hippies, goths, bikers, and skinheads. The concept of subcultures was developed in sociology and cultural studies. Subcultures differ from countercultures.

Is TikTok a subculture?

Ask one of the Gen Z’ers and they’ll tell you that TikTok is a completely new subculture. TikTok is one of the fastest growing social media platforms in the world and presents an alternative version of online sharing. It allows users to create short videos with music, filters, and some other features.

What are the types of subculture?

H
- Hacker, see Hacker (free and open source software) and Hacker (computer security)
- Hardline (subculture)
- Hip hop culture, see also B-boy, Graffiti artists
- Hippie/Hippy
- Hipster, see Hipster (1940s subculture) and Hipster (contemporary subculture)
- Hardcore

Why are subcultures important to society?

Subculture can be important in mental health care because subcultures sometimes develop their own communication styles and social norms. … Certain behavior or values may be mistakenly pathologized by people or groups outside of that subculture. Also, certain subcultures may face discrimination from the majority group.
https://lambswar.com/talk-about-god/how-do-religious-subcultures-affect-consumption-decisions.html
Capitalism is an economic system based on private ownership of the means of production and their operation for profit. Characteristics central to capitalism include private property, capital accumulation, wage labor, voluntary exchange, a price system and competitive markets. In a capitalist market economy, decision-making and investment is determined by every owner of wealth, property or production ability in financial and capital markets, whereas prices and the distribution of goods are mainly determined by competition in goods and services markets.

Economists, sociologists and historians have adopted different perspectives in their analyses of capitalism and have recognized various forms of it in practice. These include laissez-faire or free market capitalism, welfare capitalism, and state capitalism. Different forms of capitalism feature varying degrees of free markets, public ownership, obstacles to free competition and state-sanctioned social policies. The degree of competition in markets, the role of intervention and regulation, and the scope of state ownership vary across different models of capitalism; the extent to which different markets are free, as well as the rules defining private property, are matters of politics and of policy. Most existing capitalist economies are mixed economies, which combine elements of free markets with state intervention, and in some cases, with economic planning.

Market economies have existed under many forms of government, in many different times, places and cultures. Modern capitalist societies—marked by a universalization of money-based social relations, a consistently large and system-wide class of workers who must work for wages and a capitalist class which owns the means of production—developed in Western Europe in a process that led to the Industrial Revolution. Capitalist systems with varying degrees of direct government intervention have since become dominant in the Western world and continue to spread. Over time, capitalist countries have experienced consistent economic growth and an increase in the standard of living.

Critics of capitalism argue that it establishes power in the hands of a minority capitalist class that exists through the exploitation of the majority working class and their labor; prioritizes profit over social good, natural resources and the environment; and is an engine of inequality, corruption and economic instabilities. Supporters argue that it provides better products and innovation through competition, creates strong economic growth and yields productivity and prosperity that greatly benefits society, as well as being the most efficient system known for allocation of resources.

This article is derived from the English Wikipedia article "Capitalism" as of 23 Jul 2018, which is released under the Creative Commons Attribution-Share-Alike License 3.0.
https://www.freedomcircle.com/pedia/capitalism
TECHNICAL FIELD

The disclosure relates generally to weighing vehicles in motion, and more particularly, to an improved solution for accurately weighing a vehicle while it is in motion.

BACKGROUND ART

To date, weigh-in-motion (WIM) approaches attempt to weigh a vehicle while it is in motion by considering the vertical forces generated by the vehicle. In order for such approaches to be reliable, various attributes of the vehicle design need to be known ahead of time, such as the precise loading of each wheel, the center of gravity, and/or the like. In practice, such knowledge cannot be accurately determined and utilized ahead of time, particularly in real-time applications. More importantly, adding a significant load to a vehicle can dramatically change these attributes. For current WIM systems to be usable, various restrictions on the installation and use of the system are applied. These restrictions include: requiring absolutely smooth and level pavement before and after the WIM system; requiring no turning, braking, or acceleration by the vehicle; limiting speeds to a specific target range; etc. Even with such restrictions, the accuracy of current WIM systems fails to meet reasonable requirements in many operating conditions. For example, the accuracy of piezoelectric load (pressure) sensors is ±10%, of bending plates ±8%, and of single load cells ±6%. For a 60,000 pound vehicle, these errors can range from 3,600 pounds up to 6,000 pounds - equivalent to the weight of a large sport utility vehicle.

One approach seeks to account for oscillations that occur as a vehicle traverses a weighing system in order to provide a more accurate weight measurement. In this approach, oscillations in a single dimension are accounted for, but accurate measurement continues to require that the vehicles travel at low constant speeds with no turning or other factors.

Even across a relatively small subsection of vehicles, numerous parameters that can affect the accuracy of measuring the weight of the vehicle vary substantially. For example, the table below illustrates the variation in several characteristics for vehicles weighing between roughly one and three tons.

Wheelbase: 2,347-4,000 mm
Track width: 1,416-2,000 mm
Center of gravity (Z): 540-1,000 mm
Center of gravity (X): 1,063-1,478 mm
Tire width: 185-315 mm
Front-to-back weight ratio: 53/47 to 66/34
Front wheel weight range: 482-1,784 pounds
Rear wheel weight range: 433-1,288 pounds

When considering all types of commercial vehicles, which can range in size from a panel truck to a double-length tractor trailer, the variability in these characteristics becomes immense.

German Patent Publication Number DE 102 36 268 (OPTIZ RIGOBERT) discloses a weighing and traffic sensor for the static and dynamic weighing of motor vehicles or their wheel and axle loads. The sensor has a base plate and a cover with a measurement arrangement inserted between them. The measurement arrangement comprises a module support with arms for absorbing shear forces. Sensors, in the form of strain gauges, are mounted on the arms for vertical and horizontal force measurement. European Patent Publication Number EP 1 793 211 A2 (SCHENCK PROCESS GMBH) discusses a method for determining forces exerted on a rail. The method involves providing sensor levels extending horizontally or vertically at a defined distance below a rail base.
Strain gauges are arranged for the simultaneous determination of forces and moments. Bridge circuits are formed from groups of the strain gauges to produce output signals that are applied to an evaluation device. US Patent Publication Number US 3 871 491 A (Yamanaka et al.) and German Patent Publication Number DE 103 00 087 (Inwatec GmbH) describe devices for measuring vehicle loads.

SUMMARY OF THE INVENTION

Aspects of the invention as defined by the claims provide a solution for evaluating an object, which accounts for various motion-related dynamic forces. In an embodiment, the object is a vehicle and the evaluation includes determining a set of static weights corresponding to the vehicle as it moves through a sensing element. The sensing element can include a load plate with vertical force sensing devices and horizontal force sensing devices located below the load plate. Analysis of measurement data acquired by the force sensing devices can enable calculation of the set of static weights corresponding to the vehicle.

A first aspect of the invention provides a system comprising: a sensing element including: a rectangular load plate; a plurality of vertical force sensing devices, wherein a vertical force sensing device is located below the rectangular load plate adjacent to each of the four corners of the rectangular load plate; and a set of horizontal force sensing devices located below the rectangular load plate; and a computer system configured to perform a method of evaluating an object, the method including: obtaining load measurement data from the plurality of vertical force sensing devices and the set of horizontal force sensing devices, wherein the load measurement data corresponds to a load applied to the rectangular load plate; processing the load measurement data to identify a horizontal component of the load and a vertical component of the load; and evaluating the object based on the horizontal and vertical components of the load.

A second aspect of the invention provides a system comprising: at least one pair of sensing elements located adjacent to each other, each sensing element including: a rectangular load plate; a plurality of vertical force sensing devices, wherein a vertical force sensing device is located below the rectangular load plate adjacent to each of the four corners of the rectangular load plate; and a set of horizontal force sensing devices located below the rectangular load plate; and a computer system configured to perform a method of weighing a vehicle traveling over the at least one pair of sensing elements, wherein all wheels on an axle of the vehicle concurrently travel over the rectangular load plates of the at least one pair of sensing elements, the method including: obtaining load measurement data from the plurality of vertical force sensing devices and the set of horizontal force sensing devices for each axle of the vehicle while the vehicle travels over the at least one pair of sensing elements; processing the load measurement data to identify a horizontal component of a load resulting from the passage of each wheel of the vehicle and a vertical component of the load; and calculating a set of static weights corresponding to the vehicle based on the horizontal and vertical components of the load.
A third aspect of the invention provides a method of weighing a vehicle in motion, the method comprising: obtaining load measurement data from a plurality of vertical force sensing devices and a set of horizontal force sensing devices for each axle of the vehicle while the vehicle travels over a set of load plates physically connected to the plurality of vertical force sensing devices and the set of horizontal force sensing devices; processing the load measurement data to identify a horizontal component of a load resulting from the passage of each wheel of the vehicle and a vertical component of the load; and calculating a static weight for at least one of: an axle of the vehicle or the vehicle based on the horizontal and vertical components of the load.

Other aspects of the invention provide methods, systems, program products, and methods of using and generating each, which include and/or implement some or all of the actions described herein. The illustrative aspects of the invention are designed to solve one or more of the problems herein described and/or one or more other problems not discussed.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates various parameters and forces relevant to the determination of wheel weight and vehicle weight in a dynamic system according to an embodiment.

FIG. 2 shows an illustrative environment for weighing a vehicle in motion according to an embodiment.

FIGS. 3A and 3B show top views of illustrative WIM environments according to embodiments.

FIG. 4 shows an illustrative WIM environment for weighing a rail vehicle according to an embodiment.

FIGS. 5A and 5B show illustrative designs for a vertical load sensor and a horizontal load sensor, respectively, according to an embodiment.

FIG. 6 shows various illustrative measurements of a wheel traveling over a sensing component according to an embodiment.

FIG. 7 shows an illustrative process for weighing a vehicle in motion according to an embodiment.

These and other features of the disclosure will be more readily understood from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying drawings that depict various aspects of the invention. It is noted that the drawings may not be to scale. The drawings are intended to depict only typical aspects of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numbering represents like elements between the drawings.

DETAILED DESCRIPTION OF THE INVENTION

As indicated above, aspects of the invention provide a solution for evaluating an object, which accounts for various motion-related dynamic forces. In an embodiment, the object is a vehicle and the evaluation includes determining a set of static weights corresponding to the vehicle as it moves through a sensing element. The sensing element can include a load plate with vertical force sensing devices and horizontal force sensing devices located below the load plate. Analysis of measurement data acquired by the force sensing devices can enable calculation of the set of static weights corresponding to the vehicle. As used herein, unless otherwise noted, the term "set" means one or more (i.e., at least one) and the phrase "any solution" means any now known or later developed solution. As described herein, to date, weigh-in-motion (WIM) approaches are limited in the type of vehicle and/or the movements of a vehicle that are allowed during the weighing process in order to provide for accurate measurements of a vehicle's weight.
The inventors propose to provide a WIM solution, which detects and accounts for various factors, such as dynamic forces acting on a vehicle, that influence the apparent static weights corresponding to the vehicle during its passage over a set of sensors and cause significant errors in prior art approaches. In an embodiment, the solution will account for various horizontal forces, which have largely been unaccounted for in the prior art approaches. In this manner, aspects of the invention can provide a WIM solution, which can provide accurate (e.g., within approximately one percent or better) measurement of the weight of various types of vehicles without knowledge of the make and/or model of the vehicle, let alone its cargo load, passenger weight distribution, and/or the like, prior to its entry into a sensor area for the WIM solution. Aspects of the invention can provide accurate weight measurements for vehicles traveling at a variety of speeds, performing any of various normal roadway maneuvers, and spanning a considerable range of characteristics.

A solution described herein can be implemented in various types of applications. In an embodiment, a WIM system is provided for screening commercial vehicles traveling on a roadway for selection for further inspection or other purposes. In another embodiment, a WIM system is provided for monitoring vehicles entering a sensitive area to determine, for example, if the vehicle is carrying dangerous or illicit cargo (e.g., an improvised explosive device (IED)).

A solution described herein can incorporate, for example, one or more of the following innovations:
- three-axis sensing, which enables the solution to account for both static and dynamic forces associated with a moving vehicle;
- compensation for non-constant velocity of the vehicle;
- error reduction in all three dimensions with variable vehicle behavior in multiple modes;
- multiple sensors to acquire data corresponding to vehicle parameters affecting the weight measurement (e.g., speed, wheelbase, track width, and/or the like) and the use of such data in calculating the vehicle weight;
- inclusion of sensor(s) to acquire data corresponding to environmental factors affecting the weight measurement (e.g., tiltmeter, anemometer, and/or the like) and the use of such data in calculating the vehicle weight;
- and/or the like.

FIG. 1 illustrates various parameters and forces relevant to the determination of wheel weight and vehicle weight in a dynamic system according to an embodiment. As illustrated, in general, a vehicle 2 rides on a set of wheels 4A-4D. While four wheels 4A-4D on two axles are shown for the vehicle 2, it is understood that the vehicle 2 can include any number of wheels and any number of axles. Furthermore, it is understood that the wheels 4A-4D can comprise any type of wheels including, for example, roadway wheels (e.g., tires), rail wheels, airplane wheels, and/or the like. To this extent, it is understood that the vehicle 2 can comprise any type of vehicle 2 capable of traveling along a surface using any type of wheel-based solution.
Regardless, when the vehicle is stationary, the static weight of the vehicle 2 is distributed across the wheels 4A-4D as static wheel weights for the wheels 4A-4D. The static wheel weight on a specific wheel can be determined by calculations dependent on the total weight of the vehicle 2 and the location of a center of gravity (CG) 3 for the vehicle 2. For example, each wheel 4A-4D can be located a certain distance from the CG 3 as measured along a track width of the vehicle 2 (indicated as distances c, d) and a certain distance from the CG 3 as measured along a wheelbase of the vehicle 2 (indicated as distances a, b). The following equation can be used to calculate the static wheel weight on the wheel 4A, WT_{4A}:

\[ WT_{4A} = WT_0 \left( \frac{d}{d+c} \right) \left( \frac{b}{b+a} \right) \]  (Equation 1)

where WT_0 is the total static weight of the vehicle 2. For a vehicle 2 having four wheels 4A-4D as illustrated, an effective track width of the vehicle 2, TW, is the sum of distances c and d, and an effective wheelbase length of the vehicle 2, WB, is the sum of distances a and b. In this case, Equation 1 can be rewritten as:

\[ WT_{4A} = WT_0 \left( \frac{d}{TW} \right) \left( \frac{b}{WB} \right) \]  (Equation 2)

While these equations presume the same track width TW for each axle, it is understood that the equations can be readily changed to accommodate axles of differing track widths TW.

In a dynamic context, e.g., when the vehicle 2 is moving, any combination of several factors can cause the apparent weight on a wheel 4A-4D to differ from the static weight on the wheel 4A-4D. For example, the vehicle 2 may be under an acceleration force, which can cause the vehicle 2 to tilt back, thereby increasing the apparent weight on the rear wheels of the vehicle 2. Similarly, the vehicle 2 may be under a deceleration force (e.g., due to braking), which can increase the apparent weight on the front wheels of the vehicle 2. Even a relatively low level of acceleration/deceleration can produce a several percent difference in the perceived weight on a wheel 4A-4D. Additionally, a vehicle 2 that is turning will exert lateral steering forces (which can be in either direction). It is understood that, as used herein, the term "acceleration" is inclusive of increasing speed, decreasing speed, and changes in direction of the vehicle 2.

Various other forces can be present regardless of any operation of the vehicle 2. For example, any rolling wheel 4A-4D is subject to a rolling friction force between the wheel 4A-4D and a surface 6 on which it is rolling. Furthermore, any moving vehicle 2 will encounter some level of aerodynamic resistance, and wind forces can act on the vehicle 2 from any direction.
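To make the bookkeeping of Equation 2 concrete, the following is a minimal sketch, assuming consistent units throughout; the function name and interface are illustrative only and are not part of the disclosure.

```python
# Hypothetical sketch of Equation 2: static weight on wheel 4A from the total
# static weight WT_0 and the CG offsets shown in FIG. 1. Names are assumptions.
def static_wheel_weight(wt0: float, d: float, tw: float, b: float, wb: float) -> float:
    """WT_4A = WT_0 * (d / TW) * (b / WB).

    wt0 -- total static vehicle weight WT_0
    d   -- track-width distance from the CG 3 to the far side (TW = c + d)
    b   -- wheelbase distance from the CG 3 to the far axle (WB = a + b)
    """
    return wt0 * (d / tw) * (b / wb)

# A 4,000 pound vehicle with the CG centered both ways carries one quarter of
# the weight, 1,000 pounds, on each wheel:
print(static_wheel_weight(4000.0, d=30.0, tw=60.0, b=60.0, wb=120.0))  # 1000.0
```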
In addition, the surface 6 may be at an angle, θ, which will cause a tilting apparent lateral force equal to the product of the actual vertical weight and the sine of θ, while the apparent vertical force perceived at the surface 6 will be equal to the product of the actual vertical weight and the cosine of θ. While not shown, it is understood that the surface 6 also could be on an incline/decline, which would result in forces similar to the lateral force described herein.

As indicated by the above discussion, there are a wide variety of forces that may affect the apparent wheel weights of a moving vehicle 2. A fully detailed statement of the problem would also need to include other factors, such as suspension interaction (e.g., camber, oscillation, etc.), cargo/passenger weight distribution shifting, and/or the like. To this extent, a general rigorous solution to such a fully detailed statement of the problem may not be feasible. Additionally, the problem may be ill-posed, since the resultant forces produced by small changes in vehicle parameters (such as the location of the center of gravity 3) can be large, making the solution highly sensitive to noise in the data; such noise can produce large errors in the estimates of the static weights. Furthermore, the problem is underdetermined because the number of unknown variables exceeds the number of independent equations describing it.

The inventors propose a practical WIM solution capable of reaching reasonable accuracy (e.g., one percent or better) using a combination of one or more new approaches and technologies. FIG. 2 shows an illustrative environment 10 for weighing a vehicle in motion according to an embodiment. To this extent, environment 10 includes a computer system 20 that can perform a process described herein in order to weigh the vehicle as it travels past a sensing component 34 as described herein. In particular, the computer system 20 is shown including a WIM program 30, which makes the computer system 20 operable to weigh the vehicle by performing a process described herein.

The computer system 20 is shown including a processing component 22 (e.g., one or more processors), a storage component 24 (e.g., a storage hierarchy), an input/output (I/O) component 26 (e.g., one or more I/O interfaces and/or devices), and a communications pathway 28. In general, the processing component 22 executes program code, such as the WIM program 30, which is at least partially fixed in the storage component 24. While executing program code, the processing component 22 can process data, which can result in reading and/or writing transformed data from/to the storage component 24 and/or the I/O component 26 for further processing. The pathway 28 provides a communications link between each of the components in the computer system 20. The I/O component 26 can comprise one or more human I/O devices, which enable a human user 12 to interact with the computer system 20, and/or one or more communications devices to enable a system user 12 and/or a sensing component 34 to communicate with the computer system 20 using any type of communications link. To this extent, the WIM program 30 can manage a set of interfaces (e.g., graphical user interface(s), application program interface, and/or the like) that enable human and/or system users 12 to interact with the WIM program 30. Furthermore, the WIM program 30 can manage (e.g., store, retrieve, create, manipulate, organize, present, etc.)
the data, such as the WIM data 40, using any solution. In any event, the computer system 20 can comprise one or more general purpose computing articles of manufacture (e.g., computing devices) capable of executing program code, such as the WIM program 30, installed thereon. As used herein, it is understood that "program code" means any collection of instructions, in any language, code or notation, that cause a computing device having an information processing capability to perform a particular action either directly or after any combination of the following: (a) conversion to another language, code or notation; (b) reproduction in a different material form; and/or (c) decompression. To this extent, the WIM program 30 can be embodied as any combination of system software and/or application software. Furthermore, the WIM program 30 can be implemented using a set of modules 32. In this case, a module 32 can enable the computer system 20 to perform a set of tasks used by the WIM program 30, and can be separately developed and/or implemented apart from other portions of the WIM program 30. As used herein, the term "component" means any configuration of hardware, with or without software, which implements the functionality described in conjunction therewith using any solution, while the term "module" means program code that enables a computer system 20 to implement the actions described in conjunction therewith using any solution. When fixed in a storage component 24 of a computer system 20 that includes a processing component 22, a module is a substantial portion of a component that implements the actions. Regardless, it is understood that two or more components, modules, and/or systems may share some/all of their respective hardware and/or software. Additionally, it is understood that some of the functionality discussed herein may not be implemented, or additional functionality may be included as part of the computer system 20.

When the computer system 20 comprises multiple computing devices, each computing device can have only a portion of the WIM program 30 fixed thereon (e.g., one or more modules 32). However, it is understood that the computer system 20 and the WIM program 30 are only representative of various possible equivalent computer systems that may perform a process described herein. To this extent, in other embodiments, the functionality provided by the computer system 20 and the WIM program 30 can be at least partially implemented by one or more computing devices, each of which includes any combination of general and/or specific purpose hardware with or without program code. In each embodiment, the hardware and program code, if included, can be created using standard engineering and programming techniques, respectively. Regardless, when the computer system 20 includes multiple computing devices, the computing devices can communicate over any type of communications link. Furthermore, while performing a process described herein, the computer system 20 can communicate with one or more other computer systems using any type of communications link. In either case, the communications link can comprise any combination of various types of optical fiber, wired, and/or wireless links; comprise any combination of one or more types of networks; and/or utilize any combination of various types of transmission techniques and protocols.

As discussed herein, the WIM program 30 enables the computer system 20 to weigh a vehicle 2 (FIG. 1) as it moves past a sensing component 34.
To this extent, FIGS. 3A and 3B show top views of illustrative WIM environments 10A, 10B according to embodiments. Each WIM environment 10A, 10B includes a sensing component 34 comprising a pair of sensing elements 50A, 50B located in the path of travel of a vehicle 2 traveling along a surface 6. Each sensing element 50A, 50B can have a corresponding sensing region SR within which the wheels 4 of the target vehicle 2 should roll over the sensing element 50A, 50B. The sensing elements 50A, 50B are located in the path of travel of the wheels 4 and dimensioned such that, over an entire range of possible wheelbases WB and track widths TW for the target vehicle 2, all wheels 4 on any axle of the vehicle 2 will travel on one of the sensing elements 50A, 50B in a corresponding sensing region SR as the vehicle 2 travels past the sensing component 34. Additionally, only the wheels 4 on a single side of a single axle of the vehicle 2 will be present on a sensing element 50A, 50B at any given time. While aspects of the invention are shown and described with respect to vehicles 2 having two axles, each with a single wheel 4 on either side, it is understood that aspects of the invention can be directed to vehicles 2 having any number of axles and any number of wheels 4 on a side of an axle. For multiple wheels on a single side of an axle, the sensing elements 50A, 50B can be dimensioned such that both wheels travel across the sensing elements 50A, 50B within the sensing region SR.

Furthermore, the sensing elements 50A, 50B can have a width (e.g., as measured in the direction the vehicle 2 is traveling) that is sufficient for the sensing device(s) included in each sensing element 50A, 50B to acquire at least a target number of measurements for vehicles 2 traveling at any speed within a range of speeds of travel. The sensing elements 50A, 50B can be formed of any type of material capable of supporting a vehicle 2 having a weight within a target range of vehicle weights to be processed by the WIM environment 10A, 10B, such as metal. In an embodiment, the sensing elements 50A, 50B are configured to acquire measurement data for vehicles 2 weighing between one and three tons, having the characteristics described herein, traveling up to thirty miles per hour, and having a maximum acceleration (deceleration) of approximately 0.2 times gravitational acceleration. In this case, a nominal width of each sensing element 50A, 50B in the direction of travel can be approximately twenty-two inches and a length in the transverse direction can be approximately thirty-seven inches. The sensing elements 50A, 50B can be configured to acquire measurement data at approximately four kilohertz, which can provide at least approximately ninety data points for each wheel 4 of a vehicle 2 traveling at the maximum speed through the sensing region SR.

Uncertainty in the measurement data acquired by the set of sensing devices included in each sensing element 50A, 50B can be caused by an abrupt transition between the surface 6 and a top surface of the sensing elements 50A, 50B. Such a transition can cause a substantial spike in acceleration and oscillation forces, and also can result in damage to the sensing element 50A, 50B.
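As a rough check on the plate-width and sampling figures above, the expected number of samples per wheel crossing follows directly from the plate width, the vehicle speed, and the sampling rate. The sketch below is illustrative only; the constant-speed, straight-crossing assumption and all names are ours, not the disclosure's.

```python
# Back-of-the-envelope check on the sampling figures above, assuming a
# straight crossing at constant speed.
MPH_TO_IN_PER_S = 17.6  # one mile per hour is 17.6 inches per second

def samples_per_crossing(width_in: float, speed_mph: float, rate_hz: float) -> float:
    """Measurements acquired while a wheel is over a plate of the given width."""
    crossing_time_s = width_in / (speed_mph * MPH_TO_IN_PER_S)
    return rate_hz * crossing_time_s

# A 22-inch plate at 30 mph sampled at 4 kHz yields roughly 167 samples over
# the full width, comfortably above the "at least approximately ninety data
# points" target even if only the central portion of the crossing is counted.
print(round(samples_per_crossing(22.0, 30.0, 4000.0)))  # 167
```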
In the WIM environment 10A, the sensing elements 50A, 50B are emplaced in the surface 6 such that a top surface of the sensing elements 50A, 50B is substantially level with the surface 6, thereby providing substantially flat transitions for the wheels 4 of the vehicle 2 as they roll from the surface 6 to the sensing element 50A, 50B and from the sensing element 50A, 50B to the surface 6.

Alternatively, as shown in FIG. 3B, the top surface of the sensing elements 50A, 50B can be located at a different level than the surface 6, e.g., as part of a portable or temporary emplacement of the sensing elements 50A, 50B on the surface 6. In this case, the WIM environment 10B can include a plurality of ramps 52A-52D to provide a substantially smooth transition to/from the surface of each sensing element 50A, 50B. For example, the plurality of ramps 52A-52D can be configured to provide a lead-in to and lead-out from the sensing elements 50A, 50B that are sufficiently gradual and gentle so as to minimize any oscillations and transient signals that are added due to the physical setup of the sensing elements 50A, 50B themselves. The length and grade of the ramps 52A-52D can be selected based on the height of the sensing elements 50A, 50B and one or more attributes of the vehicles 2 traveling past the sensing elements 50A, 50B using any solution. In an embodiment, the ramps 52A-52D can be approximately six feet long for every one inch of height, with the ramp/road and ramp/sensing element interfaces having shapes contoured/blended to reduce (e.g., minimize) any acceleration shock, which can cause transient forces due to excitation of the suspension of the vehicle 2 (a sizing sketch follows the next paragraph). The ramps 52A-52D can be formed of any suitable material, such as metal, high strength polymer, and/or the like. Furthermore, the ramps 52A-52D and/or the sensing elements 50A, 50B can be affixed to the surface 6 by, for example, a high friction or a "toothed" contact surface.

Regardless, each sensing element 50A, 50B includes a set of sensing devices, each of which can acquire data corresponding to the vehicle 2 as it passes over the sensing element 50A, 50B and communicate data corresponding to the vehicle 2 for processing by the computer system 20 using any wired and/or wireless communications solution. In an embodiment, the set of sensing devices for each sensing element 50A, 50B includes at least one vertical force sensing device 54 and at least one horizontal force sensing device 56. In a further embodiment, each sensing element 50A, 50B includes four vertical force sensing devices 54, one of which is located at each of the four corners of the sensing element 50A, 50B, and one horizontal force sensing device 56 located in a central portion of the sensing element 50A, 50B. It is understood that each sensing element 50A, 50B in a WIM environment 10A, 10B can be configured in a substantially identical manner. Alternatively, a WIM environment 10A, 10B can include sensing elements 50A, 50B having a plurality of different configurations of sensing devices. For example, a WIM environment 10A, 10B can include multiple sensing components 34, each of which includes a pair of sensing elements 50A, 50B having the same configuration of sensing devices, which can be the same as or differ from the configuration of the other sensing component(s) 34.
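The six-feet-per-inch rule of thumb for the ramps 52A-52D quoted above translates directly into a sizing helper; a trivial sketch, with an assumed interface:

```python
# Illustrative sizing helper for the ramp rule above: approximately six feet
# of ramp length per inch of sensing-element height. Names are assumptions.
def ramp_length_ft(element_height_in: float, ft_per_in: float = 6.0) -> float:
    """Nominal ramp length (feet) for a portable/temporary emplacement."""
    return element_height_in * ft_per_in

print(ramp_length_ft(2.0))  # a 2-inch-high sensing element -> ~12 ft ramps
```

The gentler the resulting grade, the less the vehicle's suspension is excited at the interfaces, which is precisely the transient that the contoured lead-ins are intended to suppress.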
When multiple sensing components 34 are included in an environment 10A, 10B, a spacing between the sensing components 34 can be selected based on a range of acceptable wheelbases WB for the target vehicles 2. For example, the spacing can be selected such that the wheels 4 from both axles of the vehicle 2 are substantially concurrently traveling over the sensing elements 50A, 50B of each sensing component 34. While conceptually shown near the surface 6 in close proximity to the sensing components 34, it is understood that the computer system 20 can be located anywhere. To this extent, in an embodiment, one or both of the sensing elements 50A, 50B can include a computing device of the computer system 20. In an embodiment, the sensing device(s) in each sensing element 50A, 50B of the environment 10A, 10B can be configured to communicate with the computer system 20 using a wireless communications solution. Furthermore, the sensing device(s) can operate without requiring wiring external to the sensing element 50A, 50B (e.g., using battery power).

Attributes of the surface 6, the sensing element 50A, 50B, and/or the deployment environment may vary in various deployments and/or over time. These variations can impact the measurement data acquired by the sensing device(s). In a portable/temporary emplacement, such as that shown in environment 10B, such variations can be unpredictable. To this extent, the sensing component 34 can include one or more ancillary sensing devices for acquiring data corresponding to the deployment location and/or environment. For example, temperature can affect the behavior of various sensing devices, such as load cells. Furthermore, temperature can affect the stiffness and response of various components of the suspension of a vehicle 2. To this extent, a sensing element 50A, 50B can include one or more temperature sensors 51, which can provide temperature data for processing by the computer system 20 as part of a WIM process described herein.

According to the invention, a sensing element 50A, 50B comprises a tiltmeter 53. A tilt of the surface of the sensing element 50A, 50B as small as half a degree can introduce a difference of approximately one percent in the measured weight of a vehicle 2. The tiltmeter 53 can acquire and provide data corresponding to the difference between the angle of the surface of the sensing element 50A, 50B and the horizontal level to the computer system 20, which can use the data as part of a WIM process described herein. In an embodiment, the temperature sensor 51 and/or the tiltmeter 53 are affixed to a surface of a load plate of the sensing element 50A, 50B.

Furthermore, the sensing component 34 can include an anemometer 55 and a wind direction sensor 57 for acquiring data corresponding to the wind speed and direction, which can be provided to the computer system 20 for processing. The computer system 20 can use the wind data to quantify and account for wind effects on the measurement data acquired by the sensing component 34, which can affect the aerodynamic component. For example, if the wind is blowing from the rear of the vehicle 2, the perceived aerodynamic effect can drop off significantly. Rather than being merely a counterbalancing force, a wind from the rear of a vehicle 2 can effectively drop the apparent velocity of the vehicle 2. As an example, a vehicle 2 traveling at sixty miles per hour may normally experience one hundred twenty pounds of aerodynamic resistance.
However, with a rear wind of approximately thirty miles per hour, the effective velocity of the vehicle 2 drops to thirty miles per hour. As aerodynamic forces increase with the square of the speed, a reduction by a factor of two in effective velocity will result in a reduction by a factor of four in resistance. To this extent, the computer system 20 can account for wind coming from any direction, which can affect the measurement data acquired by the sensing elements 50A, 50B. In an embodiment, the anemometer 55 and/or the wind direction sensor 57 are located transversely from the sensing element 50A, 50B at a distance of at least approximately three feet from the surface 6. Furthermore, the anemometer 55 and/or the wind direction sensor 57 can be located at a height above the surface 6 which is typical of the vertical center of gravity location of the target vehicles 2 to be measured using the sensing component 34.

While the vehicle 2 is shown including four wheels 4 on two axles, it is understood that an embodiment can be directed to any type of vehicle 2 having any number of wheels 4 in any configuration. Furthermore, while the environments 10A, 10B are directed to measuring a vehicle 2 traveling on a roadway, it is understood that an embodiment can be directed to other types of wheeled vehicles, such as a rail vehicle. To this extent, FIG. 4 shows an illustrative WIM environment 10C for weighing a rail vehicle 2 according to an embodiment. The rail vehicle 2 can be any type of rail vehicle operating in various types of rail environments, such as freight, high speed transit, passenger/local transit, and/or the like. Furthermore, while the rail vehicle 2 is shown traveling along two rails 5A, 5B, it is understood that the rail vehicle can travel along any number of rails 5A, 5B. In any event, the sensing component 34 is shown including a supporting foundation 60 on which a set of sensing devices 62, 64 are located. Each sensor 62, 64 can be placed such that it is located between a rail 5A, 5B and the supporting foundation 60. The supporting foundation 60 can be formed of any material having sufficient rigidity to not flex appreciably during the passage of the rail vehicles 2 of a train, unlike ordinary ballast 7, which permits the rails 5A, 5B to flex appreciably during the passage of the rail vehicles 2 of a train. In an embodiment, the supporting foundation 60 comprises reinforced concrete of a thickness and type normally used in constructing other similar supporting platforms, such as hard concrete "aprons" for railyard service shops, airport runways, and/or the like.

Each rail 5A, 5B is shown including a pair of vertical force sensing devices 62 with a horizontal force sensing device 64 therebetween. In this configuration, the sensing devices 62, 64 can acquire data corresponding to the weight of the rail vehicle 2 as each rail wheel 4 passes over the supporting foundation 60. Subsequently, the sensing devices 62, 64 can provide data corresponding to the weight of the rail vehicle 2 for processing by a computer system 20, which can be located a safe distance from the rails 5A, 5B. In an embodiment, the total spacing between the first and last sensing devices 62 on a given rail 5A, 5B is selected such that only a single rail wheel 4 will be located therebetween as the rail vehicle 2 passes through the sensing component 34. However, it is understood that an embodiment of the sensing component 34 can include any number of, type(s) of, and placement of sensing devices.
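Returning to the environmental sensors introduced above (the tiltmeter 53, anemometer 55, and wind direction sensor 57), the corrections they enable amount to simple arithmetic. The sketch below is a minimal illustration under stated assumptions; the drag coefficient and all names are ours, not the disclosure's.

```python
# A minimal sketch of two environmental corrections described above: the
# tiltmeter reading undoes the cos(theta) attenuation a tilted plate applies
# to the vertical load, and the wind sensors reduce the vehicle speed to an
# effective airspeed for the drag estimate. The per-vehicle drag coefficient
# k is an assumption; the patent prescribes none of these interfaces.
import math

def true_vertical_load(measured_vertical: float, tilt_deg: float) -> float:
    """Recover the actual vertical weight from the apparent one on a plate
    tilted by tilt_deg (apparent vertical = actual * cos(theta))."""
    return measured_vertical / math.cos(math.radians(tilt_deg))

def aerodynamic_drag(speed_mph: float, wind_mph: float, wind_from_rear: bool,
                     k_lb_per_mph2: float) -> float:
    """Drag grows with the square of effective airspeed: vehicle speed minus
    a tailwind, or plus a headwind."""
    effective = speed_mph - wind_mph if wind_from_rear else speed_mph + wind_mph
    return k_lb_per_mph2 * max(effective, 0.0) ** 2

# The worked example above: 120 lb of drag at 60 mph implies k = 120 / 60**2;
# a 30 mph tailwind halves the effective speed and cuts drag by a factor of four.
k = 120.0 / 60.0 ** 2
print(aerodynamic_drag(60.0, 30.0, wind_from_rear=True, k_lb_per_mph2=k))  # 30.0
```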
In an embodiment, each of the horizontal and vertical force sensing devices shown in FIGS. 3A, 3B, and 4 comprises a load sensor. In general, when subjected to real-world stresses containing both vertical and horizontal components, a load sensor can be vulnerable to "crosstalk." To this extent, in order to reliably apply computational methods to extract the static forces from the complex dynamic forces, it is desirable for the vertical and horizontal components of the stresses resulting from a load to be isolated from each other.

FIGS. 5A and 5B show illustrative designs for a vertical load sensing device 54 and a horizontal load sensing device 56, respectively, according to an embodiment. In particular, FIGS. 5A and 5B show side and projected views of the corresponding load sensing device 54, 56, respectively. As a wheel rolls over each load sensing device 54, 56, the load sensing device 54, 56 is subjected to a force having a vertical and a horizontal component. To this extent, each load sensing device 54, 56 is configured to limit the amount of crosstalk interference in the measurement data acquired by the corresponding load sensing device 54, 56.

In FIG. 5A, the vertical load sensing device 54 includes a load plate 70 on which the wheels 4 of a vehicle 2 roll as the vehicle 2 travels past the sensing component 34 (FIG. 2). Below the load plate 70 is a vertical (Z axis) load cell 72. The load cell 72 has a contact plate 74, which has an interface 76 with the load plate 70 that significantly reduces the horizontal load transferred to the vertical load cell 72. The load plate 70 can be constrained such that the load cells 72 only deflect under external forces, and the deflection is on the order of approximately one to five mils maximum. For a load cell 72 formed of metal, the elasticity of the metal can cause the load cell 72 to spring back to its original position when the external forces are removed. As a result, substantially none of the horizontal component is transferred to the vertical load cell 72. However, there is no gap between the load plate 70 and the contact plate 74 in the vertical direction. As a result, the full vertical component of the force is transferred to, and therefore transmitted as a measured vertical load by, the vertical load cell 72.

In FIG. 5B, the horizontal load sensing device 56 includes a load plate 80 on which the wheels 4 of the vehicle 2 roll. Below the load plate 80 is a horizontal (X-Y axis) load cell 82. The load plate 80 and the load cell 82 are connected via an arm 84 projecting from the load cell 82 into a sleeve 86 connected to the load plate 80. The arm 84 and sleeve 86 are dimensioned such that the arm 84 can move up and down along the interface 88, but the arm 84 does not move horizontally with respect to the sleeve 86. Since the load plate 80 is able to move in the vertical direction with respect to the load cell 82, no appreciable vertical force experienced by the load plate 80 is transferred to the load cell 82. As no horizontal movement is permitted due to the arm 84/sleeve 86 interface, the full horizontal component of the force is transferred to, and therefore transmitted as a measured horizontal load by, the load cell 82.

As described herein, the load sensing devices 54, 56 can be implemented in an environment, such as the WIM environments 10A (FIG. 3A) and 10B (FIG. 3B). However, it is understood that the load sensing devices 54, 56 are only illustrative of various types of load sensors that can be utilized.
For example, in the rail-based WIM environment 10C (FIG. 4), a portion of each rail 5A, 5B can act as the load plate. In this case, the load sensing devices 62, 64 can comprise a load cell, which is connected to the corresponding portion of the rail 5A, 5B using a connection solution similar to those described with respect to the load sensing devices 54, 56. As the portion of the rail 5A, 5B comprises an elongated rectangular shape, a single vertical load sensing device 62 can be located below the rail 5A, 5B and define two adjacent "corners" of the rectangular load plate. Furthermore, it is understood that the amount of actual movement in any of the X, Y, Z directions is very small and does not imply or require a substantial allowance for movement. It also is understood that either of the interfaces 76, 88 can be lubricated, constructed with low friction surfaces, and/or the like. In any event, the computer system 20 (FIG. 2) can determine and account for forces caused by friction using an analytic solution, empirical solution, and/or the like.

Using an approach described with respect to the load sensing devices 54, 56, aspects of the invention permit the acquisition of accurate and independent measurements of the horizontal and vertical components of the force applied by the passing vehicle 2. It is understood that, using similar approaches, other refinements are possible. For example, an embodiment can include load sensing devices that isolate the X and Y components of the force applied by the passing vehicle 2.

As described herein, knowledge of specific attributes and dimensions of a vehicle 2 is important for calculating the wheel 4 weights. However, in practice, information such as the vehicle speed, wheelbase, track width, and/or the like is often not available for a vehicle 2 passing over the sensing component 34. In an embodiment, the use of a particular configuration of the sensing devices in the sensing component 34 enables the computer system 20 (FIG. 2) to process data acquired by the sensing devices to extract several attributes of the vehicle 2 during or shortly after acquisition of the data.

For example, FIG. 6 shows various illustrative measurements of a wheel 4 traveling over a sensing component 34 according to an embodiment. As the wheel 4 travels along the surface 6 (from left to right in FIG. 6), it passes over the sensing component 34. The sensing component 34 comprises a known width, W_SC. While the wheel 4 must traverse the entire width W_SC, the wheel 4 is only fully on the sensing component 34 for some smaller distance, which is dependent on the width of the wheel contact patch, W_CP. The width of the wheel contact patch W_CP can vary with tire inflation, loading, tire diameter and width, and/or the like. As a result, as the wheel 4 travels over the sensing component 34, it traverses a relatively short transition distance T_W during which the wheel 4 transitions from the surface 6 to the sensing component 34, travels a distance D over which the wheel 4 is fully on the sensing component 34, and subsequently traverses a second, generally symmetrical transition distance T_W as the wheel 4 returns to the surface 6. As the wheel 4 rolls across the sensing component 34, the load on the sensing component 34 varies in a manner similar to the curve 90.
In particular, the load increases as the wheel 4 traverses the initial transition width T_W, reaches a substantially steady state as the wheel 4 traverses the distance D, and decreases as the wheel 4 traverses the second transition width T_W as it returns to the surface 6. Note that while the transition widths T_W are substantially the same, the corresponding portions of the curve 90 are not necessarily symmetric inverses of one another. For example, if the vehicle is braking while passing over the sensing component 34, the decreasing portion of the curve 90 may be longer and flatter, as it will extend over a longer period of time than the earlier portion of the curve 90 during which the vehicle was not undergoing braking.

As described herein, the sensing component 34 can include various types of sensing devices for acquiring data corresponding to the wheel 4 and the corresponding vehicle, such as data corresponding to the load placed on the sensing component 34 by the wheel 4. In an embodiment, such sensing devices are configured to acquire the data at a high rate of sampling to permit a sufficient number of data points to be acquired by each sensing device during the passage of the wheel 4. For an ability to extract and remove dynamic components from the measurements, an accurate measurement of the changes in the dynamic forces seen by the wheel 4 can be important. To acquire such measurements, a sample rate can be selected based on the expected frequencies of the target components. Using the Nyquist sampling theorem, the sample rate should be at least twice the highest frequency of interest, and it can be useful to permit some oversampling to allow for averaging and noise/error correction.

In an illustrative embodiment, the width W_SC is approximately three feet and each transition width T_W is approximately six inches, thereby making the distance D approximately two feet. For a vehicle traveling approximately sixty miles per hour, the wheel 4 will cross the distance D in approximately 0.0227 seconds (1/44 of a second). In this case, assuming a frequency of one hundred hertz for the maximum contributing component and Nyquist rate sampling with five times oversampling, a sampling rate of one thousand hertz is required, which will provide approximately twenty-two data points as the wheel 4 traverses the distance D (a sketch of this arithmetic follows below). Such a sampling rate can be readily provided by various sensing devices and computing devices. Using sensing devices with higher sampling rates can enable faster vehicle travel over the sensing component 34.

Referring to FIGS. 3A and 6, the computer system 20 (FIG. 2) can derive a number of attributes of the vehicle 2 and wheels 4 from the data acquired by each sensing element 50A, 50B. For example, the computer system 20 can derive the location of the wheel 4 on the sensing element 50A, 50B based on the load data acquired by each vertical force sensing device 54. In particular, when a wheel 4 is directly over a vertical force sensing device 54, that vertical force sensing device 54 will see substantially all of the load from the wheel 4, while the sensing devices 54 on the opposing side will see nearly none of the load. Similarly, if the wheel 4 passes directly between two of the vertical force sensing devices 54, each vertical force sensing device 54 will see approximately half of the load.
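The sample-rate arithmetic referenced above can be sketched directly; the following reproduces the worked numbers (100 Hz maximum contributing component, five times oversampling, 2 ft steady-state distance D at 60 mph) under an assumed interface, with names that are ours rather than the disclosure's.

```python
# A sketch of the Nyquist-based sample-rate selection described above:
# at least twice the highest frequency of interest, times an oversampling
# factor for averaging and noise/error correction.
def required_sample_rate_hz(max_component_hz: float, oversampling: float) -> float:
    return 2.0 * max_component_hz * oversampling

def points_over_distance(distance_ft: float, speed_mph: float, rate_hz: float) -> float:
    crossing_time_s = distance_ft / (speed_mph * 5280.0 / 3600.0)  # mph -> ft/s
    return rate_hz * crossing_time_s

# 100 Hz maximum contributing component with 5x oversampling -> 1,000 Hz,
# which yields about 22 data points while a 60 mph wheel covers the 2 ft
# distance D (the 0.0227 second crossing noted in the text).
rate = required_sample_rate_hz(100.0, 5.0)
print(rate, int(points_over_distance(2.0, 60.0, rate)))  # 1000.0 22
```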
Referring to FIGS. 3A and 6, the computer system 20 (FIG. 2) can derive a number of attributes of the vehicle 2 and wheels 4 from the data acquired by each sensing element 50A, 50B. For example, the computer system 20 can derive a location of the wheel 4 on the sensing element 50A, 50B based on the load data acquired by each vertical force sensing device 54. In particular, when a wheel 4 is directly over a vertical force sensing device 54, that vertical force sensing device 54 will see substantially all of the load from the wheel 4, while the sensing devices 54 on the opposing side will see nearly none of the load. Similarly, if the wheel 4 passes directly between two of the vertical force sensing devices 54, each vertical force sensing device 54 will see approximately half of the load. To this extent, the computer system 20 can compare and evaluate the load changes seen on all four of the vertical force sensing devices 54 as a wheel 4 traverses the sensing element 50A, 50B to determine the location of the wheel 4, which can also identify the direction of travel of the wheel 4 over the sensing element 50A, 50B (e.g., straight across or at an angle). Since the relative locations of each sensing element 50A, 50B can be precisely known after installation, the computer system 20 can use a combination of the positions of two wheels 4 concurrently on each sensing element 50A, 50B to determine the track width TW of the vehicle 2. The computer system 20 can determine a speed of the vehicle 2 based on the amount of time the wheel 4 takes to traverse the width W_SC. Since the width W_SC is known, the speed can be found by dividing the width W_SC by the time it takes for the wheel 4 to traverse the width W_SC. By comparing the speed calculations for multiple axles of a vehicle 2, the computer system 20 can determine the acceleration of the vehicle 2. Furthermore, the computer system 20 can determine the wheelbase WB for the vehicle 2, e.g., from an average speed between the axles and the time between the wheels 4 of the axles traversing the sensing elements 50A, 50B. The computer system 20 can use various attributes of the vehicle 2 to extract one or more dynamic components of the forces exerted by the vehicle 2. For example, aerodynamic forces (e.g., drag) vary with the square of the speed of the vehicle 2; e.g., a vehicle 2 experiencing thirty pounds of aerodynamic drag at thirty miles per hour will experience approximately one hundred twenty pounds of drag at sixty miles per hour. To this extent, by accurately calculating the speed of the vehicle 2, the computer system 20 can accurately remove aerodynamic factors from the measurement data. For lower speeds (e.g., between approximately five and approximately thirty miles per hour), the computer system 20 can use a plot of the curve 90 versus the amount of time for the wheel 4 to pass through the distance D to accurately estimate a speed of the vehicle 2 while it passed through the distance D. Furthermore, at the lower speeds, the effect of drag can be ignored.
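A rough sketch of how these attribute derivations might look in code follows; it is illustrative only, not the patent's implementation. All function names are hypothetical, and the wheel-position helper reduces the four-sensor comparison described above to a two-sensor lever rule for brevity.

```python
# Illustrative derivation of vehicle attributes from sensing-element data.
# Function names and example values are hypothetical; the relationships are
# the ones described in the text.

MPH_TO_FPS = 5280 / 3600

def speed_fps(width_ft: float, crossing_time_s: float) -> float:
    """Speed from the known width W_SC and the wheel's crossing time."""
    return width_ft / crossing_time_s

def wheelbase_ft(avg_speed_fps: float, axle_gap_s: float) -> float:
    """Wheelbase WB from average speed and time between axle crossings."""
    return avg_speed_fps * axle_gap_s

def scaled_drag_lb(drag_lb: float, speed_mph: float, new_speed_mph: float) -> float:
    """Aerodynamic drag varies with the square of speed."""
    return drag_lb * (new_speed_mph / speed_mph) ** 2

def wheel_position_ft(load_a: float, load_b: float, sensor_gap_ft: float) -> float:
    """Lateral wheel position between two vertical load sensors by load share:
    0.0 means directly over sensor A; sensor_gap_ft means directly over B."""
    return sensor_gap_ft * load_b / (load_a + load_b)

if __name__ == "__main__":
    v = speed_fps(3.0, 0.0341)              # ~88 ft/s (~60 mph)
    print(f"speed ~ {v / MPH_TO_FPS:.0f} mph")
    print(f"wheelbase ~ {wheelbase_ft(v, 0.136):.1f} ft")
    print(f"drag at 60 mph: {scaled_drag_lb(30.0, 30.0, 60.0):.0f} lb")   # ~120 lb
    print(f"wheel sits {wheel_position_ft(400.0, 400.0, 6.0):.1f} ft from sensor A")
```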
For vehicles 2 traveling at lower speeds (e.g., between approximately five and approximately thirty miles per hour), it is possible for the vehicle 2 to undergo several changes of acceleration as the vehicle 2 passes the sensing component 34. For example, a shift of an automatic transmission can take approximately 0.4 seconds. A vehicle having a wheelbase WB of twelve feet and traveling at five miles per hour will take approximately two seconds for the wheels 4 to completely travel through a sensing component 34 having sensing elements 50A, 50B of widths of three feet. For a vehicle attempting to accelerate or decelerate quickly during this time, the transmission could theoretically shift up to four or five times (although the speeds required would likely preclude an average speed so low). Each shift can initiate accelerations of up to approximately 0.25 times gravitational acceleration for a short period. As a result, to accommodate very low speed operation of the vehicles 2, an embodiment can include one or more other solutions for accounting for shifting, jerky braking or acceleration, and/or the like, which a vehicle 2 may undergo as it traverses the sensing component 34. For example, an embodiment can include an acoustic or radar-based speed measurement device, which acquires multiple measurements of the speed of a vehicle 2 as it travels past the sensing component 34. Such a speed measurement device also can be included for vehicles 2 traveling at higher speeds, although such a device may not be necessary as described herein. Absent another component for measuring the speed of a vehicle 2 operating at a low speed, an embodiment can require the vehicles 2 to maintain a specified minimum speed. To this extent, for some applications, such as a sensing component 34 embedded in a roadway or on a bridge deck, the minimum speed can be reasonably assumed during normal traveling conditions for the vehicles 2. In an embodiment, the computer system 20 resolves a set of static weights corresponding to a vehicle 2 from measurement data corresponding to dynamic forces caused by the vehicle 2 moving past the sensing component 34 using a solution comprising a combination of theoretical and empirical approaches. Initially, the computer system 20 can construct a model of the vehicle 2 moving past the sensing component 34, which can be stored as WIM data 40 (FIG. 2). The model can include all of the forces and factors which are presumed to be significant in the particular application, and can include various sub-models. For example, a pre-existing vehicle performance model can be obtained from a third party, such as a vehicle simulation product (e.g., CARSIM® provided by Mechanical Simulation Corporation), and utilized as a sub-model in the model. Similarly, a sub-model can be created from finite element modeling performed on a designed sensing element 50A, 50B to determine its response to various types of loads. In any event, the computer system 20 can use the model to provide data predicting the responses to be seen by the sensing component 34 under various proposed test conditions. The constructed model can include various computations, which account for the various forces that can be present as a vehicle 2 passes by the sensing component 34.
For example, the model can include the following equation to consider the lateral forces induced by turning the vehicle 2, which will induce an apparent change in weight as follows: $$\Delta W_F = \frac{A_Y W_S}{t_W}\left[\frac{h_2 K_F'}{K_F + K_R - W_S h_2} + \frac{L - a_S}{L} Z_{RF}\right] + \frac{W_{uF}}{t_W} Z_{WF}$$ where ΔW_F is the change in apparent weight on a given wheel, K_R and K_F are the rear and front roll stiffnesses, respectively, Z_RF and Z_WF are the roll center heights of the axles of the vehicle 2, h_2 is the height of the center of gravity above the nominal roll axis, a_S is the location of the sprung mass center of gravity, A_Y is the transverse acceleration, t_W is the track width, W_S is the static weight of the vehicle 2, L is the wheelbase, and W_uF is the weight of the front unsprung mass. As illustrated, numerous variables are involved in such a calculation, and different vehicles 2 will have different stiffnesses, centers of gravity, roll centers, etc. In any event, an environment, such as the environment 10A, can be physically constructed and the computer system 20 can obtain WIM data 40 for various types of vehicles 2 traveling over the sensing component 34 at various speeds and performing various driving operations (e.g., steering, braking, etc.). The computer system 20 can perform a comparison of the acquired WIM data 40 with data derived from the model. One or more iterations of design, modeling, and testing can be performed in order to arrive at a target congruency level between the model and the real-world measurements. Such an iterative process can be used to determine, for example, how much of a difference a variation in a parameter (e.g., roll stiffness) makes on the overall measurements, whether a usable function for average roll stiffness can be derived and used across the weights of various vehicles or whether additional information, such as a general category of vehicle (e.g., sport utility vehicle, panel truck, tractor trailer, hatchback, etc.), will be required to obtain an estimate of roll stiffness, and/or the like. Similarly, such an iterative process can derive the effect of variations in other attributes that are not easily modeled. For example, the effect of different cargo configurations can be examined by keeping other variables constant while passing differently loaded vehicles 2 over the sensing component 34. Extracting partial or complete usable models for use by the computer system 20 in determining one or more relevant parameters may require non-algorithmic approaches. For example, a neural network can be instantiated and trained to recognize a particular target phenomenon across a wide variety of situations. Regardless, it is understood that various approaches can be utilized to obtain a complete model for use by the computer system 20 in evaluating and processing measurement data for vehicles 2 during operation in the environment 10A. Construction of a well-known representation of the causes and effects of various types of dynamic effects can be used by the computer system 20 to create a solution to the "inverse problem." That is, given the signals having all the resultant dynamic effects, and given the data from the system on the conditions, the computer system 20 can determine which dynamic effects were responsible for which portion of the signal and remove them, leaving only the static forces. In an embodiment, the computer system 20 can use a neural network, a Bayesian network, a Kalman filter, and/or the like, to recognize the effects from the signals derived from the WIM data 40 received from the sensing component 34.
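As a concrete rendering of the lateral load-transfer equation above, the following Python sketch evaluates it directly. The parameter values in the example call are invented placeholders, not values from the patent; a real system would use measured stiffnesses, geometry, and weights for the vehicle class at hand.

```python
# Direct evaluation of the lateral load-transfer equation above.
# All values in the example call are placeholders for illustration only.

def front_wheel_load_transfer(
    a_y: float,       # A_Y: transverse (lateral) acceleration, in g units
    w_s: float,       # W_S: static weight of the vehicle (lb)
    t_w: float,       # t_W: track width (ft)
    h2: float,        # h_2: CG height above the nominal roll axis (ft)
    k_f: float,       # K_F: front roll stiffness
    k_f_prime: float, # K_F': front roll stiffness term in the numerator
    k_r: float,       # K_R: rear roll stiffness
    length: float,    # L: wheelbase (ft)
    a_s: float,       # a_S: sprung-mass CG location along the wheelbase (ft)
    z_rf: float,      # Z_RF: roll center height (ft)
    w_uf: float,      # W_uF: front unsprung weight (lb)
    z_wf: float,      # Z_WF: roll center height term in the final term (ft)
) -> float:
    """Apparent change in front-wheel weight induced by cornering."""
    roll = (h2 * k_f_prime) / (k_f + k_r - w_s * h2)
    geometric = ((length - a_s) / length) * z_rf
    sprung_term = (a_y * w_s / t_w) * (roll + geometric)
    # Final term as written in the equation above (no A_Y factor).
    unsprung_term = (w_uf / t_w) * z_wf
    return sprung_term + unsprung_term

if __name__ == "__main__":
    dw = front_wheel_load_transfer(
        a_y=0.3, w_s=4000.0, t_w=5.0, h2=1.2, k_f=30000.0, k_f_prime=30000.0,
        k_r=25000.0, length=10.0, a_s=4.5, z_rf=0.4, w_uf=150.0, z_wf=1.0,
    )
    print(f"apparent front-wheel weight change ~ {dw:.0f} lb")
```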
Furthermore, in addition to filtering raw data acquired by various sensing devices in the sensing component 34 to remove spurious noise, the computer system 20 can break down the data into different components relevant to the various solutions the computer system 20 uses to detect and recognize the various dynamic contributions to the detected apparent weight. For example, the computer system 20 can apply high-, low-, and/or band-pass filters, Fast Fourier Transforms, wavelet decomposition, and/or the like, to break the acquired data into different components.
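A minimal sketch of this decomposition step, assuming NumPy/SciPy are available: a zero-phase Butterworth band-pass isolates one band of the apparent-load signal, and an FFT reports its dominant frequency components. The 1 kHz rate matches the earlier illustrative example; the band edges and synthetic signal are assumptions for demonstration.

```python
# Decomposing a raw load signal into frequency components, as described above.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate (Hz), matching the earlier illustrative example

def bandpass(signal: np.ndarray, lo_hz: float, hi_hz: float, fs: float = FS):
    """Zero-phase Butterworth band-pass isolating one frequency band."""
    b, a = butter(N=4, Wn=[lo_hz, hi_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

def dominant_frequencies(signal: np.ndarray, fs: float = FS, top: int = 2):
    """Strongest spectral components of the load signal via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs[np.argsort(spectrum)[-top:][::-1]]

if __name__ == "__main__":
    t = np.arange(0, 0.5, 1.0 / FS)
    # Synthetic apparent load: static weight plus two dynamic components.
    load = 1000 + 80 * np.sin(2 * np.pi * 12 * t) + 25 * np.sin(2 * np.pi * 90 * t)
    suspension_band = bandpass(load, 5.0, 30.0)  # e.g., body/suspension motion
    print("dominant components (Hz):", dominant_frequencies(load))
    print("suspension-band peak (lb):", round(float(suspension_band.max()), 1))
```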
FIG. 7 shows an illustrative process for weighing a vehicle in motion according to an embodiment, which can be implemented by the computer system 20 (FIG. 2). Referring to FIGS. 3A and 7, in action 102, a vehicle 2 approaches and enters the sensing component 34. As described herein, the computer system 20 can obtain various WIM data 40 (FIG. 2) regarding the vehicle 2, such as a speed (e.g., from an acoustic or radar speed device), a number of axles, identification data (e.g., an image of the vehicle 2), and/or the like, using any solution. In action 104, the computer system 20 can acquire WIM data 40 from the sensing component 34, e.g., from the various sensing devices located in the sensing elements 50A, 50B. In action 106, the computer system 20 can perform one or more types of filtering, e.g., noise removal, curve smoothing, and/or the like, on some or all of the WIM data 40. In action 108, the computer system 20 can separate out/identify the WIM data 40 corresponding to the distance D (FIG. 6) from the WIM data 40 corresponding to the lead-in and lead-out transition widths TW using any solution. However, it is understood that the computer system 20 can retain the WIM data 40 corresponding to the transition widths TW, as such data can include information useful in the WIM process. For example, in an embodiment including ramps 52A-52D (FIG. 3B), the computer system 20 can process a transient or "bounce" present in the data from when the wheel 4 went up the ramp and reached the top to derive information regarding a stiffness of the various components of the suspension. Regardless, in action 110, the computer system 20 can process the WIM data 40 corresponding to the distance D to extract (e.g., using filters, averages, and/or the like) the relevant components for analysis. In action 112, the computer system 20 can apply a recognition engine to the filtered/extracted WIM data 40. In an embodiment, the computer system 20 can apply various types of analysis methods to the WIM data 40. Such analysis methods can be performed concurrently, e.g., using one or more specialized parallel processors, digital signal processors, and/or the like. Once the various contributing components have been recognized in the WIM data 40, in action 114 the computer system 20 can compute the dynamic contributions from each of the recognized components, e.g., using the final models instantiated in the computer system 20. In action 116, the computer system 20 can compute the static wheel weights, e.g., by removing all of the computed dynamic component contributions from the apparent wheel weight. In action 118, the computer system 20 can compute a static weight of an axle (e.g., by summing the static wheel weights for each wheel 4 on the axle), the total static weight of the vehicle 2 (e.g., by summing the static wheel weights for each of the wheels 4), and/or the like. In action 120, the computer system 20 can store the calculated static weight(s) corresponding to the vehicle 2, along with some of the other data, as a WIM record for the vehicle 2 in the WIM data 40 for later processing. Furthermore, the computer system 20 can evaluate the static weight(s) corresponding to the vehicle 2 against one or more ranges of acceptable weights and trigger any actions, if necessary. For example, the computer system 20 can generate a signal for processing by another system indicating that the vehicle 2 requires further inspection, e.g., due to a calculated static weight exceeding a maximum threshold, and/or the like. Subsequently, in action 122, the computer system 20 and sensing component 34 can exit an active mode until another vehicle 2 approaches/enters the environment 10A. While the embodiments described herein have primarily described various components and solutions for weighing a vehicle 2 in motion, it is understood that an embodiment can include various other ancillary components as would be recognized by one of ordinary skill in the art of vehicle detection and evaluation. For example, an embodiment of an environment described herein can include a set of sensing devices for detecting a vehicle 2 arriving at and/or departing from the sensing component 34. In this case, the computer system 20 and/or sensing device(s) in the sensing component 34 can be completely powered down in the absence of any vehicle 2 for which to acquire measurement data. Furthermore, data from such sensing devices can enable the computer system 20 to determine when something has gone wrong, such as a vehicle 2 not exiting from the area of the sensing component 34. Additionally, it is understood that one or more sensing devices and/or a sensing element 50A, 50B can include various types of safety mechanisms. For example, a load sensing device, which has a limited range of accurate sensing, can include a stop to prevent damage to the sensor should the load exceed the range. A sensing element 50A, 50B also can include multiple sensing devices of a similar type, but having different overlapping ranges of accurate sensing, which can enable the sensing element 50A, 50B to acquire accurate data over a wider range.
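The overlapping-range idea mentioned above can be made concrete with a simple fusion rule. The sketch below is one possible approach, not the patent's; the range limits and blending scheme are invented for illustration.

```python
# Combining two load sensing devices with different but overlapping accurate
# ranges, per the discussion above. All range values here are invented.

def fused_load(fine_lb: float, coarse_lb: float,
               fine_max: float = 5000.0, overlap: float = 4000.0) -> float:
    """Prefer the fine sensor inside its accurate range, cross-fade over the
    overlap band, and fall back to the coarse sensor beyond the fine range."""
    if fine_lb <= overlap:
        return fine_lb
    if fine_lb >= fine_max:
        return coarse_lb
    # Linear cross-fade between the two sensors across the overlap band.
    w = (fine_lb - overlap) / (fine_max - overlap)
    return (1.0 - w) * fine_lb + w * coarse_lb

if __name__ == "__main__":
    print(fused_load(3200.0, 3150.0))  # fine sensor only
    print(fused_load(4500.0, 4620.0))  # blended reading
    print(fused_load(5000.0, 9800.0))  # coarse sensor only
```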
Similarly, embodiments described herein can be implemented as part of a larger inspection system, which is configured to acquire various types of measurement and evaluation data for the vehicle 2. For example, in a commercial vehicle inspection application, an embodiment can provide a dynamic weigh station for trucks, which can eliminate a need to periodically divert, and effectively stop for significant periods of time, a large number of the commercial vehicles passing through an inspection area. In this case or in similar applications, a system can include a solution for detecting and classifying the vehicles passing the inspection location on the roadway so that the system does not acquire or process measurement data for irrelevant vehicles 2, such as passenger cars. One solution can incorporate a "smart video" system, such as that described in U.S. Patent No. 7,355,508, which can classify the vehicles 2 passing a sensing component 34 and identify those vehicles 2 that meet a commercial vehicle criterion. Similarly, an embodiment can be implemented as part of a comprehensive railroad inspection system, which is configured to evaluate various operating conditions of the railroad vehicles (e.g., wheel condition, brake condition, and/or the like). While the embodiments shown and described herein are directed to weighing a vehicle 2 in motion, it is understood that aspects of the invention can be directed to other applications. For example, in an embodiment, a system can include a sensing component 34 similar to that shown and described herein, which is attached to a bridge structure, such as an overpass, railroad bridge (e.g., girders or trusses), and/or the like. In this case, the computer system 20 can process the data acquired by the sensing component 34 as part of a load determination/monitoring solution for the bridge. In an embodiment, a component of the bridge structure itself can operate as a load plate as described herein. In addition to detecting and tracking potential overloading, the computer system 20 can process the data using additional or modified processing and/or data from other sensing devices, such as modified load cell modules affixed to other portions of the bridge structure, to accurately characterize the response of the bridge structure to various types of loads. Information regarding the actual, real-time response of a bridge structure to various types of loads can be useful in determining the best way to design bridges to support specific loads, e.g., neither overdesigning nor under-designing, and also can reveal incipient failure modes which were not anticipated in the original design, especially if the bridge structure is aging or has been modified from the original design. While primarily shown and described herein as a method and system for weighing a vehicle in motion, it is understood that aspects of the invention further provide various alternative embodiments. For example, in one embodiment, the invention provides a computer program fixed in at least one computer-readable medium, which, when executed, enables a computer system to weigh a vehicle in motion. To this extent, the computer-readable medium includes program code, such as the WIM program 30 (FIG. 2), which enables a computer system to implement some or all of a process described herein. It is understood that the term "computer-readable medium" comprises one or more of any type of tangible medium of expression, now known or later developed, from which a copy of the program code can be perceived, reproduced, or otherwise communicated by a computing device. For example, the computer-readable medium can comprise: one or more portable storage articles of manufacture; one or more memory/storage components of a computing device; paper; and/or the like. In another embodiment, the invention provides a method of providing a copy of program code, such as the WIM program 30 (FIG. 2), which enables a computer system to implement some or all of a process described herein. In this case, a computer system can process a copy of the program code to generate and transmit, for reception at a second, distinct location, a set of data signals that has one or more of its characteristics set and/or changed in such a manner as to encode a copy of the program code in the set of data signals. Similarly, an embodiment of the invention provides a method of acquiring a copy of the program code, which includes a computer system receiving the set of data signals described herein, and translating the set of data signals into a copy of the computer program fixed in at least one computer-readable medium. In either case, the set of data signals can be transmitted/received using any type of communications link.
In still another embodiment, the invention provides a method of generating a system for weighing a vehicle in motion. In this case, a computer system, such as the computer system 20 (FIG. 2), can be obtained (e.g., created, maintained, made available, etc.) and one or more components for performing a process described herein can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer system. To this extent, the deployment can comprise one or more of: (1) installing program code on a computing device; (2) adding one or more computing and/or I/O devices to the computer system; (3) incorporating and/or modifying the computer system to enable it to perform a process described herein; and/or the like.
A strong self-identity is essential when it comes to intimacy, asserting yourself and exercising good boundaries with your family, friends, and partner. It can also guard against feeling manipulated or resentful in personal relationships. But what is self-identity and where does it come from? Our self-identity is formed during childhood and adolescence. Ideally, this was a time when you explored your interests, balanced responsibilities, received encouragement, and were given feedback from others that confirmed your self-worth. Sadly, you may have had a difficult childhood or adolescence and missed out on these experiences. For instance, home or school may not have been a place of warmth but a place where you sometimes experienced neglect, criticism or even abuse. Understandably, you might have become more concerned with avoiding rejection or unwanted attention than with the important task of discovering who you are. For example, you may have cared for an emotionally needy parent at a young age, been mistreated by a family member, or disappeared into the background so that your hopes for recognition weren't once again dashed. What all these situations have in common is that self-identity was guided more by what others needed you to be than by what you chose to become. These experiences can lead to self-identity issues. Unsurprisingly, people with a history of abuse or neglect have a difficult time with both their sense of identity and interpersonal boundaries. How do you know if you have a strong self-identity? Recent research led by Erin Kaufman of the University of Utah suggests three components are important for self-identity. These are ownership of values, commitment to values, and self-worth. The researchers found that people who struggle with self-identity show the following characteristics: - Opinions and behavior change depending on who they are around - Knowledge of values is lacking and opinions are not held for long - There is a sense of feeling broken, empty or not known by anyone 3 Questions to Determine Whether You Have Self-Identity Issues - Do I rely on others to feel real? Do you feel better when you're copying someone else's opinions or ideas? Are you a different person when you're with different people? This is sometimes referred to as being a "social chameleon". However, there is a difference between skillfully adapting to different social situations and feeling like a different person in each situation. Healthy identity involves bringing forth the same version of yourself and feeling confident in who you are. Unhealthy identity is about changing who you are to fit the situation or group you are with. - Do my opinions and interests change often? Do you always know what is important to you? If someone were to describe you, would you know if they were right or wrong? While it is healthy to change and grow in response to life experiences, there are usually a set of values, interests or preferences to which people always feel connected. You need to know what you like, what you enjoy, and what you think of a topic or situation. This will help guide your decisions and your pursuit of things that bring meaning to your life. - Do I feel empty or broken? Do you feel like an empty shell? Do you feel lost or not know who you are? An absence of knowing yourself can be a sign of mental health problems and difficulty managing emotions. 5 Tasks That Help Build Self-Identity Every person in the world has an identity – something unique that distinguishes them from everybody else.
Fortunately, you can still discover and cultivate a healthy self-identity. You can begin to strengthen your identity and your interpersonal boundaries by exploring the following (journaling your responses is even better!): - Intellectual – you are entitled to your ideas, opinions, beliefs, preferences and philosophy on life, as are others. What are these? - Emotional – you are entitled to your feelings and reactions, as are others. How do you feel about things that are happening in your life? - Physical – you are entitled to your space, as are others. How do you feel about your surroundings, privacy, belongings and the people who impact these? - Social – you are entitled to your friends and to pursuing your interests, as are others. Who and what look interesting to you? - Story – your life is an unfolding story. What are some of your favorite memories and accomplishments? What are your hopes for the future? The Benefits of Self-Identity The research team from Utah found that a strong self-identity helped people navigate major life tasks, achieve intimacy with others, be autonomous, and find a place in society. If you are feeling confused about who you are, or find your behavior and opinions changing frequently, guidance from a professional therapist may help. Contact me to discuss where to go from here.
https://www.reconnectcounseling.com/the-importance-of-self-identity-for-setting-relationship-boundaries/
Best Practices for Commercial Waste Management Every business inevitably generates waste, and managing it is critical to any operation. Now that much of the world is shifting towards more eco-friendly practices, companies need to ensure they have responsible waste disposal systems that comply with environmental protection laws while maintaining their day-to-day operations. Implementing proper waste reduction and management strategies will make it easier for commercial establishments to keep their waste from piling up. This article will discuss several waste management practices that can be implemented in a commercial setting. What Is Commercial Waste Management? Commercial waste management refers to collecting, storing, transporting, and disposing of waste generated by businesses, organizations, and other commercial entities. It can include anything from paper and cardboard to electronics and hazardous materials. The primary goal of commercial waste management is to get rid of unwanted materials efficiently and cost-effectively while ensuring that commercial waste does not harm the environment. Types of Commercial Waste Understanding the different types of commercial waste can help businesses make more informed decisions about managing their trash. According to environmental regulations, commercial waste may contain toxic or hazardous materials that require extra care. Some commercial waste can be recycled, and proper disposal is the key to reducing the amount of trash a business will dump in landfills. The common types of commercial waste include: Solid Waste Solid waste encompasses any waste material in solid form, including paper, glass, metal, and plastic. This type of waste is common in the commercial sectors, particularly manufacturing and construction. Most solid waste can be recycled, but some may be hazardous and need proper disposal. Liquid Waste Waste in liquid form is called liquid waste. Examples of liquid waste include sewage, oil, chemicals, and pesticides. Liquid waste is more difficult to recycle than solid waste. Some liquid waste, like oil and sewage, is recyclable; other liquid waste, like chemicals and pesticides, can't be recycled and should be appropriately disposed of. Gaseous Waste Methane, carbon dioxide, halogenated hydrocarbons, and chlorine are some examples of gaseous waste. Like liquid waste, gaseous waste is difficult to recycle, and most gaseous waste can harm the environment. One way to dispose of it is to burn it off in an incinerator. Toxic Waste This type of waste is harmful to humans or the environment and must be disposed of properly to avoid danger. It comes in many different forms – solid, liquid, or gaseous. Examples of toxic waste include chemicals, pesticides, and hazardous oil. Toxic waste must never be recycled and should be sent to hazardous waste facilities for proper handling. Types of Equipment for Hauling Commercial Waste Businesses generate significantly larger volumes of waste daily than households do, and commercial operations require a much more comprehensive disposal system. One of the most popular waste management solutions commercial establishments use is the dumpster rental. Dumpsters are waste containers designed to hold large amounts of trash safely. Ideally, commercial establishments can hire a dumpster rental company to provide customized solutions to their commercial waste disposal needs.
Companies can arrange with the dumpster rental or waste management company the types of waste to dispose of, the kind of dumpster to use, where to place it, when to pick it up, and how to sort their commercial trash. By taking advantage of these services, businesses can ensure their waste is disposed of responsibly while preserving the health and safety of their employees, customers, community members, and, ultimately, our planet. Best Practices Companies Must Observe to Reduce and Properly Manage Their Waste Creating a waste management plan for a commercial setting can be taxing. Fortunately, there are waste management and dumpster rental firms that companies can partner with for proper commercial waste collection and disposal. Aside from seeking experts' help, there are also a few easy ways companies can reduce garbage and manage their commercial waste properly. Company-wide Waste Management Initiatives Businesses big or small must have company-wide waste reduction and management programs implemented for strict compliance across their entire organization. Here are some excellent programs that can get you started: - Run a recycling program. Educate employees about nearby recycling facilities in your community to encourage them to recycle not only in the workplace but also at home. - Take on the zero waste challenge. Urge employees to be mindful of the waste they generate in the office and think of ways to minimize the waste they produce individually. You may even incentivize any department or team that produces the least amount of waste on a recurring basis. - Try composting. If your business produces a lot of food leftovers or organic waste, composting is a suitable way to reduce organic scraps. Some companies have commercial green rooftops or greenhouses that would benefit from nutrient-rich soil from composting! - Sustainable packaging. If you are in a business that requires attractive packaging as an important part of your manufacturing and sales operations, consider going for sustainable materials. To reduce waste, you can swap your single-use containers for reusable plastics or recyclable packaging materials. - Go paperless. Encourage employees and vendors to minimize paper use and work with soft copies as much as possible. There are emails and cloud-based systems to store documents. Using less paper or going paperless can reduce waste and save trees from being constantly cut down to produce paper and other materials. - Refillable printer cartridges. One impactful way to reduce office waste is to refill printer cartridges. These cartridges are made of plastic and contain toxic liquid ink. Since they contain toxic materials, their disposal costs are generally higher. Refilling them instead of buying new ones helps reduce waste and cut the cost of their disposal. - Stop single-use plastic cutlery and water bottles. Provide kitchen items, cutlery, and drinking cups or bottles that are reusable instead of disposable. You can also install a water filter and encourage employees to bring reusable bottles. This initiative is economical and can help the environment by reducing plastic waste. - Practice proper waste labeling and disposal. At first, employees may be confused about which waste goes into which bin. Putting understandable and straightforward labels on every trash bin is helpful. You can even send out an email blast with pictures showing how to segregate waste properly.
Involve Your Customers in Your Eco-Friendly Initiatives Spread your waste-management initiatives not only among your employees but also to your customers. Letting your customers know that you are committed to eliminating waste for sustainability will give your company a great reputation and may inspire people to participate in the effort. Companies can create social media or email campaigns that promote sustainability or incentivize customers who participate in and support the effort. For example, if you are a business that depends on repeat purchases of goods, you may run a refill drive where customers can reuse their product bottles or packaging and purchase a refill instead of a new bottle. There are many more ways to promote sustainability efforts among customers. Assigning a task force within your company to come up with creative ideas will surely be a fun and fruitful experience, because the team is not only helping market your brand but also helping the environment by supporting the sustainability drive. Donate Surplus Items One person's trash is another person's treasure. Companies can donate reusable things instead of throwing them away. Consider calling local charities to take items that are unwanted by the business but still valuable to end users. Bottom Line Overall, commercial waste management is an important part of running any business; it is essential for preserving the health and safety of employees and customers while simultaneously reducing environmental impacts. Businesses should take advantage of available resources to ensure their waste is managed responsibly and effectively. From dumpster rental services to composting and recycling programs, companies have plenty of options for managing their waste in a way that is cost-efficient, sustainable, and safe. Get a Reliable Waste Management Partner for Your Business With all that you have learned so far in this article, you can expedite proper waste management by working with a reliable dumpster rental and waste management company like Cobblestone Container Service. They have over 50 years of collective experience in commercial waste management and can help get the job done right every time. Call them at 877-853-2922 to learn more about their services. With the help of professional dumpster rental and waste management services, businesses can keep their workspaces clean while taking steps toward reducing pollution and being conscious of their environmental impact.
https://todayspast.net/best-practices-for-commercial-waste-management/
Controversial papers are about the pros and cons of a specific topic. Controversial science topics encompass both the positive and negative features of a scientific invention or issue. In controversial science papers, students have to read and study current scientific research so as to know more about their chosen topic and clearly present their findings to their listeners. Such papers decide your future grades. Therefore, it is imperative to pick a good topic for your controversial science paper. Choosing a Good Controversial Science Topic A lot of students acknowledge that controversial topics are usually harder than other research topics. A controversial science topic should be related to current events in the world. It should be interesting to you and should not be too complex to write about. Moreover, the topic should be able to provide honest, useful, and realistic information to your audience. You should opt for the scientific issues that are not yet solved in our society. Additionally, you must figure out whether your chosen scientific concept or issue is good or bad instead of just describing it. Make sure you look at the popular scientific concerns from unusual angles. Controversial Science Topic Ideas Here, you can find interesting controversial topics in science that can help you get an idea of what to write about in your science paper or essay in your school or college. These ideas are not the usual hackneyed and clichéd topics that you find on other websites; rather, they are out of the box. Take a look at these controversial science topic examples. - Discuss whether evolution ties the entire field of biology together - It is important to conduct animal experiments - It is important to carry out embryonic stem cell analysis - Discuss whether vaccination is the origin of autism - Discuss whether the Large Hadron Collider will ruin the Earth - Is cold fusion genuine? - Is atomic energy harmful? - Discuss whether humans are the major contributors to climate change - Are GMOs harmful in any way? - Is it possible that lifestyle and mental diseases could be a result of a person's genetic makeup?
- Discuss the value of constant exercise on one's mental health - Discuss the effects of lack of sleep on people's performance at work - Discuss the value of adding food supplements to meals - Discuss whether there are working ways to connect obesity and malnutrition - Discuss whether species evolving through natural selection may be referred to as the main principle of biology - Discuss the possibility of overuse of technology reducing the human brain's abilities - Discuss how long non-renewable sources will last - Discuss whether the benefits of nuclear power surpass its potential dangers - Discuss whether all countries ought to impose a maximum number of kids every person can have - Discuss the possibility of completely using other sources of fuel instead of petroleum - Discuss the impact of human activities on global warming - Discuss whether manufacturing with microorganisms would solve the limitations in raw metals - Discuss whether the ancient way of eating would be ideal for today's human beings, with reference to meat consumption - Discuss whether there exists a working alternative to the space junk challenge - Discuss whether human beings ought to seek methods to live on other planets or in space - Discuss economical methods of trapping and utilizing carbon dioxide - Discuss the main obstacles facing the manufacturing of renewable plastics - Discuss whether the dangers of using nanotechnology in medicine are greater than the advantages - Discuss the potential dangers of honey bee colonies losing their strength - Discuss the other options that may be used instead of antibiotics in agriculture - Discuss the dangers and benefits of employing drones in wars - Discuss the health demerits of taking genetically modified products - Discuss the idea of smokers suffering from lung cancer getting reimbursement - Discuss PETA and the fight against animal mistreatment as the basis for making people vegetarian - Discuss the use of antibiotics in the meat industry - Discuss whether vaccinations ought to be compulsory for admission to public school - Discuss whether vaccinations are effective or more harmful than useful - The existence of aliens - Discuss drug testing for welfare candidates - Discuss the use of harmful organisms for warfare - Unidentified flying objects - America's fight against obesity - Stem cell analysis - The supernatural world - Discuss the treatment of the mentally challenged - Discuss eliminating super-size options at junk food joints - Natural sources and alternative sources of energy - Can natural sources of energy replace the use of petroleum - What are genetically modified foods and their effects on our well-being - Discuss whether genetic modification is advantageous or destructive - Is it possible for the community to do away with plastics - Discuss whether the advantages of stem cell transplants outdo the costs - The influence of robots in manufacturing on human jobs - Do you think technology is a factor that increases people's social segregation? - Discuss which is more valuable between space and ocean exploration - Does AI pose any risk to mankind? - Discuss whether organizations ought to be permitted to keep their scientific secrets - Learning ought to center on mathematics and science and less on music and art - Discuss whether Google is the ultimate tool for looking up information online or whether we ought to explore other options - Do mobile phones have any harmful impact on the user?
- Discuss the likelihood of science and religion thriving together - Is it right for kids to use smartphones - Discuss the advantages and disadvantages of fully computerized cars - Is it possible for regulations to keep pace with the trends in technology - Will technology harm our lives in the time ahead - Discuss whether it is right to produce mechanized humans by fusing technology into human bodies - Discuss whether medical investigations present a risk and the possible means of preventing the risks - Do you think there ought to be a maximum number of medical examinations performed on people?
https://topicsbase.com/controversial-science-topics.html
The invention discloses a device and method for machine visual analysis based on naked-eye stereoscopic display. The device and method can overcome the defects of poor information transmission security and low robustness in the existing machine vision field, and can also carry out naked-eye three-dimensional display over a wide wavelength range. The device comprises naked-eye stereoscopic display equipment, machine visual collection equipment, and machine visual analysis equipment, wherein the naked-eye stereoscopic display equipment is used to display a three-dimensional image to be displayed as required, and the display waveband comprises subsets of invisible light wavebands; the machine visual collection equipment is used to collect the three-dimensional image; and the machine visual analysis equipment is used to carry out the machine visual analysis according to image data collected by the machine visual collection equipment.
Topic: Stained Glass When we have had the stained glass of idealism shattered--whether in families or institutions or simply by facing the realities of life and the suffering it brings--we always have stories for which there are no words adequate to accurately convey them. That's what Leslie Van Gelder says in Weaving a Way Home. I think she's right. Such has been my experience with quite a few of my own personal journeys or mini-journeys. It seems to me that those who fail to understand this phenomenon end up believing that they are the only individuals in the world who cannot adequately convey to others what they have experienced or suffered. Truth is, if Van Gelder is right (and I really believe that she is), we may do well to stop struggling with the idea that we are "the only ones in such pain and isolation." It helps when I remember that, for I can look at others with a different set of eyes and with arms more open to the world, even if I cannot comprehend the myriad of unique stories that I'm half afraid to hear from others at times--or simply cannot fully relate to because I've not been in their unique situations. At times when I am more able to bridge the gap between myself and another person, I find that I am asking questions that invite sharing while having a heightened sensitivity to what others can teach me. I become interested in learning and rejoice in the ways the world is opening up to me as never before.
https://nocolluding.tripod.com/blog/index.blog?topic_id=1096359
How-to Increase an Interview into a Research Paper Not all students have the critical competence already established in one language that will enable a simple bridge into second-language learning. Nor do they all learn in the same way. Consequently, students who are engaged in writing-to-learn will become more successful readers. Schools need to be accredited to be eligible to take part in federal student aid programs. High school is composed of Grades 10-12. Different schools offer various degrees and vary in what they're known for. Many public schools offer pre-kindergarten programs. Teachers develop policies that are applicable to everyone. The truth is, you might discover the publishing procedure is invigorating. You have to attempt to convince your instructor to let you begin the book earlier than the rest of the class. Students are assumed to function as adults if they're in school. Those engaged in reading-to-learn will also be prepared to write well. Students nowadays are more likely to have travelled overseas by the age of 16 and have easy access to a world of information via the internet. Design a personalized chart so that every pupil can record whether they're employing the strategies and rate their effectiveness. Students may also obtain immediate feedback regarding the accuracy of their perceptions, thereby alleviating the problem of over-confidence. Three other students wish to do a project that won't be very challenging and won't earn a very good grade. Each pupil should set specific goals with measurable outcomes. Through literature authored by many creators from other areas of the world, learners are introduced to numerous countries, cultures, and religions. In a simulation, directed by means of a set of parameters, pupils undertake to address issues, adapt to issues arising in their scenario, and gain an awareness of the particular circumstances that exist within the boundaries of the simulation. Students trained in sociology also understand how to help others understand how the social world operates and how it may be altered for the better. Parents are mostly more receptive if the conventional tests to which they're accustomed aren't being removed. Because every kid is different, NAGC recognizes that there isn't any one ideal program for educating gifted students. At-risk Infant or Toddler Term and Definition: At-risk toddler or infant normally means a person under 3 years old who would be in danger of experiencing a significant developmental delay if early intervention services weren't provided to the person. The process of evaluation and approval of designs is dependent on the cost estimates. The procedure is exactly the same when adding numbers that are a couple of digits. Individuals may apply for as many scholarships as they like. The methods and means of design automation vary, depending on the character and aim of the object designed. Critical listening means that you're not just hearing but considering what it is you're hearing. The term doesn't apply to children who are socially maladjusted, unless it's determined they have an emotional disturbance. Summer term extends from the first day after the close of the spring quarter to the day before the start of the autumn quarter. Operational definitions are so specific and objective that they can describe the same behaviour in a variety of settings and at different times, even if different people are observing the behaviour.
All major design companies have their own computer facilities suited to their specific industries. Although there's no single correct method to develop portfolio programs, in all of them students are expected to collect, select, and reflect. Sometimes they need to attend a couple of sessions to obtain a proper insight into the program framework. Programs for teaching self-advocacy skills will need to assess whether the student is really implementing the strategies. Fortunately, there are bridge programs available to help make the transition a lot easier. As stated above, after you begin your programme of study at QMUL, your fee status is extremely unlikely to change. It's never advisable to start a programme of study if you don't have a guaranteed means of paying your tuition fees and living costs for the whole duration of the programme.
http://thetalik.net/2020/03/27/how-to-increase-an-interview-into-a-research-paper/
How can we use social media to understand the past? Learning goals Students will be able to: - Successfully create a pinterest account so that they may search pinterest.com. - Locate three separate credible websites on pinterest.com, twitter.com and wordpress.com that address the period 1200-1400 C.E. - Summarize the content on those three sites. - Correctly link to three sites in the D2L discussion board. - Explain why they found their three websites to be credible. Background Our module "Conversations" will focus on the period from 1200-1400 CE. The conversation part will address how social media shapes our understanding of the history of this period, often called the late middle ages. Our first week we'll work on understanding this period by using social media, and our second week we'll use social media to help others understand this period. Four topics will prove useful to you as you navigate this period: the Mongols, Mansa Musa, the plague (known as the Black Death), and the Renaissance. So, why are we focusing on social media when studying history? Well, for one, social media allows us to practice what we call public history. History written by historians for other historians is very small in scope and in audience. Museums, plaques at public parks, blogs about minor subjects that just happen to fascinate people; these are all public history. A historian's book published by a university press will get maybe 200 copies printed. A good history blog will get at least 200 hits a day, and a good twitter feed can have a million followers. If we think history matters (and I do) we have to be attentive to where it will have the biggest impact. For example, John Green, who is an author and video blogger, has 4.7 million followers on his twitter feed. In many ways public history matters far more than any professional history. So, to begin, you need to get comfortable operating in three social media platforms: pinterest.com, wordpress.com, and twitter.com. Pinterest requires an account to search, but wordpress and twitter don't. WordPress.com is a web log hosting website. WordPress is also blogging software that anyone can use to publish their own site. Lots of historians and history-minded people post their research, analysis, and an odd assortment of material on blogs. For example, consider the wordpress site History Behind Game of Thrones. Or consider this blog post that recounts when a pope wrote a Mongol khan about possibly converting to Christianity (the khan declined). Pinterest is a web log that focuses on images, which the site calls "pins." Twitter is a micro-blogging site. It allows users to post entries of up to 280 characters, with a limited number of pictures. A "feed" is all the tweets from a particular user. I. Assignment - Find a pinterest page, twitter feed, and wordpress blog that are a) credible and b) address a world history subject between 1200-1400 CE. Each media type may focus on a specific country or a particular group. For example, Mansa Musa left a variety of records about his famous hajj. - Post your three links in three different discussion posts, and under each link post a two-sentence summary of the media and a one-sentence evaluation of the credibility of the site. Please review the posts before posting your own and try to avoid duplication. Tips: - To limit a google search to a particular domain, use the site: operator followed by the domain name.
For example, if I wanted to search for "Mansa Musa" only in wordpress blogs, I'd type "Mansa Musa site:wordpress.com" - Twitter uses a particular grammar that takes a while to get used to. Topics are "tagged" with a #. For example, searching for #Mongols will get all the most recent tweets about that topic. People or institutions who are writing on twitter ("tweeting") are tagged with @. So, I'm @historyjack. - To find good historical material on twitter, say on the Mongols, you may need to find good historians of that subject. - Pinterest has the shallowest (least historical) content of these three media types. It has hordes of non-credible sites, which pinterest calls "boards." Be prepared for lots of images that are not attributed (cited). No citations = not credible. - There are many wordpress sites that are hosted by high school or college students. Unless those students have citations for all their work and can demonstrate expertise, those sites are not credible. I note this because many general searches turn up popular and long-standing blog posts that are nonetheless not credible by our standards. - If you want to know who owns a website, use the website easywhois.com. To know what software created a website, plug the URL into builtwith.com. To know what websites link to a website, type in "link:yourwebsite". Knowing who owns a website, how it was built, and who is linking to a website can help you evaluate the credibility of websites. WordPress.com, twitter.com, and pinterest.com won't tell you much, but if you find a self-hosted wordpress site, these tools may be useful. Grading criteria Student: - Located three separate credible websites on pinterest.com, twitter.com and wordpress.com that address the period 1200-1400 C.E. - Summarized the content on those three sites in three paragraphs. - Correctly linked to three sites in the D2L discussion board. - Explained why you found your three websites to be credible in your three paragraphs.
https://jacknorton.org/courses/world-history-1-1101-spring-2019-2/assignments-1101-spring-19/conversations-1-assignments-1101/
Ancient Greece was a very different time and place from our modern world, and the Homeric epics are very different from the movies and books that we watch and read today. This is a foreign country, and a guide is helpful to truly understand its culture and mindset. Professor Elizabeth Vandiver is an excellent guide through the world of the Homeric epics, the Iliad and the Odyssey. She is a captivating lecturer; she weighs in on the scholarly debates on these epics, she brings the ancient Greeks to life, and her enthusiasm for ancient Greek culture is contagious. The Iliad and the Odyssey are engaging character studies, and Professor Vandiver brings out the conflicts and interactions between the various characters in these epics and discusses what they tell us about the ancient Greeks. How does Professor Vandiver bring to life the struggles of the ancient Greeks, their fears, their anxieties, their hopes, their dreams, their joys, their frustrations? Homer shaped Greek culture; to read and recite Homer was, for the Greeks, the essence of being Greek. Scholars believe the Iliad and the Odyssey were first composed as oral tradition, before the introduction of writing, and some scholars believe that the Greek alphabet was developed so these epics could be written down and preserved. How was it that the bards were able to recite from memory epic poetry chanted over several days of a religious festival? What was it like to listen to these bards? The Iliad was about war, and about the warriors and their mighty deeds as they fought the war, but it was also about how war affected Greek culture, how it affected the families and wives and children of the Greeks and Trojans. How different was this ancient warrior culture? The Iliad had a very different type of hero, Achilles, who in his rage against his king decided to sit out the war, until the Trojans fought to the edge of the water and started burning the Greek ships. What does this tell us about our hero, and about the Greeks' conflicting attitudes towards war? The hero of the Odyssey, Odysseus, wily Odysseus, angers Poseidon, the god of the sea, and is forced to wander for many years before returning. All of the Greek heroes are forced to wander for angering the gods with the outrages committed in the sacking of Troy. What were his adventures during these wanderings, and what do they tell us? Odysseus is forced to think on his feet, weaving tall tales to mislead his enemies, much as he came up with the idea of the Trojan Horse to fool the Trojans so the Greek army could sack their city. Odysseus constantly fabricates long and intricate stories about who he is and how he came to be wherever he is at the moment. How do these wily tales affect his adventures? Greek heroes suffer for their acts of hubris, their acts of arrogance, their overreaching that angers the gods. What were these acts of hubris by Achilles in the Iliad, and by Odysseus in the Odyssey? What do they tell us about our heroes, and about Greek culture? Odysseus returns to his native Ithaca in disguise; all the noble men in the city have been courting his wife for the past three years, since they believe that Odysseus has been lost at sea and will never return. How did the prolonged absence of the men King Odysseus brought with him to fight the war, with Odysseus being the only survivor of the long trip home, cause the dysfunction in the society of Ithaca?
How is Odysseus, with only the help of his son and a few faithful servants, able to overcome the hundred-plus suitors who are plotting to kill his son, and Odysseus too, if they see through his disguise? These are intriguing questions that teach us a great deal about the ancient Greeks, and about ourselves, and Professor Vandiver raises many questions we might otherwise miss owing to our unfamiliarity with the Greek world, and she does so in a most interesting and captivating way in her lectures.
http://www.seekingvirtueandwisdom.com/great-courses-iliad-and-odyssey-of-homer/
In El Dorado County, keeping students and schools safe is a top priority for every educational leader and law enforcement agency. Increasingly, families are asking about planning and preparedness for school emergencies, as well as prevention efforts. As a county, we have developed strong partnerships between schools, law enforcement, and school community partners to improve campus and community safety. Within El Dorado County there are fifteen school districts that oversee more than 65 schools, as well as several charter schools. The El Dorado County Sheriff's Office serves the majority of schools in the County, with seven full-time deputies assigned to the School Resource Officer (SRO) program. The SRO deputies respond to calls for service at the high schools located in the County, along with middle and elementary schools. In addition, the City of Placerville Police Department and the South Lake Tahoe Police Department serve the schools in their local jurisdictions. With guidance and technical assistance from law enforcement partners, public schools in El Dorado County have developed school safety plans that are regularly reviewed and updated. Plans include procedures for lockdowns, evacuations, active shooter incidents, wildland fires, earthquakes, and more. Each campus maintains plans that are tailored to its site and address critical needs at that campus. Safety plans include communication protocols for connecting with law enforcement and messaging to families. Plans are formally reviewed on an annual basis to ensure that current safety protocols are addressed. Fire and lockdown drills are scheduled throughout the school year to ensure students and staff are familiar with what is expected of them in the event of an emergency. And, for the past several years, law enforcement and other public agencies have held active shooter drills on school campuses to familiarize themselves with campus layouts and school safety plans. All schools strive to create a healthy school climate where students feel welcomed and connected. With funding recently provided by the Mental Health Student Services Act Grant, the El Dorado County Office of Education and community partners will begin to expand access to mental health services for children and youth, including countywide student assessments and campus-based mental health services. It will also allow schools to connect families to ongoing mental health services with local agencies when needed. As an additional prevention strategy, we are developing School Threat Assessment Teams at several sites, whose members will be trained to recognize potential threats to schools and students. These multidisciplinary teams will include school officials, law enforcement, mental health professionals, and others who work together to evaluate situations and intervene when necessary to connect students to mental health supports and other assistance as needed. And finally, it is important to acknowledge the role of students and families in looking out for one another and creating safer schools. They may be among the first to recognize warning signs of someone at risk of hurting themselves or others. We encourage students and families to speak to a teacher, counselor, SRO, or another trusted adult to get help. In summary, we would like to assure the community that we have protective systems in place, that we are prepared to respond, and that we will continue to be proactive in our prevention efforts to keep our schools safe and secure. Respectfully,
https://edcoe.org/spotlight/a-commitment-to-safe-schools-in-el-dorado-county-1654813817
I'm doing my best to make my eye patch look good, but I think eye patches are just one of those accessories that don't work on anyone. I was bending into the dark abyss behind the front door, under the coat rack overloaded with bags, where the stroller is kept out of the rain, shoes pile up, dog hair clumps, and the broom and dust pan are stored, searching for my shoes. I didn't see the broken handle of the dust pan, and it went right into my eye. I didn't drop the baby, but I did enhance her vocabulary with some colorful expletives. Nicole, my hero of a sister-in-law, came to my rescue. I suppose I will forever be grateful that she insisted I get a cell phone, as I used it to call her. I used my big-girl calm voice to brief her on the situation, and she was coming through the gate in less than five minutes. A few minutes more and my brother arrived to take me to urgent care in the next town over. Less than thirty minutes passed and I had an eye patch and a prescription for anti-inflammatory eye drops. It hurts. I'm sure I'll have a black eye in the morning, but it may take me a few days to really get the pirate jargon perfected. Arg.
https://www.erinparkerphoto.com/blog/2014/7/27/how-i-became-a-pirate
Date of Experiment: 20th November, 2004. Analysis of Commercial Vitamin C Tablets. Aim: To employ iodometric titration to determine the content of vitamin C in commercial tablets using volumetric analysis and to compare it with the manufacturers' specifications. Introduction: Vitamin C is an essential substance for maintaining good health, and it has been proved to be the agent that prevents scurvy. Most animals can synthesize their own vitamin C, but some, such as humans, cannot. Owing to increasing concern for health since the last century, vitamin C tablets have become one of the most popular supplements to normal diets. In this experiment, the vitamin C content of a commercial tablet is determined and compared with the manufacturers' specification. Vitamin C is water-soluble and is the L-enantiomer of ascorbic acid. (Commercial vitamin C is often a mixture of ascorbic acid and other ascorbates.) Ascorbic acid, C6H8O6, is a reducing agent that reacts rapidly with iodine (I2) in acidic medium to produce iodide ions (I-) and dehydroascorbic acid, as shown in the following equation: C6H8O6(aq) + I2(aq) -----------> C6H6O6(aq) + 2H+(aq) + 2I-(aq) (ascorbic acid to dehydroascorbic acid). However, since iodine is only slightly soluble in water, ascorbic acid should not be titrated directly with a standard iodine solution, since the end point of the titration is not so obvious. [...] 4. The solution in the volumetric flask was made up to 250 cm3 and the flask was shaken gently. A portion of the vitamin C solution was poured out from the flask into a dry and clean 100 cm3 beaker. 5. A 25.00 cm3 pipette was first rinsed with distilled water and then with the vitamin C solution. 6. 25.00 cm3 of the vitamin C solution was pipetted from the 100 cm3 beaker into a clean conical flask. 7. 5 cm3 of 1.0 M potassium iodide solution was added to the vitamin C solution in the conical flask using a 10 cm3 measuring cylinder. 8. Lastly, 25.00 cm3 of the previously prepared standard potassium iodate(V) solution was transferred to the same conical flask. 9. The solution was immediately titrated with the sodium thiosulphate solution in the burette, just as in part (B), steps 9-13. 10. The volume of sodium thiosulphate used in each titration was recorded and the average volume was calculated. Results and Calculations: Mass of weighing bottle and potassium iodate(V): 4.674 g. Mass of weighing bottle: 4.000 g. Mass of potassium iodate(V) weighed: 0.674 g. 0.674 g ÷ (39.1 + 127 + 16x3) g mol-1 = 3.148 x 10-3 mol. Concentration of the prepared standard potassium iodate(V) solution: 3.148 x 10-3 mol ÷ 0.25 dm3 = 0.0126 mol dm-3. Table 1 (for part B): [...] Conclusion: It is believed that the volume of sodium thiosulphate used for each titration in part C will be greater, since the vitamin C content decreases upon exposure to air. 6) Cooking means heating or boiling the food. When vegetables are cooked, the vitamin C they contain is heated vigorously. Since boiling temperatures destroy vitamin C, the amount of vitamin C in the vegetables will definitely be reduced by cooking. Further Discussion: (i) Acidification of the vitamin C sample also serves to stabilize the ascorbic acid, which will otherwise decompose and be undetectable. (ii) As stated in the introduction, iodine has a limited solubility in water. It dissolves well in a solution of potassium iodide only because it reacts with I- to form the very soluble red-brown complex, the triiodide ion, I3-.
It should be remembered that the iodine generated from the redox reaction of iodide and iodate is actually present in the form of triiodide ions when excess KI is present, owing to the equilibrium I2 + I- ⇌ I3-. (iii) Ascorbic acid can undergo air oxidation, requiring that the procedure be performed with minimal delay. (iv) The structure of ascorbic acid (centered around a five-membered ring of four carbons and one oxygen atom) includes two adjacent alcohol (OH) functional groups.
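The back-titration arithmetic in this report lends itself to a short worked example. The Python sketch below reproduces the iodate calculation above and then computes a vitamin C mass from the stoichiometry IO3- + 5I- + 6H+ -> 3I2 + 3H2O, I2 + 2S2O3(2-) -> 2I- + S4O6(2-), and the 1:1 ascorbic acid/iodine reaction. The thiosulphate concentration and titre are placeholders, since the report's actual Table 1 data are not reproduced in this extract.

```python
# Back-titration arithmetic for the vitamin C determination (a sketch).
# The thiosulphate concentration and titre below are placeholders, not
# the report's actual Table 1 data.

M_KIO3 = 39.1 + 127.0 + 3 * 16.0       # g/mol, as in the report's calculation
M_ASCORBIC_ACID = 176.1                # g/mol, C6H8O6

mass_kio3 = 0.674                      # g weighed out (from the report)
n_kio3 = mass_kio3 / M_KIO3            # mol, ~3.148e-3
c_kio3 = n_kio3 / 0.250                # mol/dm3 in the 250 cm3 flask, ~0.0126

# Each 25.00 cm3 aliquot of standard iodate liberates iodine from excess KI:
#   IO3- + 5I- + 6H+ -> 3I2 + 3H2O
n_i2_generated = 3 * c_kio3 * 0.02500  # mol I2

# Iodine not consumed by vitamin C is titrated with thiosulphate:
#   I2 + 2 S2O3(2-) -> 2I- + S4O6(2-)
c_thio = 0.05                          # mol/dm3 (placeholder standardisation)
v_thio = 0.02500                       # dm3 (placeholder average titre)
n_i2_excess = c_thio * v_thio / 2

# Ascorbic acid reacts 1:1 with iodine, so the shortfall is the vitamin C
# in the aliquot; scale by 250/25 for the whole tablet solution.
n_vit_c = (n_i2_generated - n_i2_excess) * (250.0 / 25.0)
print(f"vitamin C in the tablet sample: {n_vit_c * M_ASCORBIC_ACID * 1000:.0f} mg")
```

With these placeholder numbers the sketch reports roughly 560 mg of ascorbic acid; substituting the real average titre from Table 1 would give the figure to compare against the manufacturer's specification.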
https://www.markedbyteachers.com/gcse/science/to-employ-iodometric-titration-to-determine-the-content-of-vitamin-c-in-commercial-tablets-using-volumetric-analysis-and-compares-it-with-the-manufacturers-specifications.html
Founded in 1907, Plekhanov Russian University of Economics opened its Ust-Kamenogorsk branch in 2001 within the framework of cooperation between Kazakhstan and Russia in the sphere of education and science. Students of the Ust-Kamenogorsk branch are actively involved in regional, republican, and international conferences, Olympiads, competitions, and projects. Future lawyers are actively involved in the work of the Student Legal Clinics, providing legal support to the poor. Students of the specialty "Applied Informatics" participate in the "1C rating" and IT-technology conferences. Future economists take part in auctions, games, the Exchange simulator, and GMS. On the strength of these results, students of the branch are invited to internships at such major companies as JSC "CentroCredit Bank", JSC "Freedom Finance", JSC "Sberbank", and JSC "ForteBank". Experienced teachers - Doctors and Candidates of Sciences - train competitive specialists who are in demand in the labour market of the Republic of Kazakhstan and abroad. The Ust-Kamenogorsk branch is a dialogue platform for discussing Eurasian integration with the participating countries and for presenting the positive experience of the Assembly of People of Kazakhstan abroad. Students of the branch lead an active student life. Every year the most active students of the branch travel to Moscow to participate in the festival of the branches, where the most active, successful, and creative young people gather. Students also take part in the activities of the city and region. Annual field trips to the "Horse Yard", the "Altai Alps", and Sibiny are a branch tradition. The branch enables the youth of Kazakhstan to receive a Russian education on the territory of their own republic and contributes to the development and strengthening of the partnership and friendly relations between Kazakhstan and Russia.
https://www.rea.ru/en/org/branches/ust-kamenogorskiy-branch/Pages/about.aspx
Welcome to Million Expressions! We design creative art-based activities integrated with Social Emotional Learning that promote self-expression in children. Our Areas of Focus Self-Awareness Help kids identify and be aware of their own feelings, emotions, and thoughts, and how these influence their behaviour. Creative Intelligence Encourage kids to "think out of the box", be creative, and develop their enthusiasm for learning. Self-Confidence Emphasizes building kids' confidence and self-esteem so that they feel more valued and worthy. Self-Management Promotes a growth mindset in kids and helps them manage their thoughts and emotions in a positive way. Problem-Solving Skills Help kids identify and evaluate a problem, think aloud, and come up with possible solutions. Interpersonal Skills Values the power of communication and helps kids maintain healthy and rewarding relationships with themselves and with others.
https://millionexpressions.com/
DECORATION, the use of consciously designed patterns to embellish building surfaces and objects for aesthetic effect, one of the most characteristic features of art and architecture in Islamic Persia. Both the quantity and quality of surface patterns attest the esteem they enjoyed among artists and patrons alike. Despite the obvious importance of decorative or ornamental schemes in Persian Islamic art, few attempts have been made to deal with this phenomenon in a comprehensive way. During the 19th century such authors as Alois Riegl, Owen Jones, and Oscar Wilde used examples from Persian and other Islamic art in their attempts to explain the appeal of decorative or ornamental patterns in aesthetic, psychological, and even physiological terms (Gombrich, pp. 51-59). Riegl sought, through studying the internal formal evolution of decorative forms, to integrate patterns from Persian carpets with their ancient, and particularly classical, antecedents; his work provided a model for later studies by Ernst Kühnel and Maurice Dimand (Gombrich, pp. 180-90; Grabar, p. 39). Despite progress in identifying or classifying the features of Persian decorative patterns, however, few scholars have attempted to explain why particular designs were used in specific periods, regions, or circumstances, even though it can be observed that in a given area or epoch the form and character of ornament are often consistent within a particular craft and sometimes even among different media, despite the varied techniques in which they are executed. Such consistency raises the questions how these clearly differentiated vocabularies of ornament arose, why they were consciously perpetuated, and whether or not certain types of ornament conveyed specific meanings or general moods to an observer. The introduction of new decorative modes or techniques often followed such major historical shifts as the Islamic conquest in the mid-7th century, the Mongol invasion in the first half of the 13th century, and the establishment of European trading companies in the 17th century. Nevertheless, in a broad sense the ornamental tradition of Islamic Persia was pluralistic and cumulative. A newly introduced feature might acquire its own distinctive niche within the existing repertoire and be used in conjunction with previously established types of decoration, each of which retained its visual identity, thus contributing to a distinctive historical rhythm of episodic innovation against a stable background. Individual decorative elements can often be traced over several centuries during which time their appearance shows only minor variations. In such a tradition ornament became both a vehicle of continuity and the source of subtle variations on familiar themes. The persistence of distinct visual categories over long periods may have been related to a broader cultural appreciation of normative structures also apparent in Persian literature, in which poetic forms or vocabularies of imagery were repeated, with minor variations, sometimes for centuries (Yarshater, pp. 18-20). Despite this conservatism, patterns and designs may be classified not only typologically but also geographically and chronologically. Consequently, specific designs can be diagnostic of historical epochs and regional divisions, as well as indicating transfer of decorative themes from one medium or region to another. 
A systematic investigation of the history and use of decoration in Persia should thus provide insight into a variety of economic and social factors for which written documentation is often scanty, including the training, organization, and migration of craftsmen and the relative economic importance or social status of various crafts. In this article, the historical development of the Persian ornamental repertoire will be surveyed, with the purpose of providing a foundation for addressing these more general questions. Treatment of this development will be divided into two basic epochs, from the Islamic conquest to the Mongol invasion and from the latter to the mid-19th century. Although there was considerable continuity between these two epochs, Mongol rule brought to a close a period of gradual internal artistic evolution and opened an era in which change was increasingly stimulated by the importation of foreign decorative themes and techniques, often at the instigation of the ruling dynasty. From the advent of Islam to the Mongol conquest, ca. 750-1250. This period can be subdivided into two phases. In the first, approximately from 750 to 1050, a distinctive artistic culture, in which pre-Islamic and Islamic elements were fused, developed in Persia. This process was centered in the east, especially Khorasan and Transoxania. In the second phase, from 1050 to about 1250, the cultural center of gravity shifted westward, first to central and then to northwestern Persia, even though the east retained considerable artistic vigor until the Mongol invasion. Phase 1. The political fusion of former Sasanian territories with those of the city-states of Khorasan and Transoxania under Islamic rule created a new cultural region with a mixed legacy of artistic traditions from both areas, as well as from more distant Asian regions like India and China. Moreover, as this region was tied administratively to Iraq, it was also affected by trends that developed there, particularly in Baṣra and Baghdad. For the first Islamic centuries the organization and content of decoration can be established through examination of metalwork, ceramics, and architecture. The Sasanian practice of putting royal portraiture on coinage and metalwork almost ceased, but such other royal emblems as the mythical bird Sīmoṟḡ and birds and animals bearing ribbons or garlands became major decorative themes, appearing, often in roundels, on metalwork, ceramics, textiles, and even architecture of Islamic date (Harper, pp. 16-19). A small gold ewer bearing the titles of the Buyid Abū Manṣūr (ʿEzz-al-Dawla) Amīr Baḵtīār (356-67/967-78) and decorated with roundels suggests that the synthesis of Sasanian themes, Sogdian techniques, and Arabic inscriptions characteristic of eastern Persian metalwork was also popular in western Persia and Iraq (Lowry; Plate XI). An amalgamation of pre-Islamic and Islamic decorative features is also evident in a group of slip-painted ceramics from 10th- and 11th-century Khorasan, especially those painted in black with touches of red on a pure white ground. On the most impressive examples designs echoing the vegetal ornament of Sogdian metalwork are combined with Arabic inscriptions (Raby, pp. 187-99, figs. 12, 18, 20; Plate XII). Those inscriptions range from wishes of good fortune or good health for an anonymous owner to edifying aphorisms and proverbs and even Hadith (Shishkina and Pavchinskaya, pp. 53-56). 
The practice of using inscriptions as the principal embellishment on ceramics appears to have originated at Baṣra in Iraq during the 9th century, possibly with the support of the ʿAbbasid caliphs; one potter signed as the caliph’s craftsman (ṣāneʿ amīr-al-moʾmenīn). It was, however, more fully refined in the wares of Khorasan and Transoxania (Qūčānī, pp. 94-95; Keall and Mason). Phase 2. Persian art and architecture from the mid-11th to the mid-13th century are notable for the intricacy and elaboration of geometric, calligraphic, and vegetal ornament. Although the same categories of decoration were used throughout Persia, there were regional differences in their application, particularly on buildings. In Khorasan, Transoxania, and Sīstān attention was focused on the exteriors of buildings. Portals, minarets, and entire facades were framed or articulated with contrasting areas of geometric and calligraphic ornament (Hutt and Harrow, pls. 10, 14, 64-66, 76-78, 80-82; Pope, pp. 96-98). In northeastern Persia the outer walls of tombs were articulated with niches and often covered with a decorative veneer of intricate geometric patterning executed in brickwork or unglazed and glazed terracotta strips (Hutt and Harrow, pls. 12, 60-61, 126-27; Seherr-Thoss, pp. 74-85). In central Persia mosques were often left virtually unadorned, except for inscription bands around the bases of domes or carved stucco ornament, often of great intricacy, on prayer niches (meḥrāb; Pope, pp. 106-29, 146-62). This tradition of stucco ornament was probably ultimately derived from the undulating grapevines used in ʿAbbasid Iraq, but in 12th-century examples the organic unity of the individual leaf was nearly lost in the lacy network of geometric units that covered the surface, creating patterns within patterns (Plate XIII). Paradoxically, as individual elements of vegetation became more abstract, the vines to which they were attached were endowed with ever greater energy, being woven together to create a dense network on several levels (Shani, pp. 67-74). This interweaving of two or more distinct strands of vegetation in a composition on multiple levels, in which individual forms and structures are complementary, was widely used in later centuries, especially on carpets and polychrome ceramic revetments. A taste for intricate decoration is also evident in 12th- and 13th-century metalwork from Khorasan, where bronze or brass was inlaid with silver and copper in figural, vegetal, calligraphic, and geometric patterns. The inclusion of symbols for heavenly bodies, the sun, moon, planets, and constellations of the zodiac, underscores the link between metal vessels and cosmological themes (Melikian Chirvani, 1982, pp. 55-135). Some of the finest pieces bear inscriptions stating that they were made in Herat, and some are signed by more than one craftsman. The most important craftsman was probably the naqqāš, or designer, who evidently planned and executed the inlaid decoration (Ettinghausen, 1943, pp. 193-99). The city of Kāšān achieved preeminence in ceramic production during the 12th and 13th centuries; both tableware and architectural revetments were produced there in several decorative techniques, including molding and underglaze and overglaze painting (Ettinghausen, 1936). Luster-painted tiles and tablewares from Kāšān exhibit a wide decorative repertoire and were highly prized and widely exported.
Some are ornamented with intertwined arabesques, vine patterns in which stems and leaves grow one from the other, resembling those in stucco carving; others resemble inlaid metalwork in the prominence of inscription bands and geometric schemes. Most striking, however, are the depictions of courtly life: enthroned figures with attendants, retinues of horsemen, or couples conversing (Watson, pp. 45-109 and passim). The inscriptions that are such a prominent feature of both tiles and vessels are also varied. Although koranic quotations occur only on architectural revetments, poetry, some of it composed by the potters themselves, appears on both tiles and tableware. Modern commentators have often pointed out the lack of correspondence between the themes of the poetry and the scenes depicted (e.g., a tile with wrestlers inscribed with verses about a hunt from the Šāh-nāma; Watson, pp. 122-31, 146-56; Bahrami, pp. 75-81, 90-95, 114-22, 126-30). Several of the craftsmen responsible for the decoration of these objects signed with the epithet naqqāš (Watson, pp. 180-81). The most elaborate compositions on polychrome wares, for example, the scene of the siege of a fortification on a platter or a continuous narrative drawn from the Persian national epic on a beaker, both in the Freer Gallery of Art, Washington, D.C., suggest a link between the designers of ceramic vessels and the artists who executed wall paintings or manuscript illustrations (Simpson, pp. 15-24). From the Mongol invasion to ca. 1850. This period can also be divided into two phases. The first, approximately from 1250 to 1650, was characterized by successive links to the artistic traditions of China. The second, approximately from 1650 to 1850, was marked by a fascination with things European, known initially via India, then directly through European contacts. An intrinsic conservatism in the artistic process slowed the pace of change, and the degree of change varied from medium to medium. Nevertheless, after ca. 1250 innovation was primarily stimulated by foreign taste and imported techniques. Furthermore, one addition to the decorative repertoire was often followed by others from the same source, so that both the sinicization and europeanization of Persian taste were incremental processes. At the same time, however, the new elements were as much assimilated as imitated, creating hybrid Sino-Persian and Euro-Persian decorative idioms. Phase 1. In the 13th and early 14th centuries the formulation of a new decorative vocabulary was accompanied by a change in the structure of patronage fostered by the Mongol conquest. During the first Islamic centuries Persian art appears to have rested largely in the hands of individual urban craftsmen who learned and transmitted their skills within an established artisan tradition. Beginning in the Il-khanid period (654-754/1256-1353), the initiative seems to have shifted gradually to various courts. Members of these courts began to participate in the design and production of art and architecture, though the degree and character of court-sponsored artistic production appears to have varied from one dynasty or ruler to another and certain crafts were more affected than others. In general, however, the transfer of design or production of crafts to a court appears to have fostered a harmonization of designs among various media, a development probably dependent on the primary role of the naqqāš, or painter-decorator, in creating patterns to be executed in various media.
Over time a court atelier could build up an archive of patterns and designs, thus providing for continuity between generations of artists or even, in periods of political turmoil, from one court or dynasty to another. These court repositories may also have included objects of foreign origin. When court-based design and the importation of foreign taste and techniques coincided, the impact of a given innovation was thereby multiplied. The initial Mongol invasion brought a virtual cessation of artistic production and architectural patronage from the 1220s to the 1260s, but after the consolidation of Il-khanid control the less devastated areas in central and western Persia began to revive. Structures were repaired and new building projects begun, particularly after the conversion of Ḡāzān Khan (694-703/1295-1304) to Islam in 694/1295. Extensive use of glazed-ceramic revetments was an innovation of the period, and carved or molded plaster ornament reached a new level of elaboration, though in both media the patterns continued pre-Mongol traditions (Wilber, pp. 79-87; Plate XIV). The Kāšān ceramic workshops also resumed production, initially returning to their familiar decorative repertoire; in the 1270s new themes of Chinese origin were introduced on luster-painted tiles manufactured for the palace of Abaqa Khan (663-80/1265-82) at Taḵt-e Solaymān in Azerbaijan: the dragon, the phoenix, the crane, the deer, the lotus, and distinctive cloud forms and floral motifs (Naumann, pp. 80-98; Watson, pp. 131-49, 190-91; cf. EIr. V, p. 320 pl. XXIX). The absorption of these themes into the Persian decorative repertoire was selective and gradual. Most immediately popular was the lotus, which appears in several distinct configurations: as an isolated blossom, a floral spray set within a polylobed frame, or alternating with a six-petaled flower or trilobed buds attached to a vine. Typically the lotus appears in a distinct and often inconspicuous zone within an ensemble that otherwise continues local pre-Mongol traditions (Baer, pp. 15-16 figs. 9, 11a-11b, 13). The longevity of pre-Mongol decorative schemes is well illustrated by a silk textile, now in the Erzbischöflichen Dom- und Diozesanmuseum, Vienna, bearing the name and titles of Abū Saʿīd (717-36/1317-35) on which the field decoration bears a strong resemblance to pre-Mongol metalwork of Khorasan (Wardwell, pp. 108-11 figs. 45-46). The advent of the Timurids (771-912/1370-1506) marked a new stage both in the development of court-based artistic production and in the assimilation of Chinese decorative themes to Persian taste. Many 15th-century designs were also widely used in the Safavid period (907-1145/1501-1732). Although initially Tīmūr had hoped to add China to his empire, his successors were content to cultivate commercial links and diplomatic exchanges, in order to procure coveted goods from China. Fortunately, the formative stages of Timurid taste coincided with the reign of the second Ming emperor, Yung-lo (1398-1424), who actively promoted contacts with the Near East, a policy blocked by his immediate successor but revived on a limited scale by the fourth emperor, Hsuan-te (1425-35), before it was definitively abandoned by his successors (Hok-Lam, pp. 232-36, 301-03). By 840/1435, however, a sufficient quantity of Chinese goods had already reached Persia to permit the unimpeded progress of a second, broader phase of sinicization. 
Chinese silks, porcelain, paper, and other goods had a profound impact on the decorative traditions of Persia, but once again the adoption of new designs was gradual and highly selective (Crowe, pp. 168-78). In 15th-century manuscripts Chinese blue-and-white ceramics are often depicted in use, and imitations were made in 15th-century Mašhad and during the Safavid era in several regions of the country. From the 15th to the 17th century Persian blue-and-white ceramic vessels emulated the forms and decoration of late 14th- and early 15th-century Ming wares (Bailey, pp. 179-90; Mason and Golombek, pp. 465-74; Rogers, pp. 122-23, 127-29). Adaptation and absorption of Chinese designs continued on several fronts. The lotus scroll became a vine, thus emulating the arabesque, with which it was often contrasted or intertwined; the two elements in such combinations were called by 15th-century authors ḵatāʾī and eslīmī respectively (O’Kane, 1992, pp. 76-78, pl. 14) and emerged as major features of tile revetments in the Timurid and Safavid periods. In the 15th century they were often executed in cut-tile mosaic as focal points in larger ensembles, in which large areas of wall surface were covered with revetments simulating ornamental brickwork, known as bannāʾī decoration. Simple geometric designs and pious phrases in square Kufic script (cf. EIr. IV, pp. 686-88 figs. 43-46) were widely used on Timurid bannāʾī panels (O’Kane, 1987, pp. 59-78; Golombek and Wilber, I, pp. 117-36). In Safavid architecture, however, Sino-Persian vegetal ornament is clearly dominant. Large areas on the surfaces of major religious monuments, including the exteriors of domes, were covered with painted tiles decorated with intricate networks of vegetation on several levels (Scarce, pp. 282-86; Hutt and Harrow, pls. 40, 51, 64-65, 69, 91). Decorative schemes incorporating three or even four systems of interwoven eslīmī and ḵatāʾī fill the main fields in some 16th- or 17th-century carpets (Ettinghausen, 1979, pp. 18-19 figs. 19-24). In another decorative scheme of Chinese inspiration elaborate versions of lotus or peony blossoms were combined with plume-like leaves with serrated edges, in order to create a clump or scroll often inhabited by birds, dragons, or other creatures of Chinese derivation. This decorative theme was widely used in Ottoman court design, where it was known as sāzqalamī (reed-pen style), beginning in the 1520s; it was associated there with a painter from Tabrīz known as Šāhqolī (Denny, pp. 103-06; Necipoglu, pp. 148-54). In Safavid Persia this decorative vocabulary was most often used in ḥall-kārī (lit., “pulverized work”), a type of manuscript illumination in which finely ground gold or silver particles suspended in a solution of glue and water were used as a painting medium (Ṣādeqī Beg, pp. 40, 74 ll. 95-96; Dickson and Welch, I, p. 264; Rogers, pp. 31-32, 123-24; Plate XV, central panel). Patterns in this style were probably also used in other contexts at the Safavid court; in a manuscript of the Šāh-nāma copied for Shah Ṭahmāsb (930-84/1524-76) they are depicted in both wall paintings and throne decoration, and they can be found on a ruby-and-turquoise-encrusted gold vessel, apparently of Persian manufacture, now in the Art Museum of Georgia at Tiflis (Dickson and Welch, II, pls. 14, 16, 52; Javakhishvili and Abramishvili, pl. 216). 
During the 17th century the elaborate blossoms and feathery leaves of the ḥall-kārī repertoire were transformed into a continuous vine and used for panels of wall decoration, as well as field designs for carpets (Scarce, pp. 286-90; Beattie, pp. 27, 50-56). Yet another decorative repertoire with a Chinese pedigree that became prominent in the 15th century was an idealized landscape, in which features of the garden and the royal hunting preserve were combined; it is inhabited by both mythological creatures like the dragon and phoenix and more familiar birds and animals (Plate XV, margin). Frequently they are locked in combat with each other or with human figures (Aslanapa, pp. 59-91). In these settings creatures of Chinese origin are integrated into an indigenous scheme centered on the clash of predator and prey, a combination that in the late 16th century Ṣādeqī Beg Afšār (pp. 45, 76 ll. 120-21; Dickson and Welch, I, p. 265) identified as gereft o gīr (lit., “caught and catch”). Despite the theme of conflict, this Chinese hunting preserve was very popular on various media from the 15th to the 17th century and sometimes appears to have acquired paradisiac connotations (Soucek, pp. 7-13). It appears frequently in wall paintings depicted in 15th-century manuscripts, as well as in decorative ensembles of the Safavid period (Lentz and Lowry, pp. 182-83, 191-99; Luschey-Schmeisser). Just as the Sino-Persian repertoire reached a peak of popularity during the early 17th century a new design vocabulary connected with plants and gardens appeared. It, too, consisted of several distinct yet interdependent modes: the individual flowering plant, the flower-filled trellis, a flowering plant with a bird or butterfly or both, and a miniature garden with flowers and birds. The European source of all these motifs is apparent in the use of modeling and shading to suggest a third dimension, but each was also adapted to Persian taste in a hybrid decorative idiom. Phase 2. The historical coincidence of the reign of Shah ʿAbbās I (996-1038/1588-1629) with a period of European economic expansion was catalytic for the development of Euro-Persian decoration. Eager to expand the markets for Persian silk, over which he had a monopoly, the shah sought the cooperation of Armenian merchants, traditionally active in the silk trade, and concluded agreements with various European groups. In order to finance their purchases of silk, both Armenians and Europeans sold imported goods in Persia, particularly European and Indian textiles. This trade was particularly intense during the middle decades of the 17th century, when the effective demise of the Persian state monopoly allowed Armenian and European merchants greater freedom in procuring and selling goods. In time this large-scale importation of foreign goods would undermine the position of traditional Persian artisans, who found it increasingly difficult to compete against them. Initially, however, the new goods stimulated Persian craftsmen to new accomplishments. The new decorative vocabulary appeared in different contexts. Luxury textiles and lacquer-painted bookbindings and objects can be connected with the taste of Persian rulers and their close associates, but the inclusion of these new floral designs on carpets and ceramics probably reflects a broader popularity, stimulated by familiarity with both European and Indian goods. European modes of drawing clumps of plants entered the repertoire of artists at the Mughal court and appear in many different materials. 
The flower-filled lattice was also widely used in Mughal art and architecture (Skelton, pp. 42-45, 67-69, 75-76, 78-81, 83-90). In Persia some ceramic and carpet decoration blends elements from both the Sino-Persian and Euro-Persian modes. For example, luster-painted vessels with miniature landscapes, often attributed to mid-17th-century Isfahan, incorporate not only the traditional repertoire of animals, birds, trees, rocks, and pools from the Chinese landscape but also oversized clumps of iris from the new vocabulary of Euro-Persian ornament (Lane, pp. 102-04; Watson, pp. 163-69). A similar insertion of oversized europeanizing flowers into the traditional theme of a Sino-Persian garden is found on some blue-and-white ceramic vessels and in wall paintings at the Čehel Sotūn at Isfahan, where a traditional Chinese hunting park was painted over with large bird-and-flower paintings in a modeled style (Allen, pp. 58-59; Gray, pp. 324-26 fig. 220). A full gamut of designs ranging from purely Sino-Persian to completely Euro-Persian appears on carpets attributed to Kermān in the 16th to 18th centuries (see CARPETS ix-x). In the most conservative schemes only two Sino-Persian designs, ḵatāʾī and sāz scrolls and a stylized garden with animal combats, appear (Housego, pp. 118-23; Beattie, pp. 33-39). In others horizontal rows of flowering plants are set within a network of intertwined ḵatāʾī on several levels (Beattie, p. 73 no. 47). More common, however, are carpets with designs characteristic of 17th-century Mughal taste, with staggered horizontal rows of plants or a plant-filled lattice (Beattie, pp. 48-49, 80-81 nos. 12-14, 55-57). Despite this wide diffusion of Euro-Persian decoration, specific types were linked to court circles. For example, even though the practice of arranging flowers in a lattice frame was probably known in 17th-century Persia, its subsequent popularity is often linked to Nāder Shah Afšār (1148-60/1736-47), not only because he brought back considerable booty from his Indian campaign but also because the scheme was used in the decoration of his palace. In Shiraz under the Zand dynasty (1163-1209/1750-94) the theme remained popular for carved stone revetments, tilework, and textiles (Housego, pp. 130-34). Similarly, an enthusiasm for “bird and flower” decoration is often associated with the court painter Šafīʿ ʿAbbāsī, who was active during the middle decades of the 17th century and who designed both textiles and album paintings. Works attributed to him often show a single plant around which a bird or butterfly hovers (Welch, pp. 90-91, 99-100 nos. 58, 64; Bier, pp. 174-75 nos. 18-20). The most influential variant of this theme, used in lacquer painting, was one in which flowers of different species, often with one or more birds, are grouped in a dense cluster. By the 1670s painters at the Safavid court were decorating objects with such designs, which continued to be common during the 18th and 19th centuries (Plate XVI). Sometimes there is only a single clump of flowers, but in other examples blossoming plants are linked in a vine spray or grouped in a miniature garden, the latter often with a singing nightingale silhouetted against a full-blown rose, hence the appellation gol o bolbol (rose and nightingale; Diba, pp. 244-45, 252 figs. 2, 11; Robinson, pp. 177-79 figs. 157, 160, 167-69). Even as the new hybrid forms of Euro-Persian design grew more prominent in court circles, the older traditions of vegetal, geometric, and calligraphic ornament remained in use.
The latter two types predominated in the decoration of Qajar religious architecture, and the arabesque was frequently engraved on metalwork during the 17th-19th centuries (Melikian Chirvani, 1982, pp. 260-355; idem, 1983, pp. 311-32; Scarce, pp. 290-94). As late as the 19th century the Persian decorative repertoire retained its characteristic diversity, with new elements added and many earlier ones continuing. This accumulated heritage furnished inspiration for various revivals of Persian artistic and handicraft traditions in the later 19th and 20th centuries. L. Ainy, Central Asian Art of Avicenna Epoch, Dushanbe, 1980. J. Allen, Islamic Ceramics, Oxford, 1991. O. Aslanapa, “The Art of Bookbinding,” in B. Gray, ed., The Arts of the Book in Central Asia, Paris, 1979, pp. 59-91. E. Baer, “The Nisan Tasi. A Study in Persian-Mongol Metal Ware,” Kunst des Orients 9, 1973-74, pp. 1-46. M. Bahrami, Gurgan Faïences, Cairo, 1949. G. A. Bailey, “The Dynamics of Chinoiserie in Timurid and Early Safavid Ceramics,” in L. Golombek and M. Subtelny, eds., Timurid Art and Culture. Iran and Central Asia in the Fifteenth Century, Leiden, 1992, pp. 179-90. M. H. Beattie, Carpets of Central Persia, Westerham, Kent, 1976, pp. 27, 50-56. A. M. Belenizki, Mittelasien. Kunst der Sogden, Leipzig, 1980. C. Bier, ed., Woven from the Soul, Spun from the Heart, Washington, D.C., 1987. K. A. C. Creswell, Early Muslim Architecture II, Oxford, 1940. Y. Crowe, “Some Timurid Designs and Their Far Eastern Connections,” in L. Golombek and M. Subtelny, eds., Timurid Art and Culture. Iran and Central Asia in the Fifteenth Century, Leiden, 1992, pp. 168-78. W. Denny, “Dating Ottoman Turkish Works in the Saz Style,” Muqarnas 1, 1983, pp. 103-21. M. B. Dickson and S. C. Welch, The Houghton Shah-nameh, Cambridge, 1981. R. Ettinghausen, “Evidence for the Identification of Kāshān Pottery,” Ars Islamica 3, 1936, pp. 44-76. Idem, “The Bobrinsky ‘Kettle.’ Patron and Style of an Islamic Bronze,” Gazette des Beaux-Arts 24, 1943, pp. 193-208. Idem, “The Taming of the Horror Vacui in Islamic Art,” Proceedings of the American Philosophical Society 123, 1979, pp. 15-28. E. H. Gombrich, The Sense of Order, Oxford, 1979. O. Grabar, The Mediation of Ornament, Washington, D.C., 1992. B. Gray, “The Tradition of Wall Painting in Iran,” in R. Ettinghausen and E. Yarshater, eds., Highlights of Persian Art, Boulder, Colo., 1979, pp. 313-29. P. O. Harper, The Royal Hunter, New York, 1978. Hok-Lam Chan, “The Chien-wen, Yung-lo, Hung-hsi, and Hsuan-te Reigns,” The Cambridge History of China VII. The Ming Dynasty, 1368-1644, pt. 1, ed. F. W. Mote and D. Twitchett, Cambridge, 1988, pp. 128-204. J. Housego, “Carpets,” in R. W. Ferrier, ed., The Arts of Persia, New Haven, Conn., 1989, pp. 118-49. A. Hutt and L. Harrow, Islamic Architecture. Iran I, London, 1977. A. Javakhishvili and G. Abramishvili, Jewellery and Metalwork in the Museums of Georgia, Leningrad, 1986. E. J. Keall and R. B. Mason, “The ʿAbbasid Glazed Wares of Siraf and the Basra Connection. Petrographic Analysis,” Iran 29, 1991, pp. 51-66. J. Kröger, “Décor en stuc,” in L. Vanden Berghe and B. Overlaet, eds., Splendeur des Sassanides, Brussels, 1993, pp. 63-65. A. Lane, Later Islamic Pottery, 2nd ed., London, 1971. G. D. Lowry, “On the Gold Jug Inscribed to Abu Mansur al-Amir Bakhtiyar ibn Muʿizz al-Dawla in the Freer Gallery,” Ars Orientalis 19, 1989, pp. 103-15. I. Luschey-Schmeisser, “Ein neuer Raum in Nayin,” AMI, N.S. 5, 1972, pp. 309-14. R. B. Mason and L. B.
Golombek, “Differentiating Early Chinese-Influence Blue and White Ceramics of Egypt, Syria, and Iran,” in E. Pernicka and G. A. Wagner, eds., Proceedings of the XXVIIth International Symposium on Archaeometry, Heidelberg, 1990, pp. 465-74. A. S. Melikian Chirvani, “La plus ancienne mosquée de Balkh,” Arts Asiatiques 20, 1969, pp. 3-20. Idem, Islamic Metalwork from the Iranian World, London, 1982. Idem, “Qajar Metalwork. A Study in Cultural Trends,” in E. Bosworth and C. Hillenbrand, eds., Qajar Iran, Edinburgh, 1983, pp. 311-28. R. Naumann, Takht-i Suleiman, Munich, 1976. G. Necipoğlu, “From International Timurid to Ottoman. A Change of Taste in Sixteenth-Century Ceramic Tiles,” Muqarnas 7, 1990, pp. 136-70. B. O’Kane, Timurid Architecture in Khurasan, Costa Mesa, Calif., 1987. Idem, “Poetry, Geometry and the Arabesque. Notes on Timurid Aesthetics,” Annales Islamologiques 26, 1992, pp. 63-78. A. U. Pope, Persian Architecture, New York, 1965. ʿA. Qūčānī, Katībahā-ye sofāl-e Neyšābūr, Tehran, 1364 Š./1985. J. Raby, “Looking for Silver in Clay. A New Perspective on Samanid Ceramics,” in M. Vickers, ed., Pots and Pans, Oxford, 1986, pp. 179-203. B. W. Robinson, “Lacquer, Oil-Paintings and Later Arts of the Book,” in Treasures of Islam, Geneva, 1985, pp. 176-205. J. M. Rogers, Islamic Art and Design. 1500-1700, London, 1983. Ṣādeqī Beg Afšār Tabrīzī, Qānūn al-ṣowar, ed. A. Yu. Kaziev as Ganun ös-söuvär (Traktat o zhivopisi) (Qānūn al-ṣowar [Text and illustrations]), Baku, 1963. J. Scarce, “Tilework,” in R. W. Ferrier, ed., The Arts of Persia, New Haven, Conn., 1989, pp. 271-94. S. P. Seherr-Thoss, Design and Color in Islamic Architecture, Washington, D.C., 1968. R. Shani, “On the Stylistic Idiosyncrasies of a Saljuq Stucco Workshop from the Region of Kashan,” Iran 27, 1989. [G. V. Shishkina and L. V. Pavchinskaya,] Terres secrètes de Samarcande, Paris, 1992. M. S. Simpson, “The Narrative Structure of a Medieval Iranian Beaker,” Ars Orientalis 12, 1981, pp. 15-24. R. Skelton, The Indian Heritage. Court Life and Arts under Mughal Rule, London, 1982. P. Soucek, “The New York Public Library Makhzan al-asrār and Its Importance,” Ars Orientalis 18, 1988, pp. 1-37. V. Voronina, Architectural Monuments of Middle Asia, Leningrad, 1969. A. Wardwell, “Panni Tartarici. Eastern Islamic Silks Woven with Gold and Silver,” Islamic Art 3, 1988-89, pp. 95-173. O. Watson, Persian Lustre Ware, London, 1985. D. Wilber, The Architecture of Islamic Iran. The Ilkhanid Period, Princeton, N.J., 1955. E. Yarshater, “The Development of Iranian Literatures,” in E. Yarshater, ed., Persian Literature, New York, 1988, pp. 3-37. Priscilla P. Soucek, “DECORATION,” Encyclopaedia Iranica, VII/2, pp. 159-197, available online at http://www.iranicaonline.org/articles/decoration (accessed on 30 December 2012).
http://www.iranicaonline.org/articles/decoration
This study examines the effect of the cashless policy on Nigerian economic growth. Nigeria has continued to evolve in different realms. The economy is being reformed, institutions are being reshaped, and legislation is being re-examined so as to reposition the nation to take its rightful place in the international community. As a way of fast-tracking the Nigerian economy so as to be among the first 20 world economies by 2020, Nigeria proposed that from 2012 it would adopt a cashless economic system. In carrying out the study the researcher adopted a descriptive survey research design, making use of primary data with a questionnaire as the instrument for data collection. The questionnaires were distributed to a sample of 310 randomly selected respondents; the data collected were analyzed, and the two hypotheses formulated were tested using the non-parametric chi-square tool to determine the relationship and effect of the cashless policy on the Nigerian economy. The findings show that the cashless policy has a positive relationship with, and effect on, the Nigerian economy, but that it requires a huge amount of capital for the technology and other facilities needed for smooth operation of the policy. The researcher therefore recommended that the government, in collaboration with the CBN and other financial institutions, should provide adequate infrastructure, e-payment facilities, security, steady power supply, and adequate enlightenment of people on the benefits and proper usage of the system for effective implementation of the policy, in order to ensure economic growth and development in Nigeria; otherwise there would be economic decline. CHAPTER ONE INTRODUCTION 1.1 Background to the Study The recent evolution of technology for financial transactions poses interesting questions for policy makers and financial institutions regarding the suitability of current institutional arrangements and the availability of instruments to guarantee financial stability, efficiency, and the effectiveness of monetary policy. Over the course of history, different forms of payment system have been in existence. Initially, trade by barter was common; however, the problems of barter, such as the double coincidence of wants, necessitated the introduction of various forms of money (Swartz et al., 2004). Nevertheless, analysts have been predicting the complete demise of paper instruments and the emergence of a potentially superior substitute for cash or monetary exchanges, that is, a 'cashless society'. Unlike the barter system, which involves the exchange of one good for another, a cashless environment is one in which transactions are carried out with minimal exchange of physical cash. It implies that the payment instrument is not physical cash but other instruments such as cheques, electronic transfers, e-payments and so on. The rapid advancement of electronic distribution channels has produced tremendous changes in the financial industry in recent years, with an increasing rate of change in technology, competition among players and consumer needs (Hughes, 2001). Since Nigeria's independence in 1960, there have been different governments, constitutional reforms, changes in economic policy and banking reforms, mainly directed at enhancing social welfare and achieving developmental goals, but there has been no substantial positive change in Nigeria's Human Development Indicators. This also calls into question the effectiveness of the cashless policy of the Central Bank of Nigeria (CBN).
Since the end of the 1980s, the use of cash for purchasing consumption goods in the US has steadily declined (Humphrey, 2004). Hence, most LDCs (Less Developed Countries) like Nigeria are in transition from a pure cash economy to a cashless one for developmental purposes. Little wonder the Central Bank of Nigeria recently introduced a cashless policy. Thus, as part of its regulatory functions, the Central Bank of Nigeria issued a circular dated April 20, 2011 in which it conveyed to operators and the banking public its decision to introduce a cashless banking policy into the Nigerian financial system with effect from January 1, 2012, using Lagos as the pilot programme; that is, the policy kick-starts in Lagos and will eventually cover all the other states of the nation. To enforce the implementation, the Central Bank had, in a circular of April the previous year, declared that "commencing from June 1, 2012, a daily cumulative limit of N150,000 and N1,000,000 on free cash withdrawals and lodgements by individual and corporate customers respectively with deposit money banks shall be imposed." Following public outcry, the daily cash withdrawal and deposit limit was raised from N150,000 to N500,000 for individuals and from N1,000,000 to N3,000,000 for corporate accounts. According to the CBN, the new cashless policy was introduced for a number of key reasons, including: to drive the development and modernization of our payment system in line with Nigeria's Vision 2020 goal of being among the top 20 economies by the year 2020 (an efficient and modern payment system is positively correlated with economic development, and is a key enabler for economic growth); to reduce the cost of banking services (including the cost of credit) and drive financial inclusion by providing more efficient transaction options and greater reach; and to improve the effectiveness of monetary policy in managing inflation and driving economic growth. In addition, the cash policy aims to curb some of the negative consequences associated with the high usage of physical cash in the economy, including the high cost of cash, the high risk of using cash, high subsidy, the informal economy, and inefficiency and corruption (CBN website, 2011). In this context, the study seeks to examine the cashless economy by exploring its impact on the Nigerian economy. 1.2 Statement of the Problem As more payment systems have been introduced, pundits have been predicting the emergence of a 'cashless society'. Today, we still pay with cash and cheques, but several other payment instruments, such as credit and debit cards, are widely used. The use of paper money is declining, but at a rather slow pace. As it is, Nigeria is a country heavily dominated by cash, and there are some factors that negatively affect the choice of cash over non-cash instruments; these include time spent counting and verifying cash, susceptibility to loss, and time spent in banking halls, amongst others (Nnanwobu et al., 2011). A cash-based economy is one characterized by the psychology of physically holding and touching cash, a culture informed by ignorance, illiteracy, lack of security consciousness and lack of appreciation of the merits of digital payment (Ovia, 2002). Cash, as a payment system, attracts many negative consequences, such as the high cost of handling cash and the risks of using cash and keeping it in houses, which eventually lead to high rates of robbery and to financial loss in cases of fire and flooding.
High cash usage results in a lot of money circulating outside the formal economy, thus limiting the effectiveness of monetary policy in managing inflation and encouraging economic growth. High cash usage also enables corruption, leakages, money laundering, counterfeiting, mismanagement, mutilation, and depreciation in value if the money is not invested. Some or most of these factors exist in the Nigerian economy today, thus creating the gap for the current study. In Nigeria today, infrastructure is a major problem that hinders the deposit money banks from attaining their full potential in terms of policy implementation and its impact on financial transactions in the banking industry. The infrastructure in Nigeria over the years has not been reliable and has thus undermined the effectiveness of financial transactions in the banks. The level of technology in the nation is rather poor and improving at a slow pace, and as such has not given room for the major developments and policy implementations that might have arisen. The technology available for carrying out banking transactions is not as effective as it ought to be, leaving people with no choice but to keep cash in their houses in order to avoid spending hours in banking halls due to slow servers, interrupted power supply and bad internet services. Illiteracy and people's low level of education leave them in the dark and result in an inability to understand new developments as they are put in place. Many people do not see the need to keep their money in the banks or invest it because of this lack of understanding, and the publicity and awareness measures in existence have been insufficient; if these were dealt with, the lack of understanding would at least be reduced, and many would see viable reasons to keep their money in the banks and invest it rather than keep it in their houses, as a route to the safety of many lives, better growth of the economy and a higher standard of living. This, of course, is the motivation behind this study. As a matter of fact, the demand for money is taken here in terms of demand deposits in banks and liquid assets outside the banks; that is, the average willingness of people either to hold money in cash or to keep it as demand deposits in the banks affects the activities of commercial banks in controlling the amount of money in circulation, which in turn determines the hold of the CBN on the economy in terms of monetary policy implementation. The analysis of banking innovations and the response of the public towards them would help determine the extent to which the Central Bank of Nigeria (CBN) has been able to foster financial transactions in deposit money banks across the nation. The introduction of e-commerce has made room for various tools for transacting business, although not all of these tools have been fully utilised. The new policy adopted is one designed to affect the whole economy and to put all of these tools to full use, together with monetary and fiscal policies, and in turn to maximise the effect of the e-commerce innovation. 1.3 Objectives of the Study The general objective of this study was to examine the impact of the cashless policy on Nigerian economic growth. The specific objectives were: - To determine the degree of the relationship between the cashless policy and the Nigerian economy.
- To ascertain empirically the impact of cashless policy on Nigerian economic growth.

1.4 Research Questions
In order to carry out this study effectively, the following research questions were posed:
- To what degree does cashless policy relate to the Nigerian economy?
- To what extent does the policy affect Nigerian economic growth?

1.5 Research Hypotheses
The following research hypotheses were formulated and tested for the study:
Ho: Cashless policy does not relate to the Nigerian economy.
H1: Cashless policy relates to the Nigerian economy.
Ho: Cashless policy has no effect on Nigerian economic growth.
H1: Cashless policy has an effect on Nigerian economic growth.
(An illustrative sketch of how hypotheses of this kind might be tested statistically is given at the end of this chapter.)

1.6 Significance of the Study
This study will be of immense benefit to the following persons. It adds the new knowledge generated to the existing knowledge of the researcher. It will increase the volume of literature in the institution's library. It will serve as reference material for people who want to carry out further research on this topic in future. It will also assist bankers, business analysts and policy makers in monetary policy formulation and effective decision making. It will help members of the general public who take the time to go through the findings and recommendations of this study to gain knowledge of the benefits and challenges of introducing the policy into the Nigerian economy.

1.7 Scope and Limitation of the Study
This study is geographically limited to Nigeria. Although an effective study would draw on both human and material resources from across the banking sector, owing to the large population involved it is limited to the Abakaliki metropolis in Ebonyi State, one of the 36 states of the federation. The major constraints of this study were the attitudes of some respondents, who deliberately and out of bias refused to disclose some relevant information needed for its successful completion; insufficient funds for gathering all the data and materials needed, owing to the researcher's unreliable source of income; and the short time allowed, which was inadequate given the nature of this empirical study. Despite these constraints, the researcher endeavoured to make effective use of the available resources at her disposal to ensure that the study was successful.

1.8 Definition of Terms
Access Products — Products that allow consumers to access traditional payment instruments electronically, generally from remote locations.
ATM Card — An ATM (Automated Teller Machine) card, also known as a bank card, client card, key card or cash card, is a payment card provided by a financial institution to its customers which enables them to use an automated teller machine for transactions such as deposits, cash withdrawals, obtaining account information and other types of banking transactions, often through interbank networks.
CBN — Central Bank of Nigeria.
Chip Card — Also known as an integrated circuit (IC) card. A card containing one or more computer chips or integrated circuits for identification, data storage or special-purpose processing, used to validate personal identification numbers, authorize purchases, verify account balances and store personal records.
Electronic Data Interchange (EDI) — The transfer of information between organizations in machine-readable form.
Electronic Money — Monetary value measured in currency units, stored in electronic form on an electronic device in the consumer's possession.
This electronic value can be purchased and held on the device until reduced through purchase or transfer.
Internet Banking — A product that enables a bank to leverage the Internet banking module built into the new banking application (BANKS) implemented by the bank to serve the Internet banking needs of its customers.
Mobile Banking — A product that enables customers of a bank to access services on the go. Customers can carry out transactions anywhere, including balance inquiries, account verification, transaction enquiries, stop-cheque and other customer-service instructions, bill payments, electronic funds transfers, account updates and history, transfers between accounts, and customer service via mobile.
Payment System — A financial system that establishes the means for transferring money between suppliers and users of funds, usually by exchanging debits or credits between financial institutions.
Point of Sale (POS) Machine — A point-of-sale machine is the payment device that allows credit/debit cardholders to make payments at sales/purchase outlets. It allows customers to perform services such as retail payments, cashless payments, cash back, balance inquiries, airtime vending, loyalty redemption and mini-statement printing.
Smart Card — A card with an embedded computer chip on which financial, health, educational and security information can be stored and processed.
Transaction Alert — Customers carry out debit/credit transactions on their accounts, and the need to keep track of these transactions prompted the creation of the alert system by banks to notify customers of those transactions. The alert system also serves as a notification channel for reaching customers when necessary information needs to be communicated.
Western Union Money Transfer (WUMT) — A product that allows people with relatives in the diaspora to receive money remitted home for family upkeep, project financing, school fees and the like. Nigerian communities known for having their siblings gainfully employed in other parts of the world are ideal markets for Western Union Money Transfer.
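As a purely illustrative aside to the hypotheses in Section 1.5: questions about the degree of a relationship and the extent of an impact are typically operationalised as a correlation test and a simple regression. What follows is a minimal sketch in Python, using hypothetical annual figures for cashless (e-payment) transaction volume and GDP growth; the numbers, variable names and model form are assumptions for demonstration only, not data or methodology from this study.

from scipy import stats

# Hypothetical annual series (illustrative only, not study data):
# value of cashless transactions in billions of naira, and real
# GDP growth (%) over the same years.
cashless_volume = [48.0, 60.1, 95.2, 130.4, 178.9, 243.6]
gdp_growth = [4.2, 4.9, 5.4, 6.3, 6.2, 6.9]

# Research question 1: degree of relationship (Pearson correlation).
r, p_corr = stats.pearsonr(cashless_volume, gdp_growth)
print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")

# Research question 2: impact on growth, via simple linear regression:
# gdp_growth = intercept + slope * cashless_volume. A slope whose
# p-value falls below 0.05 would reject Ho in favour of H1.
result = stats.linregress(cashless_volume, gdp_growth)
print(f"slope = {result.slope:.4f}, p = {result.pvalue:.4f}")

In practice, a study of this kind would draw on official CBN and National Bureau of Statistics series and control for other drivers of growth; the sketch shows only the mechanical form of the tests.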
https://www.projectwriters.ng/effect-of-cashless-policy-on-the-nigerian-economy-a-case-study-of-ebonyi-state/
What are Permanent Disabilities Caused by a Car Accident?
September 6, 2021

A car accident can be over in mere seconds or minutes, but sometimes the consequences can last for a lifetime. Victims of car accidents who end up with permanent disabilities because of their injuries can face major, overwhelming life changes that leave them unable to work or to carry out the daily activities of living without some form of assistance. This puts them in a very difficult position and can make even the simplest daily activities challenging.

What are the Most Common Disabilities Caused by Car Accidents?
Spinal cord injuries (SCIs) and traumatic brain injuries (TBIs) are the most common kinds of disabilities in these situations, and most people understand the seriousness of these conditions. When the spine is injured, it can lead to partial or total paralysis, depending on the part of the spine that was injured. In an acute SCI, the spine can go into shock, leading to a loss of reflexes, feeling, and muscle movement. Additional symptoms can appear once the initial swelling subsides, including breathing problems, weakness, loss of bladder and bowel function, and a loss of feeling and voluntary movement elsewhere in the body. The higher the injury is on the spinal cord, the more severe the person's symptoms usually are. Injuries to the neck at the first and second vertebrae (C1, C2) or the mid-cervical vertebrae (C3, C4, C5) can cause respiratory problems. Lumbar vertebrae injuries can impair muscle and nerve control for sexual function, the bladder, the bowel, and the legs. A loss of function in the arms and legs is called quadriplegia; a loss of function in the lower body and/or legs is paraplegia. Either can be partial, an incomplete injury, or there can be no movement or feeling at all, a complete injury. In the most severe cases, victims may end up in wheelchairs, restricted to their beds, or dependent on assistive devices for breathing.

TBIs caused by car accidents often result when the head receives a violent blow or jolt. This can happen from a forceful impact or from an object piercing brain tissue. A mild TBI may affect brain cells only temporarily, but more serious ones cause long-lasting physical damage and can be fatal. The more serious symptoms include a loss of consciousness; nausea; persistent, painful headaches; a loss of coordination; profound confusion; personality changes; and coma.

Severe joint damage and amputations are two other common permanent disabilities that can result from car accidents. After these kinds of devastating injuries, many people are not able to use their limbs in the same capacity as before the accident. Even the best treatment cannot help enough in certain cases. Other permanent disabilities include blindness, deafness, burns, severe back injuries, and emotional trauma.

Living with a Permanent Disability
Although every person's situation is different, these kinds of injuries usually lead to ongoing, costly medical treatment and procedures, physical therapy, medications, the need for caregivers, long-term care, and future expenses that can be quite unpredictable. It may be possible to qualify for disability benefits through the Social Security Administration (SSA) if the problems have lasted for a year or more and have prevented the person from working, but these benefits are not especially high. To check the eligibility requirements, applicants can consult the SSA's Blue Book Listing of Impairments.
For back injuries, some of the qualifying conditions include compression of a nerve root, muscle atrophy with weakness, limitation of movement, spinal arachnoiditis, lumbar spinal stenosis, and other degenerative diseases. Car accident-related soft tissue injuries to muscles, nerves, skin, tendons, and ligaments, as well as burns, are also listed, but approval depends on many other factors as well. There is also a category for bone fractures, but the breaks must be serious enough to cause an extreme impairment. Significant anxiety may also qualify a person to receive SSA disability benefits, and there are other categories as well. The SSA will establish whether the person's residual functional capacity still allows them to work. It will analyze the hospitalization records and review the medical treatment, any complications, and the medications being taken. There will be other assessments of the person's mental and physical capacity as well.

Can I be Compensated for a Permanent Disability Caused by a Car Accident?
Permanent disabilities turn people's lives upside down, and their families and friends are affected as well. Besides the problems associated with daily activities, the emotional toll and mounting medical bills compound the situation. Filing a personal injury claim can be the first step toward seeking compensation to cover current and future medical costs, a loss of current and future income, lost earning potential, loss of enjoyment of life, and pain and suffering. Personal injury cases can be filed by the injured party or, if the victim has died, by their spouse, child, or parent as a wrongful death lawsuit. These cases are filed against the negligent parties who caused the accident and their insurance providers. If they lose a case, both may be responsible for damages.

Of course, the most important thing to do after the accident is to seek out and receive proper medical treatment. Doing this as soon as possible is essential, as negligent parties often claim that the injuries were not a result of the accident. For many people, the symptoms may not be obvious at first, but it does not pay to wait to be evaluated. Medical providers should also be informed that the injuries were caused by the accident, and the injuries should be documented.

It is possible that the insurance company will offer a settlement, and in many situations this may be acceptable. The initial amount will depend on its determination of who was at fault, state insurance laws, and other factors. It is never a good idea to accept an initial settlement right away; insurance companies are in business to make profits, and they can and will make modest offers. Even if the money is badly needed and the amount looks good enough, it is wiser to wait and consult with a professional. You may want to work with an experienced car accident lawyer from the beginning to make sure that your rights are protected. Lawyers gather information to investigate personal injury cases, such as statements made to law enforcement, police reports, medical records and bills, evidence from the scene including photographs, and witness interviews. If the settlement offer is not appropriate, there may be negotiations with the insurance provider and the liable party. Otherwise, you may opt to file a personal injury lawsuit. These can go on for years when they are complicated, but it can be worth it when the damages awarded are fair and just.

How Long do Car Accident Lawsuits Take?
Settlement negotiations and personal injury lawsuits can take a long time, so patience is required. Once a lawsuit is filed, the discovery process begins and there will be additional investigations. One or both sides may attempt to settle the suit before it goes to trial through mediation. This can save everyone involved a lot of time, and it can also be less costly in the long run. Should a trial ensue, a judge or jury will hear the case and decide whether any damages are to be awarded to the plaintiff by the opposing party. The amount is based on how well the injured party can prove liability and damages. Complex cases take more time, but simpler ones can last a few weeks or even just a couple of days.

Baltimore Car Accident Lawyers at LeViness, Tolzman & Hamilton Support Permanently Disabled Car Accident Victims
If you or someone you care about has suffered a permanent, life-altering disability due to someone else's negligence in a car accident, the experienced, compassionate Baltimore car accident lawyers at LeViness, Tolzman & Hamilton are ready to offer legal guidance. We are committed to treating each client with the highest level of respect while providing unsurpassed service with accountability, ethical behavior, and the best results. Call us today at 800-547-4LAW (4529) or contact us online for a free consultation. Our offices are conveniently located in Baltimore, Columbia, Glen Burnie, and Prince George's County, where we represent victims throughout Maryland, including those in Anne Arundel County, Carroll County, Harford County, Howard County, Montgomery County, Prince George's County, Queen Anne's County, Maryland's Western Counties, Southern Maryland and the Eastern Shore, as well as the communities of Catonsville, Essex, Halethorpe, Middle River, Rosedale, Gwynn Oak, Brooklandville, Dundalk, Pikesville, Parkville, Nottingham, Windsor Mill, Lutherville, Timonium, Sparrows Point, Ridgewood, and Elkridge.
https://www.bestmarylandaccidentlawyers.com/permanent-disabilities-car-accident/