Exercises and tips for creating this document to share with your loved ones.
Writing an ethical will can feel like an overwhelming process. How does one begin to summarize a lifetime of lessons and beliefs? Where should the focus of the message be?
Rabbi Jodie Gordon, a Reform rabbi who leads the Hevreh of the Southern Berkshires and teaches a class there on ethical will writing, suggests beginning simply with one’s thoughts and a pen and paper (or computer). She adapted the following exercises from the book What Will They Say About You When You’re Gone by Rabbi Daniel Cohen to help her congregants craft their ethical wills.
In addition to Cohen’s book, Rabbi Gordon also recommends reading Having the Last Say, Bruce Feiler’s “This Life” column: The Family Stories That Bind Us, and Susan Garland’s “Your Money” column: Telling Their Life Stories, Older Adults Find Peace in Looking Back.
Exercise A: Reflect on courageous choices you have made over a lifetime.
1) What are three core values in your life?
2) Identify three courageous choices you made that were driven by your values in your personal, work and communal life.
3) Recall an experience when your values were challenged and you fought to uphold them.
4) Think about private ways you remain connected to your past.
5) Identify a past and present moral dilemma in your life and chart the impacts of making a courageous decision versus a convenient decision.
Exercise B: Reflections on how you prioritize your life.
1) What gives my life meaning?
2) If I had my life to live over again, what would I do with it?
3) What ideals, if any, would I be willing to die for?
4) What would bring me more happiness than anything in the world?
Exercise C: Focus on creating memories for your family.
1) Great companies possess a statement of values or mission. What is yours? Craft a family declaration of values.
2) Family heirlooms: Look around your house. Do you have an object that was passed down to you that reflects the legacy of your family? Write down the story for your children and grandchildren.
3) Craft a family genealogy chart. The process may reveal stories, connections and a sense of generational transcendence.
Exercise D: Explore inspired storytelling.
1) Describe an out-of-the-ordinary event in your life. What led to it?
2) Describe an experience when initial disappointment was transformed into gratitude. How did your understanding of the event change? Why?
3) Identify three significant events in your life and trace their path to fruition. Were the events by chance or design?
4) Identify three people who have made a difference in your life. How did you meet them? By chance or design?
5) Share an experience in which you saw God’s hand in your life. | https://www.myjewishlearning.com/article/writing-an-ethical-will-how-to-get-started/ |
For the month of January I began to write daily about my journey to self-love, the lessons I have learned. We are into week two. I’m sharing them on Facebook. Look for me @authenticlivingwithlinda or @lindacodlin25.
SELF-LOVE
The power of “AND”
On the road from I can’t to I can is the little word “AND”.
I learned to use the word ‘AND’ after a negative I can’t statement to give my brain the opportunity to change the thought into something slightly better and believable.
“I can’t write a blog post”, became…
“I can’t write a blog post AND I’m going to write one paragraph on one subject I know and love.”
The I can’t became an action I could take.
Then I added a second ‘AND’ to “I can’t publish it because it’s not good enough,” making it: “I can’t publish it because it’s not good enough AND I’ll check for basic spelling and punctuation errors, then I’ll publish it.” Near enough is good enough, and imperfect action is better than no action.
Then I faced my fear and sent it off.
The miracle of a little “AND” is that it gave me space to move from a negative, self-defeating statement to a more helpful one that allowed me to take imperfect action.
The lesson in this is to accept your I can’t and move it into the land of I can by adding “AND” with an action you can complete. Even if the action is done poorly in your opinion, you have moved from I can’t to I can.
“AND” gives your brain something to focus on; it goes to work to find an action to add to your AND, creating forward motion. To read more, head over to Jan 5th @lindacodlin25 on Facebook.
Creating a new way of thinking and believing can be done with repetition.
I have found that AFFIRMATIONS are the quickest way to reprogramme my brain to allow me to move forward into the beliefs I want to hold about myself.
Briefly, an affirmation is a statement you make when you agree with something. The affirmations you want to use are spoken as positive facts. To be effective, an affirmation needs to be written and stated in the positive present tense.
The two most important words the human brain understands are “I AM”.
I AM tells your brain exactly who you are and what you want to do, have and be.
Whenever you use the words I Am or I’m… the brain says, “Oh! I understand you want me to find ….. ,” and it begins to search for evidence and scenarios that back up your I am statement.
Always give your brain positive, forward-looking things to focus on. Your brain needs specific direction; give it specific direction with your I am statements and AFFIRMATIONS.
The way you see yourself has a direct impact on how you are able to love yourself. By adding one I am statement into your day, you can move toward loving and accepting yourself more.
Write “I AM ENOUGH” on your mirror and read it at least five times a day for thirty days; let the words permeate your mind and soul.
Choose an area of your life you want to improve, write one or two affirmations in the positive present tense onto a small wallet-sized card, and carry them with you. Read them every time you think negatively about yourself, and feel the energy you want to create as you read your statement of what it is you want to have, be and do.
I cover affirmations on Days 6 & 7 on my Facebook Page, @authenticlivingwithlinda @lindacodlin25.
As humans, we will have negative emotions; our brains are wired to look out for danger, to protect us. Our brain is constantly scanning for things it perceives will harm us. As we have grown up, we have been feeding our brain with information about what to look out for.
Our negative thoughts are our friends; they are here to keep us safe. When we ask them what it is they are trying to keep us safe from, we get to uncover some of our hidden fears and roadblocks that need to be eliminated to create the freedom we want to experience.
For you to enjoy the joys of life, you also need to experience the sadder side of life. To constantly experience one side of life without having the contrast of the other side is to diminish the ability to be human.
All human emotion is valid, it is what we choose to do with those emotions that makes the difference to how we live our lives.
On day 8 I cover “The Pollyanna Syndrome” @lindacodlin25 on Facebook.
For you to love yourself completely you will at some point need to acknowledge all your emotions and investigate what they are in your life for. All emotions, ‘negative’ and ‘positive’, have a reason for being in your life; your job is to work out what each is there for and whether it is still working for you or not.
This leads into day 9. Learning to retell your story to create acceptance and love for yourself.
The more we resist our thoughts and feelings, the more they will pop up somewhere else in our lives to cause havoc. Resisting our feelings takes a lot of energy, and trying to live up to other people’s expectations is hard work. Denying the effect situations had on our emotions causes our bodies stress.
We have a choice in how we respond to the past events of our lives. We get to choose to tell a better story. We are not our past! Our past is only in our memory; it is a story we tell ourselves about what happened, and usually when we struggle to love ourselves this story doesn’t help us to move forward into the future easily.
We get to decide what we want our past to mean, what story we are telling ourselves about the event.
If you can think a thought, you can change a thought. All thoughts are optional.
When we learn to separate the fact of an event from the emotion of that event, we create space to see the event without the emotions that cloud the facts. That space lets us retell the story in a way that helps us let go of the hurt, guilt and shame that hold us in bondage.
When we learn to let go of the hurt, guilt and shame we open up room in our hearts and souls for forgiveness and love of ourselves.
As the Month of January continues I will continue to write of my experience in learning to love and accept myself.
Head over to Facebook and look me up @lindacodlin25 and I’ll walk you through what I did to find my true self.
Until next week my friends, I hope to see you every day this month.
Linda Codlin.
#authenticlivingwithlinda
Email me at [email protected]
Find me on Facebook @authenticlivingwithlinda @lindacodlin25
Connect with me for a one on one coaching appointment.
Self-Love is within you.
Self-love is accepting all of you- even the not so nice attributes.
Self-Love is knowing you are enough.
Meet with me every day for The Month of SELF-LOVE @lindacodlin25 on Facebook.
If you want any help with anything I am covering in this month email me at [email protected] for a coaching session. | https://authenticlivingwithlinda.com/2021/01/11/the-month-of-self-love-week-two/ |
Higgs wrote a second short paper describing what came to be called the Higgs model. The Higgs boson, dubbed “the God Particle” by Leon Lederman back in 1993, is the particle associated with the Higgs field.
On 4 July 2012, physicists at CERN, Europe's particle-physics laboratory, declared victory in their long search for the Higgs boson.
A famous story in the annals of physics tells of a 5-year-old Albert Einstein, sick in bed, receiving a toy compass from his father.
When particles scattered from the Big Bang cross the Higgs field, they attain mass and slow down. Peter Higgs's best-known paper on the new particle was initially rejected.
Our paper is just one of many steps towards such a discovery.
In a paper published in Physics Letters on 15 September 1964 (received on 27 July 1964), Peter Higgs showed that Goldstone bosons need not occur when a local symmetry is spontaneously broken.
This, coupled with the measured interactions of the new particle with other particles, strongly indicates that it is a Higgs boson. This also makes the particle the first elementary scalar particle to be discovered in nature. The following are examples of tests used to confirm that the discovered particle is the Higgs boson:
In March 2013, the existence of the Higgs boson was tentatively confirmed.
In 2018 CMS reported an excess in the light Higgs-boson search in the diphoton decay mode at about 95 GeV, based on Run 1 and first-year Run 2 data. Physicist Peter Higgs first posited the existence of the Higgs field and the Higgs boson in 1964.
93-year-old Peter Higgs was awarded a Nobel Prize nine years ago, after the Large Hadron Collider experiments finally confirmed the existence of the Higgs boson particles he'd predicted back in 1964.
"Instead, it consists of so-called techni-quarks, which we believe are elementary," Frandsen said. When scientists at CERN's Large Hadron Collider reported that they had likely found the Higgs boson, physics went mainstream. The Higgs boson has, and will continue to have, an impact on our lives, in ways you may not have imagined.
Pictured above is the scientific paper in which he laid out that case. The particle that we now call the Higgs boson first appeared in a scientific paper written by Peter Higgs in 1964.
A third paper on the subject was written later in the same year by Gerald Guralnik, C. R. Hagen, and Tom Kibble. It might be rather hard to read if you're not working in the domain though.
In particular,
The Higgs field is the invisible energy field that exists throughout the universe.
The Higgs boson's existence would have profound importance in particle physics because it would prove the existence of the hypothetical Higgs field, the simplest of several proposed explanations for the origin of the symmetry-breaking mechanism by which elementary particles acquire mass.
Higgs Boson Outdraws Internet Cats, Kim Kardashian. It remains to be established whether the Higgs boson is solitary or whether other states populate the electroweak symmetry breaking (EWSB) sector.
This paper is the first comprehensive presentation of the Higgs physics reach of CLIC operating at three energy stages: √s = 350 GeV, 1.4 TeV, and 3 TeV.
The last 500,000 examples are used as a test set. The combined local significance of the excess was $2.8\,\sigma$.
I explain all this in greater detail in the paper "The Higgs Boson vs …". There is an interest in using deep learning methods to obviate the need for physicists to manually develop such features.
A second paper focuses on the work from Gritsan's team, and specifically on the Higgs boson lifetime. One Higgs boson can have a mass of 125 GeV/c² (the most common mass for a Higgs); others may have a mass that is a bit lower or higher and, on rare occasions, a Higgs boson is created that has a mass twice that size. Keywords: Higgs boson, machine learning, gradient boosting.
After observation of the Higgs boson, a measurement of the W boson mass provides a stringent test of the model. Symmetry looks at little-known facts about the elusive particle.
Particle physicists at CERN discovered the Higgs boson in 2012.
On the surface, it may seem that the Higgs boson does not affect everyday life.
We scored 3.71885 on the private leaderboard, placing us in the top 2% in the Higgs boson challenge.
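The 3.71885 score refers to the Approximate Median Significance (AMS) metric used to rank entries in the Kaggle Higgs Boson Machine Learning Challenge. Here is a minimal sketch of that metric, assuming the challenge's published formula with regularization term b_r = 10; the event counts passed in below are purely illustrative:

```python
import numpy as np

def ams(s, b, b_reg=10.0):
    """Approximate Median Significance from the Kaggle Higgs challenge.

    s     -- weighted number of true signal events selected by the classifier
    b     -- weighted number of background events selected by the classifier
    b_reg -- regularization term (10 in the challenge definition)
    """
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log(1.0 + s / (b + b_reg)) - s))

# Illustrative selection: ~500 weighted signal events on ~40,000 background events.
print(round(ams(500.0, 40000.0), 3))  # about 2.5
```

A higher AMS corresponds to a selection that would support a larger discovery significance, which is why the leaderboard rewards it.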
In the Standard Model, the Higgs particle is a massive scalar boson with zero spin, even (positive) parity, no electric charge, and no colour charge.
We measure the W boson mass. The new paper, published in the journal Nature, comes exactly ten years after ATLAS announced the discovery of the Higgs boson.
Physicists Robert Brout and François Englert just barely beat him to it. The excess is compatible with the limits obtained in the ATLAS searches from the diphoton search channel.
Higgs Boson: The God Particle.
The Higgs boson is a cornerstone of the Standard Model; this particle gives other particles their mass and creates the distinction between the weak nuclear force and the electromagnetic force. Assuming that Higgs bosons exist, this reaction is predicted to occur by the mechanism shown in the accompanying figure.
Criticism of the Higgs boson based not on laws does much to retard the progress of physics.
No scientific paper in history has had so many collaborators.
To show the connection, I'd like to
Reflecting on a decade of exploring the Higgs boson, Newman notes that the research "continues to motivate us to think harder and design upgraded detectors and accelerators." Higgs shared the 2013 Nobel Prize in Physics with François Englert; others who helped develop the theory had already died and so could not share it.
The Higgs boson was the last missing component of the model.
Current work analyzes the spread of single rumors, like the discovery of the Higgs boson or the Haitian earthquake of 2010, and multiple rumors from a single disaster event, like the Boston Marathon bombing of 2013, or it develops theoretical models of rumor diffusion, methods for rumor detection, credibility evaluation (17, 18), or interventions to curtail the
The bosons themselves can be detected via their decay products, which are quite distinctive. For the mass range under consideration, the Higgs boson is predicted to decay initially almost exclusively to bb̄, where the quarks would be observed as jets containing short-lived B hadrons.
16 19 October 1964.
It is
The Higgs boson and Higgs field are named after Peter Higgs, who wrote about the field theory in 1964.
The mass of the W boson, a mediator of the weak force between elementary particles, is tightly constrained by the symmetries of the standard model of particle physics. These three
This is also true of particle physics analyses.
The paper-thin device weighs roughly the same as a dime, and can be used to cover surfaces like walls and ceilings, writes Tran. The existence of the Higgs boson in the Higgs field causes particles to interact and increase their masses, hence the name God particle, which has revived great interest in scientific circles.
The Higgs boson or Higgs particle is a proposed elementary particle in the Standard Model of particle physics.
For example, the article "Did They Really Detect the Higgs Boson?"
Milestones since then include the observation of Higgs boson decays to , ZZ, WW, as well as precision measurements of its mass, spin, and couplings.
In quantum field theory, particles can be described as waves in a field. Answering this question requires an exploration of the quantum world and how particles interact.
The three papers written on this boson discovery by Higgs, by Englert and Brout, and by Guralnik, Hagen and Kibble were each recognized as milestone papers by Physical Review Letters' 50th anniversary celebration.
The other mode of the original scalar doublet remains as a massive spin-zero particle: the Higgs boson.
Benchmark results using Bayesian Decision Trees from a standard physics package and 5-layer neural networks are presented in the original paper. The original super-sized star had more mass and gravity than the black hole that was created. The Higgs boson is a tiny, fundamental particle with some massive implications.
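As a rough illustration of the benchmark setup mentioned above (a boosted-decision-tree baseline evaluated on the last 500,000 examples held out as a test set), here is a minimal sketch using scikit-learn's gradient-boosted trees. It assumes the UCI HIGGS.csv layout (column 0 is the signal/background label, columns 1-28 are the features); the file path and hyperparameters are illustrative, not the original paper's configuration:

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Assumed layout of the UCI HIGGS.csv file: column 0 is the label
# (1 = signal, 0 = background), columns 1-28 are kinematic and high-level features.
# The full file has 11,000,000 rows; subsample it for a quick test if memory is tight.
data = pd.read_csv("HIGGS.csv", header=None)   # path is illustrative
X = data.iloc[:, 1:].to_numpy()
y = data.iloc[:, 0].to_numpy()

# Benchmark convention: the last 500,000 examples form the test set.
X_train, X_test = X[:-500_000], X[-500_000:]
y_train, y_test = y[:-500_000], y[-500_000:]

# Gradient-boosted trees as a stand-in for the boosted-decision-tree baseline.
clf = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1)
clf.fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```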
The discovery of the Higgs boson is only one more small step on the road to understanding the vast number of things that exist in the Universe, and to the curiosity and perseverance of humankind. If you want to see what the original paper looks like, you need to unfold the origami sculpture. A physics paper which attempts to present a more precise estimate for the combined measurement of the Higgs boson mass has been published in Physical Review Letters with a record 5,154 authors. The Higgs boson was predicted in 1964 and first detected with confidence on 4 July 2012.
What is the Higgs boson?
The precision of the measurement is expected to improve in the coming years with data from the next LHC runs and new analysis ideas. The Higgs boson is the last piece of the Standard Model of particle physics.
Higgs boson's lifetime," says CMS physicist Pascal Vanlaer.
The 1964 Peter Higgs paper that first predicted the existence of what would come to be known as the Higgs boson.
I'm a Physics major at a liberal arts school, and part of the agenda for our Modern Physics
Scientific
A search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented. The 1964 PRL symmetry breaking papers were written by three teams who proposed related but different approaches to explain how mass could arise in local gauge theories.
The discovery of the boson will lead to the understanding of the mysteries that have clouded the minds of scientists for many years (Hu & Wu, 2012).
(Higgs Boson) The Higgs boson, also known as the Higgs particle, was a recently discovered elementary particle which was announced on July 4th of 2012 at the CERN
On July 4, 2012, CERN announced the discovery of a new subatomic particle that is consistent with the Higgs boson, a particle that's been searched for since the 1970s.
1. Christoph Paus, the MIT physicist who co-led the effort to detect the particle, looks ahead to the next 10 years.
3 Questions: Marking the 10th anniversary of the Higgs boson discovery. In celebration of this anniversary, a special all-day symposium on the Higgs boson is currently underway at CERN. The discovery of the Higgs boson, announced on 4 July 2012 at CERN, was the result of more than two decades of work by the ingenious Large Hadron Collider (LHC) machine physicists and by many thousands of ATLAS and CMS experimental physicists.
Higgs boson self-couplings, which would give access to the scalar potential in the SM Lagrangian, may be observed with the full luminosity of the upgraded LHC (3000 fb⁻¹ per
This may seem like a tiny excess, but the measurement is accurate to within 9 MeV.
In this paper we use the Metre-Second System (MS System) of Units to derive a precise theoretical value for the magnitude of the Higgs field and explore its implications. It consists of Higgs boson particles (God particles).
The new analysis that Berkeley Lab researchers contributed to is based on data collected by the ATLAS detector from 2015 to 2018. "A techni-higgs particle is not an elementary particle." Physicist Frank Wilczek explains how the Higgs may help complete the Standard Model of physics. It was first theorized in 1964 and finally produced in 2012. 14 March 2013: Updated results from ATLAS and CMS confirm even more strongly that the new particle is the Higgs boson.
The Higgs boson has been discovered recently by the ATLAS and CMS experiments at CERN's Large Hadron Collider (LHC). E H = 9.638732018E-32 [m-2 s-4 ] Higgs field In addition, we use the value of
often described as a tiny subatomic particle that gives other subatomic particles their mass.
At CERN on 4 July 2012, the ATLAS and CMS collaborations presented evidence in the LHC data for a particle consistent with a Higgs boson, the particle linked to the mechanism proposed in the 1960s to give mass to the W, Z and other particles. What is the Higgs boson? Ten years ago, scientists announced the discovery of the Higgs boson, which helps explain why elementary particles (the smallest building blocks of nature) have mass. Confirming that the Higgs boson interacts with muons, as expected, is also vital in considering the physics potential of a new type of particle collider that collides muons. | http://ge-shi.net/cheap/excess/11329442acdbfc2020873109fc4a137e1372 |
Physicists at CERN have announced that five years after its discovery, the Higgs boson has been observed decaying into fundamental particles known as bottom quarks.
The major breakthrough was made using the ATLAS and CMS detectors, two experiments designed to analyze particle collisions, at the Large Hadron Collider, the most powerful particle accelerator on Earth.
Both teams submitted the findings for publication on Tuesday, Aug. 28.
Higgs Boson Decay
The Higgs boson is an elementary particle in the Standard Model of particle physics responsible for giving mass to fundamental subatomic particles. Its existence was confirmed in March 2013.
Scientists have hypothesized that the Higgs boson decays to a pair of matter-antimatter bottom quarks 60 percent of the time. However, proving it was an almost impossible task.
The Higgs boson is made in high-energy collisions of pairs of particles accelerated to nearly the speed of light. Once produced, the particles vanish almost instantaneously: a Higgs boson exists for about 10⁻²² seconds before it decays.
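A lifetime this short can be related to the particle's decay width through the uncertainty-principle estimate Γ ≈ ħ/τ. A back-of-the-envelope sketch (the lifetime value below is illustrative, not a measured number):

```python
# Relate the Higgs boson's tiny lifetime to its decay width via Gamma ~ hbar / tau.
HBAR_MEV_S = 6.582e-22   # reduced Planck constant in MeV*s
tau = 1.6e-22            # illustrative lifetime in seconds (order 10^-22 s)

width_mev = HBAR_MEV_S / tau
print(f"decay width ~ {width_mev:.1f} MeV")  # roughly 4 MeV
```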
Observing the Higgs boson directly is impossible; thus, scientists look at the particle's decay products to infer properties of the parent bosons. However, other types of proton-proton collisions can also produce bottom quarks, and because Higgs bosons are an incredible rarity (one out of every billion collisions), it is all the more difficult to observe the most common Higgs boson decay.
Scientists from both ATLAS and CMS teams combined data and applied complex statistical analysis. According to CERN, the outcome from the combined datasets revealed "the decay of the Higgs boson to a pair of bottom quarks with a significance that exceeds 5 standard deviations."
This is the first time that scientists have seen evidence of a Higgs boson decaying to a bottom quark.
Significance Of The Discovery
"We now have the opportunity to study the Higgs boson in unprecedented detail and will be able to further challenge the Standard Model," stated Karl Jacobs, a spokesperson from the ATLAS collaboration.
The discovery is a major stepping stone to understanding the rate at which the Higgs boson decays into other particles as predicted by Standard Model.
"The analysis methods have now been shown to reach the precision required for exploration of the full physics landscape, including hopefully new physics that so far hides subtly," added Eckhard Elsen, director for research and computing at CERN.
The Large Hadron Collider will continue to operate until December. Then, it will undergo refurbishing and upgrades for two years. It will reopen in 2021. | https://www.techtimes.com/articles/233846/20180829/scientists-finally-catch-a-much-awaited-glimpse-of-higgs-boson-decay.htm |
Rockefeller scientists among those involved in search for Higgs boson
This week’s announcement that two physicists have received the Nobel Prize in Physics for their discovery of the Higgs boson is also a victory for thousands of scientists, including more than 2,000 from the United States, who worked to collect data and analyze results from particle collisions conducted at the Large Hadron Collider at CERN.
At Rockefeller, ten scientists led by Konstantin Goulianos, head of the Laboratory of Experimental High Energy Physics, were involved in developing particle detectors that were deployed at CERN and in processing and reviewing data they collected.
The Higgs particle was discovered by the ATLAS and CMS collaborations, each of which involves over 3,000 people from all around the world. The sophisticated instruments they created to study proton collisions at the Large Hadron Collider were designed to confirm the existence of the Higgs particle theorized by François Englert and Peter W. Higgs, this week’s Nobel recipients.
The existence of the Higgs boson is critical to proving the so-called Standard Model of physics that describes the fundamental particles from which we and all the visible matter in the universe are made, along with the interactions that govern their behavior. | https://www.rockefeller.edu/news/?p=7511 |
Johns Hopkins University physicists are celebrating the roles they played in the confirmation of the Higgs boson, the so-called God particle predicted almost 50 years ago by Peter Higgs and François Englert, whose 2013 Nobel Prize in physics was announced Oct. 8.
Johns Hopkins researchers were members of one of two collaborations conducting the Higgs hunt at the Large Hadron Collider, the world's largest particle accelerator, located in Geneva, at CERN: the European Organization for Nuclear Research.
Andrei Gritsan, an experimental physicist and associate professor in the university's Henry A. Rowland Department of Physics and Astronomy, was a member of the CMS (Compact Muon Solenoid) collaboration and co-leader of a group that investigated one of the most promising avenues to revealing how Higgs could be found and studied.
"The challenge was to actually 'see' the Higgs bosons once they are created," says Gritsan, who worked alongside more than 2,000 other scientists and researchers on the project, which culminated in July 2012 with scientific confirmation that the Higgs boson exists.
"One aspect is to make the scientific instrument, the CMS detector in our case, as precise as possible," Gritsan says. "With my team at Hopkins and leading a group of several experts, I developed the methods to precisely focus the key system of the detector, the silicon tracking system. Also, with my team, we developed the methods to extract maximum information from the relative angles and momenta of Higgs boson traces. This gave us confidence at the time of discovery and later to call what we see a Higgs boson."
In reviewing the data from the collider, researchers knew a graph with a bump in it would indicate strong evidence that it had produced the elusive Higgs boson. The Higgs boson, first proposed in the 1960s by Higgs, a Briton, and Englert, of Belgium, was needed to fill in the largest gap in the Standard Model, the leading theory in fundamental particle physics. Its existence would signify the existence of a field that permeates all space and imbues the matter in the universe with mass.
"It all changed in an instant; it was an emotional moment, and it left no doubt that we had something big," Gritsan says.
Also contributing to this historic confirmation of the particle's existence were four other Johns Hopkins Physics and Astronomy professors: Morris Swartz, whose work focuses on the development of radiation-hard silicon pixel sensors; Petar Maksimovic, who works on signatures of the physics beyond the Standard Model of elementary particles; Bruce Barnett, who was a member of the team that discovered a top quark at Tevatron; and Barry Blumenfeld, the leader of a widely distributed system to provide database information to remote computer nodes.
"This was truly amazing teamwork where people with different backgrounds, nationalities, and ages came together with a common goal and nothing else mattered," Gritsan says. "There were no country boundaries when it came to sharing of ideas, technology, computers, or the data."
David Kaplan, a theoretical physicist at Johns Hopkins, was so fascinated by the Large Hadron Collider and the work of CMS that he produced a documentary on the subject. Particle Fever is currently making its way through the film festival circuit and is receiving positive reviews from industry insiders.
The film highlights the work of six devoted scientists as they attempt to re-create conditions that existed just moments after the Big Bang. Particle Fever was recently screened at the New York Film Festival and had its North American debut at the Telluride Film Festival in Colorado. | https://hub.jhu.edu/gazette/2013/november/news-roundup-jh-physicists-contribute-higgs-boson/ |
Explorations in the world of science.
April 14th, 2014
Episode 360 of 590 episodes
An extended interview with the Nobel prize laureate. Peter Higgs tells Jim Al-Khalili that he failed to realise the full significance of the Higgs boson and to link it to the much celebrated Standard Model of Physics. He puts the oversight down to a string of missed opportunities, including one night at a physics summer camp when he chose to go to bed early. Working alone in Edinburgh in the 1960s, Peter Higgs says he was considered "a bit of a crank... No-one wanted to work with me". In 1964, he predicted the possible existence of a new kind of boson but, at the time, there was little interest. Three years later, the Higgs mechanism was shown to be central to the new Standard Model of Physics, which brings together three of the four fundamental forces of nature and has dominated physics ever since. Higgs met one of the key architects of the Standard Model several times, but they failed to realise they were working on the same thing. The 1970s were an exciting time for particle physics but Higgs says he lacked technical competency. He adds that work pressure contributed to the breakdown of his marriage. Four decades and several billion pounds on, scientists at the Large Hadron Collider at CERN confirmed that the Higgs boson had indeed been found and Peter Higgs shot to fame. This ephemeral speck of elusive energy is now so well-known it's featured in car adverts and countless jokes. There's even a song by Nick Cave called the Higgs Boson Blues. But Higgs has always called it the 'scalar boson' and remains embarrassed that it is named after only him. He remains surprised that another British physicist, Tom Kibble from Imperial College, London didn't share the 2013 Nobel Prize for Physics along with him and Belgian physicist, Francois Englert. These days, he's constantly stopped in the street and asked for autographs and photographs which, he says, is "nice but a bit of a nuisance". Producer: Anna Buckley. Image credit: BBC
| http://www.podcastchart.com/podcasts/discovery/episodes/peter-higgs |
When Scientists at CERN’s Large Hadron Collider reported that they had likely found the Higgs boson, physics went mainstream. Overnight, the subatomic particle became a meme, a watercooler topic, and a harbinger of the Mayan apocalypse. On the day it was announced, July 4, 2012, “Higgs boson” was a more popular Google search term than “cats” and “Kim Kardashian” combined.
Scientists predicted the existence of the boson nearly 50 years ago as a part of the Higgs field, an omnipresent field of energy that explains why some theoretically massless particles have mass. A particle interacting with the field can excite it, resulting in the appearance of a Higgs boson. Physicists could not directly observe the field, so they looked for bosons to prove its existence.
The discovery would also complete the Standard Model—the theory of how the 17 most basic particles interact—provide answers for how the universe expanded after the big bang, and even help explain how matter can exist at all.
CERN announced this spring that more data suggests the particle was indeed a Higgs boson. And while you might think physicists would be racing to eradicate doubts, there’s a bit of ambivalence. After all, confirming the discovery would mean there’s no “new physics” to explore. Sometimes, hunting for the answer is more rewarding than actually finding it. | https://www.wired.com/2013/04/higgsboson/ |
First the researchers at the Large Hadron Collider found the Higgs boson by means of the world’s biggest particle smasher. And now, thanks to observations of an ultra-rare particle interaction, researchers have more proof that the Higgs does what it’s thought to do. For more than forty years physicists have been using the standard model of particle physics to explain how the fundamental forces of nature work. A vital aspect of this model is the Higgs boson, a particle that’s supposed to deliver mass to all known matter. As explained in New Scientist, the particles that make us up have mass, and deprived of the Higgs, these particles would be massless, just like photons. Its detection in 2012 might be regarded as the topmost accomplishment of the Large Hadron Collider (LHC), and it significantly boosted physicists’ confidence in the model they'd been using for the last four decades. But the discovery of the Higgs boson isn't the end of the story at all. For one thing, certain physicists are pursuing even greater levels of certainty in the standard model of particle physics; for another, the standard model isn't a comprehensive explanation of the way the subatomic realm actually operates.
Higgs Boson Observed At Work For The First Time
After years of collisions to confirm the Higgs detection, the mountain of data the LHC has produced hides more mysteries for physicists to uncover. For instance, collisions of two particles called W bosons: when they scatter, they do so in a way that can tell physicists whether the Higgs actually does its job of giving mass to matter in the way they expect, and perhaps rule out some of the competing extra theories. So what’s the problem? These interactions are even harder to find than the Higgs boson itself. The ATLAS collaboration observed 34 of these events. To test the Higgs mechanism, the scientists compared distributions of the decay products of the W scattering process: how often specific products are detected at a specific energy and geometric configuration. | http://www.physics-astronomy.com/2014/07/higgs-boson-observed-at-work-for-first.html |
Many questions in particle physics are related to the existence of particle mass. The “Higgs mechanism,” which consists of the Higgs field and its corresponding Higgs boson, is said to give mass to elementary particles.
ATLAS observes direct interaction of Higgs boson with top quark
The ATLAS Collaboration at CERN has announced the observation of Higgs bosons produced together with a top-quark pair. Observing this extremely rare process is a significant milestone for the field of High-Energy Physics. It allows physicists to test critical parameters of the Higgs mechanism in the Standard Model of particle physics.
All together now: adding more pieces to the Higgs boson puzzle
The Higgs boson was discovered in 2012 by the ATLAS and CMS experiments, but its rich interaction properties (its coupling to other particles) have remained a puzzle. Thanks to an unprecedented amount of Higgs bosons produced at the LHC, all of the main Higgs boson production and decay modes have now been observed.
ATLAS measures Higgs boson coupling to top quark in diphoton channel with full Run 2 dataset
At the Rencontres de Moriond (La Thuile, Italy), the ATLAS Collaboration presented an updated measurement of ttH production in the diphoton channel. The result examines the full Run 2 dataset – 139 fb⁻¹ collected between 2015 and 2018 – to observe ttH production in a single channel with a significance of 4.9 standard deviations.
ATLAS completes data-taking for Run 2
Beams in the Large Hadron Collider came to a stop today, closing out four years of record-breaking operation for the ATLAS experiment. Run 2 saw the extraordinary exploration of the high-energy frontier, as the ATLAS experiment brought new understanding of particle physics.
ATLAS searches for double Higgs production
The Brout-Englert-Higgs (BEH) mechanism is at the core of the Standard Model, the theory that describes the fundamental constituents of matter and their interactions. It introduces a new field, the Higgs field, through which the weak bosons (W and Z) become massive while the photon remains massless. The excitation of this field is a physical particle, the Higgs boson, which was discovered by the ATLAS and CMS collaborations in 2012.
ATLAS observes elusive Higgs boson decay to a pair of bottom quarks
Geneva. The ATLAS Collaboration at CERN’s Large Hadron Collider (LHC) has – at long last – observed the Higgs boson decaying into a pair of bottom (b) quarks. This elusive interaction is predicted to make up almost 60% of the Higgs boson decays and is thus primarily responsible for the Higgs natural width. Yet it took over six years after the 2012 discovery of the Higgs boson to accomplish this observation.
Could a new type of quark fix the “unnaturalness” of the Standard Model?
While the discovery of the Higgs boson at the Large Hadron Collider (LHC) in 2012 confirmed many Standard Model predictions, it has raised as many questions as it has answered. For example, interactions at the quantum level between the Higgs boson and the top quark ought to lead to a huge Higgs boson mass, possibly as large as the Planck mass (>10¹⁸ GeV). So why is it only 125 GeV? Is there a mechanism at play to cancel these large quantum corrections caused by the top quark (t)? Finding a way to explain the lightness of the Higgs boson is one of the top (no pun intended) questions in particle physics.
Higgs boson observed decaying to b quarks – at last!
Today, at the 2018 International Conference on High Energy Physics in Seoul, the ATLAS experiment reported a preliminary result establishing the observation of the Higgs boson decaying into pairs of b quarks, furthermore at a rate consistent with the Standard Model prediction.
Combined measurements of Higgs boson couplings reach new level of precision
Higgs boson couplings manifest themselves in the rate of production of the Higgs boson at the LHC, and its decay branching ratios into various final states. These rates have been precisely measured by the ATLAS experiment, using up to 80 fb⁻¹ of data collected at a proton-proton collision energy of 13 TeV from 2015 to 2017. Measurements were performed in all of the main decay channels of the Higgs boson: to pairs of photons, W and Z bosons, bottom quarks, taus, and muons. The overall production rate of the Higgs boson was measured to be in agreement with Standard Model predictions, with an uncertainty of 8%. The uncertainty is reduced from 11% in the previous combined measurements released last year. | http://atlas.cern/tags/higgs-boson |
Accelerating the pace of science: interview with CERN’s Rolf Heuer Understand article
CERN’s director general tells the story behind the Higgs boson – and describes the next steps.
CERN is not just the world’s largest particle physics laboratory. As its director general, Rolf Heuer, explains, “CERN is a role model, demonstrating that science can bridge cultures and nations. Science is a universal language and this is what we speak at CERN.”
CERN also unites people in other ways. “As a young summer student, you can find yourself having lunch next to a Nobel Prize winner. And everyone, from the canteen workers to the senior management – we all identify ourselves with CERN, sharing a desire to increase human knowledge. We all do our bit towards that goal, leaving political, cultural and educational differences outside the campus.”
This undoubtedly makes CERN a very special place to work, but what makes it unique is its particle accelerator, the Large Hadron Collider (LHC). First used in 2008, the LHC is the world’s largest particle accelerator, its 27 km tunnel forming a ring beneath the French-Swiss border. As Professor Heuer explains, “It’s also one of the coldest places in the Universe, cooled to 1.9 K with superfluid helium. Even outer space is warmer, at 2.7 K.
“Simultaneously, it’s one of the hottest places in our galaxy because when we collide protons in the LHC, we produce temperatures that are much, much higher than those at the centre of the Sun.” Professor Heuer describes the collision of two protons at close to the speed of light as being like two mosquitoes colliding in mid-flight.
“The key difference is that these protons are tiny, tiny, tiny particles, so their energy density – the energy of the protons divided by their volume – is huge, and it is this energy density that brings us close to the Big Bang.”
As detailed in two previous Science in School articles (Landua & Rau, 2008; Landua, 2008), these enormously energetic collisions can create very heavy particles, the sort of particles that were formed in the extremely energetic conditions a fraction of a second after the Big Bang. These are particles so massive that they have not been created since then (remember that Einstein’s law E=mc2 tells us that to create a very heavy particle, we need a large amount of energy).
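To put rough numbers on that statement, E = mc² can be applied to the roughly 126 GeV particle discussed later in the article. A small sketch (constants rounded; the mass value is the one quoted in the interview, used here purely for illustration):

```python
# How much energy does a ~126 GeV particle represent, and how heavy is it?
E_GEV = 126.0             # mass-energy of the new boson quoted in the article
EV_TO_J = 1.602e-19       # joules per electronvolt
C = 2.998e8               # speed of light in m/s
PROTON_KG = 1.673e-27     # proton mass in kg

energy_j = E_GEV * 1e9 * EV_TO_J        # ~2.0e-8 J
mass_kg = energy_j / C**2               # ~2.2e-25 kg
print(f"{energy_j:.2e} J, {mass_kg:.2e} kg, ~{mass_kg / PROTON_KG:.0f} proton masses")
```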
As Professor Heuer and I talk, extremely distant history has just been re-enacted: on 4 July 2012, CERN announced the detection by the LHC of a particle that is ‘consistent with the Higgs boson’, last created about 10⁻¹² seconds after the Big Bang. This is momentous news. “We’ve been looking for this particle for 40 years. I’m not sure if I’ve digested the news yet, but I think this might be one of the biggest discoveries of recent decades,” he tells me.
If the newly detected particle is indeed the Higgs boson, this discovery will validate the standard model of particle physics. The standard model (figure 1) describes the fundamental particles from which we and every visible thing in the Universe are made, and the forces acting between them. And as Professor Heuer explains, “The Higgs boson was the missing cornerstone of the standard model.”
The discovery would also explain why particles – and thus matter – have mass. The search for the Higgs boson began in the 1960s, when a group of physicists, including Peter Higgs, postulated what is now known as the Higgs field. Immediately after the Big Bang, they believed, particles had no mass but rapidly acquired it by interacting with this field; the more the particles interacted with the Higgs field, the more massive they became.
“Imagine that the Higgs field is a party of journalists, equally distributed in the room,” says Professor Heuer. “I can pass through the room mass-less – with the velocity of light – because they don’t know me. If someone well known enters, the journalists cluster around that person: the person’s velocity is limited and he or she acquires mass. The better known that person is, the more the journalists cluster around, and the more massive that person becomes. This is how a particle acquires mass from the Higgs field.” See figure 2.
But where does the Higgs boson come in? By definition, bosons are particles with an internal angular momentum – known as spin – corresponding to an integer multiple of the Planck constant (e.g. 0, 1 or 2). Some bosons are force particles, through which matter particles interact with each other. For example, a photon is a boson that carries the electromagnetic force; a graviton is a boson that carries the gravitational force. The Higgs boson, however, is postulated to be different: it is the result of the Higgs field interacting with itself (figure 2). “Suppose I open the door to the journalists’ party and whisper a rumour into the room. The journalists will be curious – ‘what did he say?’ That’s the journalists interacting among themselves – or the self-interaction of the Higgs field: that is a Higgs boson.”
The only problem with the Higgs boson was that nobody knew if it actually existed. Over the years, larger and larger particle accelerators have been built to look for it, capable of colliding particles with higher and higher energies. This enabled physicists to create more and more massive particles, but there was still no sign of the Higgs boson. Did it not exist after all, or did it just require a still more powerful accelerator to detect it? The new particle may well have answered that question.
So how did the CERN scientists actually detect this new particle?
The signal they were looking for was the decay of the Higgs boson. However, the scientists needed to be able to distinguish the decay pattern of the Higgs boson from the decay signals of the many, many other particles created in the LHC. As Professor Heuer jokes, “It’s like looking for one type of snowflake by taking photos of it against a background of a snowstorm. Very difficult.”
One promising signal to look for was the decay of the Higgs boson into two photons, specifically two high-energy photons. When two photons are detected originating from the same spot, they may be the result of a Higgs boson decaying. On the other hand, they may be part of the background noise of other particle collisions and decays occurring in the LHC. So how do the scientists distinguish the two?
The answer is that they cannot, in any one case, tell whether the photons originate from a Higgs boson or from the decay of some other particle, but they can use statistical analysis to test whether the number of decays detected is what they would expect. For this, they construct a null hypothesis – in this case, that the Higgs particle does not exist – and predict what they would find if the null hypothesis were true. If more decays were detected than expected, this would indicate the existence of the Higgs boson.
This was precisely what two of the LHC experiments, ATLAS and CMS, found in July 2011: above the smooth curve of the expected results, there was a deviation representing more decays than expected. Importantly, both experiments found this deviation at the same point – representing the decay of particles with a mass of 126 GeV – and the deviation had the same magnitude – representing the same number of ‘extra’ decays in the accelerator. The question was, were these deviations statistically significant? For new discoveries in particle physics, the bar for statistical significance is set very high (see web reference w1): at five sigma, which represents about a one in 3.5 million chance of detecting more decays than expected by chance alone, even if the null hypothesis were true.
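The ‘one in 3.5 million’ figure is the one-sided tail probability of a five-standard-deviation upward fluctuation of a Gaussian, which is the convention used for discovery claims in particle physics. A quick check of that number (SciPy is used purely for illustration):

```python
from scipy.stats import norm

# One-sided tail probability of a 5-sigma upward fluctuation under the null hypothesis.
p = norm.sf(5.0)                    # survival function = 1 - CDF
print(f"p-value: {p:.2e}")          # ~2.9e-7
print(f"about 1 in {1 / p:,.0f}")   # roughly 1 in 3.5 million
```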
The initial data in July 2011 certainly looked promising, but offered nowhere near this level of certainty. Over the following year, however, the two LHC experiments gathered more and more data, all pointing in the same direction: there were more two-photon events with a mass of 126 GeV than would be expected if there were no Higgs boson. Finally, on 4 July 2012, the five-sigma threshold was crossed and the CERN scientists were confident enough to announce to the world that they had indeed detected ‘a particle consistent with the Higgs boson’.
For most of us, despite previous tentative statements from CERN, this announcement came quite unexpectedly. For Professor Heuer, in contrast, the excitement had been building up over months, but the step-by-step revelation of the discovery did not reduce its impact one bit. “The discovery is the most exciting moment of my career, because we are writing a little bit of history.”
So how much do we actually know about this new particle?
“We know it’s a new particle and we know it’s a boson. It’s the heaviest boson ever found, and it looks like the Higgs boson. However, scientists can be very cautious. As a layman, I would say ‘we’ve found the Higgs boson’. As a scientist, I have to ask ‘what have we found?’”
The next step, therefore, is to measure the properties of this particle, including its spin. All previously known bosons are particles with spin 1, for example photons. They are associated with vector fields: the electromagnetic field, for instance, is a vector field that has both a direction and a strength. As a result, the photon is moved in a particular direction: it has spin. The Higgs boson, however, is postulated to be different – it is associated with a scalar field, the Higgs field, and that means it has spin 0.
“If you swim in a river, the force that the water exerts on you will depend on which direction you swim in. That would be a vector field. If, in contrast, you are in a swimming pool, the force that the water exerts on you will be the same whichever direction you swim in. That’s a scalar field.”
It’s also important to measure the new particle’s mass more accurately. “Instead of being the Higgs boson, it could be a Higgs boson. The standard model predicts only one, but supersymmetry – an extension of the standard model (as explained in Landua & Rau, 2008) – predicts at least five. And the lowest-mass Higgs boson predicted by supersymmetry is very similar in mass to that predicted by the standard model.
“That makes it difficult to distinguish the two; we need more measurements.” To this end, the LHC will collect data from as many collision events as possible before February 2013, when it will be closed until the end of 2014 to refit it for still higher energy collisions, enabling it to create and detect even heavier particles.
So if the measurements over the next few months show the new particle to be the (or a) Higgs boson, then the standard model would be validated, proving the existence of the Higgs field, and thus confirming the mechanism by which particles acquire mass. But what if the newly discovered boson turns out not to be a Higgs boson? “If it’s slightly different to what we expected, it could introduce physics beyond the standard model.”
Whatever the outcome of the measurements on the new particle, once the LHC is reopened, it will turn its focus beyond the standard model, which describes only the visible Universe – thought to be no more than 4-5% of the total energy balance of the Universe. As Professor Heuer points out, “The standard model leaves many questions open. For example, it doesn’t tell us what happened to the antimatter that existed at the beginning of the Universe, nor does it tell us in how many space or time dimensions we are living. And it casts not the faintest light on what dark matter or dark energy is.”
Of the dark 95% of the Universe that is not addressed by the standard model, 25% is thought to be dark matter. “When we compare that to the 5% that comprises the visible Universe, it’s obvious that dark matter must have played a dominant role in shaping the early Universe. Astronomers can tell us how it has shaped the Universe, but only particle accelerators are likely to be able to produce dark matter in the laboratory and help us understand exactly what it is. Is dark matter composed of a single kind of particle or is it rich and varied like the normal world?” One potential answer involves supersymmetry, and after refurbishment, the LHC will be powerful enough to create and detect some of the very massive particles that supersymmetry would predict.
The other three quarters of the dark Universe is dark energy, thought to drive the Universe apart. Professor Heuer believes that the LHC and its investigation of the Higgs boson could be important here too.
“The Higgs field is scalar, as is dark energy. They are not the same, but studying the Higgs field might tell us a lot about dark energy.”
In short, “So far, we know very little about dark matter and we know essentially nothing about dark energy, but I think that, with the LHC, we are about to enter the dark Universe.”
Throughout our interview, it’s been obvious just how much Professor Heuer relishes bringing physics alive for non-specialists. He’s clearly very good at it, too: “I gave a public lecture in the Royal Society in London in which I presented the LHC, the science around it and the fascination of the dark Universe. A day later, I received an email from a 14-year-old boy who wrote that he was doing very well in maths and physics and that he wanted to start work at CERN in 2018.”
The difficulty, as Professor Heuer acknowledges, is not rousing young people’s enthusiasm for science, but maintaining it. To this end, he emphasises the importance of “explaining new developments and important topics in science, for example using Science in School”. Professor Heuer is clearly a fan.
As we close our interview, I ask if Professor Heuer has any further advice for our readers. “Enthusing students within the current school curricula can be very difficult – if you start with 19th-century mechanics, you will lose 99% of them immediately. Introducing modern science, however, can really help.” Fortunately, he is convinced that a lot can be explained without mathematics.
“For example, I explained the Higgs mechanism by talking about journalists. Of course, to fully understand it and explain it, your students would need to use mathematics, but they can always look that up. What they need to understand is the logic.”
References
- Landua, R (2008) The LHC: a look inside. Science in School 10: 34-45.
- Landua R, Rau M (2008) The LHC – a step closer to the Big Bang. Science in School 10: 26-33.
Web References
- w1 – To find out more about the statistical analysis, see ‘5 sigma – what’s that?’ in the Scientific American blog.
Resources
- To learn more about the LHC shut-down, planned for 2013-14, see:
- Brewster S (2012) Scientists already planning for LHC long shutdown. Symmetry September 2012.
- To learn more about the research leading to the discovery of the new particle, see:
- Baggott J (2012) Higgs: The invention and discovery of the ‘God Particle’. Oxford, UK: Oxford University Press. ISBN: 9780199603497
- For an explanation of the LHC in layperson’s terms, see:
- Ginter P, Franzobel, Heuer RD (2011) LHC: Large Hadron Collider. Paris, France: UNESCO. ISBN: 9783901753282
- Watch Rolf Heuer’s lecture ‘The search for a deeper understanding of our Universe at the Large Hadron Collider: the world’s largest particle accelerator’, given at the Euroscience Open Forum in Dublin, on 14 July 2012, just 10 days after the announcement that a boson compatible with the Higgs boson had been discovered at CERN.
- To learn more about the next generation of particle accelerators that will study the Higgs boson in more detail, see:
- Chalmers M (2012) After the Higgs: the new particle landscape. Nature 488: 572-575. doi: 10.1038/488572a
- Read two interviews with scientists working on dark matter and dark energy:
- Boffin H (2008) “Intelligence is of secondary importance in research”. Science in School 10: 14-19.
- Warmbein B (2007) Making dark matter a little brighter. Science in School 5: 78-80. | https://www.scienceinschool.org/article/2012/heuer/ |
By Sachin Nayak
“The first thing that would enter the minds of most of you, the ‘IIT Junta’, when you think of the Higgs Boson would be ‘The God Particle’. The more well-informed of you would think of it as the missing link in our description of the universe. Each one of you might have a different idea about the Higgs Boson, yet every one of those ideas would carry some element of mystery.” On 5th September 2012, Dr. Rohini Godbole, in her lecture on the Higgs Boson, cleared up most of the common misconceptions about the so-called ‘God Particle’.
She started her talk by stating the question that has lingered in the minds of physicists since the days of yore: “What is at the heart of matter?” With scientific advancement, the answer to this question has undergone a rapid transformation, but the question itself has remained the same. We have moved from the theory of Earth, Fire, Wind and Water as the ‘Elements’ to the theory of infinitesimal particles such as leptons and quarks constituting the entire universe. Yet we still do not have a conclusive answer to this question.
Then, she gave a brief overview of ‘The Standard Model’, the most widely accepted model of the universe at present. The Standard Model (SM) has two components, the ‘bricks’ and the ‘mortar’. The bricks are the leptons and the quarks, which form the bulk of the universe. The mortar is the set of force particles – the gluons, the W bosons and so on – which binds these particles into the forms in which we know them. She showed the audience how elaborate and complex the SM is by noting that, of the 72 Nobel prizes in Physics awarded from 1936 to 2008, 15 were given for contributions to the SM. We now even have a periodic table of the fundamental particles. In this section of her talk, she also showed that the SM is relevant not only to a small group of theoretical physicists but to the whole scientific community, since the laws of particle physics, which act at distances of about 10^(-15) metres, have a great impact on events occurring over cosmological times and astronomical distances. Thus, in a short time, Dr. Rohini managed to give a holistic view of the SM.
In the next section of her talk, she explained how a symmetry law in physics called ‘Gauge Invariance’ pointed to the existence of the Higgs boson. When Gauge Invariance is applied to Maxwell’s equations, we get the result that the mass of the photon is zero. This fact is used in Fermi’s explanation of β-decay. But that explanation invokes the mathematician’s ire, as it leads to absurd conclusions, such as the calculated probability of two high-energy particles interacting being greater than one. This problem was addressed by Schwinger, who proposed that high-energy particles collide through a different mechanism, in which they exchange a W+ boson. Unfortunately, his solution implied that the probability of a neutrino and an anti-neutrino interacting could also exceed one. Glashow argued that this problem could only be solved by abandoning Gauge Invariance. So the theoretical physicists had a tricky problem at hand: Gauge Invariance pointed to the existence of the Z, W+ and W- bosons, whose very existence raised doubts about the truth of Gauge Invariance itself. The chain of ideas had knotted around itself.
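The ‘probability greater than one’ problem is the breakdown of unitarity in Fermi’s point-interaction theory. As a rough, textbook-style illustration (not part of the lecture itself): the Fermi cross-section grows with the collision energy, while unitarity caps how large it may be,

\[ \sigma_{\text{Fermi}} \sim G_F^2\, s \quad (\text{grows with energy}), \qquad \sigma_{\text{unitarity}} \lesssim \frac{\text{const}}{s}, \]

and the two estimates clash at \( \sqrt{s} \sim G_F^{-1/2} \approx 300\ \text{GeV} \). This is roughly the energy scale at which a mediating W boson – and ultimately the Higgs mechanism – becomes unavoidable.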
This is when physicists like Higgs and Englert came into the picture. A handful of theoretical physicists independently showed how these massive bosons could exist without violating Gauge Invariance. Though each of them solved the problem independently, every solution predicted the existence of a particle that gives mass to every other particle in the universe. This might give the impression that theoretical physicists are so clever they can read one another’s minds telepathically. Higgs certainly did not have that ability, but his solution was more accurate than the others, and so the mass-giving particle was named in his honour. Finally, Abdus Salam showed how the Higgs mechanism could be extended to photons and other particles.
After telling us why theoretical physicists believed that the Higgs Boson existed, Dr. Rohini proceeded to explain the significance of the result announced on 4th July. First, she showed us, with the help of energy diagrams, just how elusive the Higgs Boson is. In her own words, ‘Only 1 in 10^13 experiments can lead to the conclusive proof of the existence of the Higgs Boson’. No wonder it is called the ‘God Particle’. She continued by saying that the key events recorded at the LHC were collisions of two high-energy protons that gave out two photons at a large angle to each other. This indicated that a massive particle had been formed and had then decayed. The particle had the same characteristics as those predicted for the Higgs Boson by theory, and it even appeared in the energy range predicted by theory, i.e. 100 GeV – 150 GeV. In her own words, “It was like a dream come true for us, the theoretical physicists.”
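To make the ‘two photons at a large angle’ argument concrete: for two (effectively massless) photons with energies E1 and E2 separated by an angle θ, the invariant mass of the parent particle is given by the standard relation

\[ m_{\gamma\gamma} = \sqrt{2\,E_1 E_2\,\bigl(1 - \cos\theta_{12}\bigr)}, \]

a textbook formula included here for illustration rather than quoted from the talk. A clustering of events with this reconstructed mass near 125 GeV – comfortably inside the 100–150 GeV window mentioned above – is what signalled the new particle.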
Still, she believes that scientists do not yet have enough data to formally conclude that the Higgs Boson exists. A German exchange student asked Dr. Rohini whether the particle observed in the p-p collisions could be an oscillation of a ‘Superstring’. She replied in a lighter vein, “It flies like a plane, looks like a bird, so it might be a Superstring”. In light of the recent episode in which neutrinos apparently travelling faster than light turned out to be nothing more than a technical glitch, a member of the audience asked her how physicists were convinced of the veracity of this experiment. She replied that although the neutrinos thought to travel faster than light were produced at CERN, CERN was not the organisation that conducted the embarrassing measurement. She added that the reports had been read and re-read by at least 2000 people, and that the experiments at the LHC took place only after more than 30 years of meticulous planning.
Even if the experiment on the 4th of July provides conclusive proof of the existence of the Higgs Boson, it will only fill in the missing link in the chain of the SM. It will definitely not render theoretical physics redundant (or lead to the unemployment of theoretical physicists like Dr. Rohini), because physicists would then have understood only about 4% of the content of the universe. They would still have to probe the darkness of dark energy and dark matter, which constitute the remaining 96% of the universe, and into ‘Beyond Standard Model’ theories such as string theory, which holds that the whole universe is made up of oscillating strings. Referring to the discovery of the Higgs Boson as just the beginning of a new era in theoretical physics, Dr. Rohini said, “The king is dead; long live the king.”
Dr. Rohini Godbole is one of India’s prominent theoretical physicists. She completed her MSc at IIT Bombay in 1974, winning the institute silver medal for her performance. Following a PhD from SUNY Stony Brook, USA, she worked at TIFR and the University of Bombay before joining IISc as a faculty member; she is at present a professor in the Centre for High Energy Physics there. Dr. Rohini has made fundamental contributions to high-energy physics, especially to the ‘Standard Model’ and the ‘Beyond Standard Model’, and has authored over 200 papers in these areas. Many times she has been invited to the LHC to work, deliver lectures and offer theoretical formulations of the universe. She has been elected to numerous academies and has won many international and national awards and fellowships. She is also the editor of the journal ‘Pramana’.
The 1964 PRL symmetry breaking papers were written by three independent teams – Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, Carl Hagen and Tom Kibble – who are credited with the theory of the Higgs mechanism and the prediction of the Higgs field and Higgs boson. Together, these provide a theoretical means by which Goldstone's theorem (a problematic limitation affecting early modern particle physics theories) can be avoided. They show how gauge bosons can acquire non-zero masses as a result of spontaneous symmetry breaking within gauge invariant models of the universe.
As such, these form the key element of the electroweak theory that forms part of the Standard Model of particle physics, and of many models, such as the Grand Unified Theory, that go beyond it. The papers that introduce this mechanism were published in Physical Review Letters (PRL) and were each recognized as milestone papers by PRL's 50th anniversary celebration. All of the six physicists were awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work, and in 2013 Englert and Higgs received the Nobel Prize in Physics.
On 4 July 2012, the two main experiments at the LHC (ATLAS and CMS) both independently confirmed the existence of a previously unknown particle with a mass of about 125 GeV/c² (about 133 proton masses, on the order of 10⁻²⁵ kg), which is "consistent with the Higgs boson" and widely believed to be the Higgs boson.
Introduction
A gauge theory of elementary particles is a very attractive potential framework for constructing the ultimate theory. Such a theory has the very desirable property of being potentially renormalizable – shorthand for saying that all calculational infinities encountered can be consistently absorbed into a few parameters of the theory. However, as soon as one gives mass to the gauge fields, renormalizability is lost, and the theory is rendered useless. Spontaneous symmetry breaking is a promising mechanism that could be used to give mass to the vector gauge particles. A significant difficulty one encounters, however, is Goldstone's theorem, which states that in any quantum field theory with a spontaneously broken symmetry there must occur a zero-mass particle. So the problem arises: how can one break a symmetry and at the same time not introduce unwanted zero-mass particles? The resolution of this dilemma lies in the observation that, in the case of gauge theories, Goldstone's theorem can be avoided by working in the so-called radiation gauge. This is because the proof of Goldstone's theorem requires manifest Lorentz covariance, a property not possessed by the radiation gauge.
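For readers who want to see what spontaneous symmetry breaking looks like in formulas, the standard textbook example (a generic sketch, not reproduced from the 1964 papers themselves) uses a complex scalar field with a ‘Mexican hat’ potential:

\[ V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad \mu^2 < 0,\ \lambda > 0, \]

\[ \langle\phi\rangle = \frac{v}{\sqrt{2}}, \qquad v = \sqrt{\frac{-\mu^2}{\lambda}}. \]

The symmetric point φ = 0 is unstable, so the field settles at a non-zero vacuum value v; when the broken symmetry is a gauge symmetry, it is this vacuum value that the gauge fields feel as mass.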
History
Particle physicists study matter made from fundamental particles whose interactions are mediated by exchange particles known as force carriers. At the beginning of the 1960s a number of these particles had been discovered or proposed, along with theories suggesting how they relate to each other, some of which had already been reformulated as field theories in which the objects of study are not particles and forces, but quantum fields and their symmetries. However, attempts to unify known fundamental forces such as the electromagnetic force and the weak nuclear force were known to be incomplete. One known omission was that gauge invariant approaches, including non-abelian models such as Yang–Mills theory (1954), which held great promise for unified theories, also seemed to predict known massive particles as massless. Goldstone's theorem, relating to continuous symmetries within some theories, also appeared to rule out many obvious solutions, since it appeared to show that zero-mass particles would have to also exist that were "simply not seen". According to Guralnik, physicists had "no understanding" how these problems could be overcome in 1964. In 2014, Guralnik and Hagen wrote a paper that contended that even after 50 years there is still widespread misunderstanding, by physicists and the Nobel Committee, of the Goldstone boson role. This paper, published in Modern Physics Letters A, turned out to be Guralnik's last published work.
Particle physicist and mathematician Peter Woit summarised the state of research at the time: "Yang and Mills' work on non-abelian gauge theory had one huge problem: in perturbation theory it has massless particles which don’t correspond to anything we see. One way of getting rid of this problem is now fairly well-understood, the phenomenon of confinement realized in QCD, where the strong interactions get rid of the massless “gluon” states at long distances. By the very early sixties, people had begun to understand another source of massless particles: spontaneous symmetry breaking of a continuous symmetry. What Philip Anderson realized and worked out in the summer of 1962 was that, when you have both gauge symmetry and spontaneous symmetry breaking, the Nambu–Goldstone massless mode can combine with the massless gauge field modes to produce a physical massive vector field. This is what happens in superconductivity, a subject about which Anderson was (and is) one of the leading experts." [text condensed]
The Higgs mechanism is a process by which vector bosons can get rest mass without explicitly breaking gauge invariance, as a byproduct of spontaneous symmetry breaking. The mathematical theory behind spontaneous symmetry breaking was initially conceived and published within particle physics by Yoichiro Nambu in 1960, the concept that such a mechanism could offer a possible solution for the "mass problem" was originally suggested in 1962 by Philip Anderson, and Abraham Klein and Benjamin Lee showed in March 1964 that Goldstone's theorem could be avoided this way in at least some non-relativistic cases and speculated it might be possible in truly relativistic cases.
These approaches were quickly developed into a full relativistic model, independently and almost simultaneously, by three groups of physicists: by François Englert and Robert Brout in August 1964; by Peter Higgs in October 1964; and by Gerald Guralnik, Carl Hagen, and Tom Kibble (GHK) in November 1964. Higgs also wrote a response published in September 1964 to an objection by Gilbert, which showed that if calculating within the radiation gauge, Goldstone's theorem and Gilbert's objection would become inapplicable. (Higgs later described Gilbert's objection as prompting his own paper.) Properties of the model were further considered by Guralnik in 1965, by Higgs in 1966, by Kibble in 1967, and further by GHK in 1967. The original three 1964 papers showed that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry, the gauge bosons can consistently acquire a finite mass. In 1967, Steven Weinberg and Abdus Salam independently showed how a Higgs mechanism could be used to break the electroweak symmetry of Sheldon Glashow's unified model for the weak and electromagnetic interactions (itself an extension of work by Schwinger), forming what became the Standard Model of particle physics. Weinberg was the first to observe that this would also provide mass terms for the fermions.
However, the seminal papers on spontaneous breaking of gauge symmetries were at first largely ignored, because it was widely believed that the (non-Abelian gauge) theories in question were a dead-end, and in particular that they could not be renormalised. In 1971–72, Martinus Veltman and Gerard 't Hooft proved renormalisation of Yang–Mills was possible in two papers covering massless, and then massive, fields. Their contribution, and others' work on the renormalization group, was eventually "enormously profound and influential", but even with all key elements of the eventual theory published there was still almost no wider interest. For example, Sidney Coleman found in a study that "essentially no-one paid any attention" to Weinberg's paper prior to 1971 – now the most cited in particle physics – and even in 1970 according to Politzer, Glashow's teaching of the weak interaction contained no mention of Weinberg's, Salam's, or Glashow's own work. In practice, Politzer states, almost everyone learned of the theory due to physicist Benjamin Lee, who combined the work of Veltman and 't Hooft with insights by others, and popularised the completed theory. In this way, from 1971, interest and acceptance "exploded" and the ideas were quickly absorbed in the mainstream.
The significance of requiring manifest covariance
Most students who have taken a course in electromagnetism have encountered the Coulomb potential. It basically states that two charged particles attract or repel each other by a force which varies according to the inverse square of their separation. This is fairly unambiguous for particles at rest, but if one or the other is following an arbitrary trajectory the question arises whether one should compute the force using the instantaneous positions of the particles or the so-called retarded positions. The latter recognizes that information cannot propagate instantaneously, rather it propagates at the speed of light. However, the radiation gauge says that one uses the instantaneous positions of the particles, but doesn't violate causality because there are compensating terms in the force equation. In contrast, the Lorenz gauge imposes manifest covariance (and thus causality) at all stages of a calculation. Predictions of observable quantities are identical in the two gauges, but the radiation gauge formulation of quantum field theory avoids Goldstone's theorem.
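For concreteness, the inverse-square law referred to above is (in SI units)

\[ F = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}, \qquad V(r) = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r}. \]

In the radiation (Coulomb) gauge the scalar potential takes this instantaneous form, whereas in the Lorenz gauge the potentials are built from retarded positions; the two gauges give identical physical predictions, but only the Lorenz gauge keeps Lorentz covariance manifest at every step of the calculation.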
Summary and impact of the PRL papers
The three papers written in 1964 were each recognised as milestone papers during Physical Review Letters's 50th anniversary celebration. Their six authors were also awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. (A controversy also arose the same year, because in the event of a Nobel Prize only up to three scientists could be recognised, with six being credited for the papers.) Two of the three PRL papers (by Higgs and by GHK) contained equations for the hypothetical field that eventually would become known as the Higgs field and its hypothetical quantum, the Higgs boson. Higgs's subsequent 1966 paper showed the decay mechanism of the boson; only a massive boson can decay and the decays can prove the mechanism.
Each of these papers is unique and demonstrates a different approach to showing how mass arises in gauge particles. Over the years, the differences between these papers have become less widely appreciated, as the particle physics community has absorbed their end results. A study of citation indices is interesting – more than 40 years after the 1964 publication in Physical Review Letters there is little noticeable pattern of preference among them, with the vast majority of researchers in the field citing all three milestone papers.
In the paper by Higgs the boson is massive, and in a closing sentence Higgs writes that "an essential feature" of the theory "is the prediction of incomplete multiplets of scalar and vector bosons". (Frank Close comments that 1960s gauge theorists were focused on the problem of massless vector bosons, and the implied existence of a massive scalar boson was not seen as important; only Higgs directly addressed it.) In the paper by GHK the boson is massless and decoupled from the massive states. In reviews dated 2009 and 2011, Guralnik states that in the GHK model the boson is massless only in a lowest-order approximation, but it is not subject to any constraint and acquires mass at higher orders, and adds that the GHK paper was the only one to show that there are no massless Goldstone bosons in the model and to give a complete analysis of the general Higgs mechanism. All three reached similar conclusions, despite their very different approaches: Higgs' paper essentially used classical techniques, Englert and Brout's involved calculating vacuum polarization in perturbation theory around an assumed symmetry-breaking vacuum state, and GHK used operator formalism and conservation laws to explore in depth the ways in which Goldstone's theorem explicitly fails.
In addition to explaining how mass is acquired by vector bosons, the Higgs mechanism also predicts the ratio between the W boson and Z boson masses as well as their couplings with each other and with the Standard Model quarks and leptons. Subsequently, many of these predictions have been verified by precise measurements performed at the LEP and the SLC colliders, thus overwhelmingly confirming that some kind of Higgs mechanism does take place in nature, but the exact manner by which it happens has not yet been discovered. The results of searching for the Higgs boson are expected to provide evidence about how this is realized in nature.
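The mass ratio mentioned above is fixed, at tree level, by the weak mixing angle. A minimal textbook sketch (with v ≈ 246 GeV the Higgs vacuum expectation value and g, g′ the SU(2) and U(1) couplings – standard notation, not drawn from the papers discussed here):

\[ m_W = \tfrac{1}{2}\, g v, \qquad m_Z = \tfrac{1}{2}\, v \sqrt{g^2 + g'^2}, \qquad \frac{m_W}{m_Z} = \cos\theta_W. \]

Relations of this kind are exactly what the precision measurements at LEP and the SLC tested.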
Consequences of the papers
The resulting electroweak theory and Standard Model have correctly predicted (among other discoveries) weak neutral currents, three bosons, the top and charm quarks, and with great precision, the mass and other properties of some of these. Many of those involved eventually won Nobel Prizes or other renowned awards. A 1974 paper in Reviews of Modern Physics commented that "while no one doubted the [mathematical] correctness of these arguments, no one quite believed that nature was diabolically clever enough to take advantage of them". By 1986 and again in the 1990s it became possible to write that understanding and proving the Higgs sector of the Standard Model was "the central problem today in particle physics." | https://alchetron.com/1964-PRL-symmetry-breaking-papers |
After much excitement, the Force has not been found. But don't be sad, my fellow nerds. Scientists may not have found evidence of the Higgs boson yet, but they have discovered "tantalizing hints" that may indicate its presence.
According to CERN, "these are not yet strong enough to claim a discovery" but they have been recorded. They have discovered that the elusive particle is "most likely to have a mass constrained to the range 116-130 GeV." This mass region shows "unexplainable excesses on these decay channels" that may be caused by the Higgs boson. However, there is not enough data to confirm this.
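For readers wondering what "strong enough to claim a discovery" means in practice: particle physicists conventionally require a five-sigma excess, i.e. roughly a 1-in-3.5-million chance that background fluctuations alone produced the bump. A minimal Python sketch (my own illustration, not CERN's analysis code) converts a significance in sigmas into that probability:

    # Convert a significance (in standard deviations) into the one-sided
    # probability that a background-only fluctuation produces the excess.
    from scipy.stats import norm

    def p_value(sigma):
        # Survival function = 1 - CDF of the standard normal distribution
        return norm.sf(sigma)

    print(p_value(3))  # ~1.3e-3: the level usually described as "evidence"
    print(p_value(5))  # ~2.9e-7: the conventional "discovery" threshold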
The Higgs boson is a hypothetical massive elementary particle that should exist according to the Standard Model. In theory, this particle is everywhere, permeating all of reality. The existence of this particle would be part of the answer to a very important question: why do particles have mass?
When is the proof coming now?
The scientists at the Large Hadron Collider now say that they will have a definitive answer on the existence or non-existence of the Higgs boson later in 2012. They will be gathering new data during these months using the ATLAS and CMS experiments:
As of today what we see is consistent either with a background fluctuation or with the presence of the boson. Refined analyses and additional data delivered in 2012 by this magnificent machine will definitely give an answer.
What happens if they don't find the Higgs boson?
But what happens if they find proof of its non-existence? Would the universe disappear in a poof of smoke and confetti? Fortunately, scientists are kind of like Groucho Marx: if they can't prove a theory, they have others:
A non-Standard Model Higgs, currently beyond the reach of the LHC experiments with data so far recorded, would immediately open the door to new physics, whereas the absence of a Standard Model Higgs would point strongly to new physics at the LHC's full design energy, set to be achieved after 2014. | https://gizmodo.com/scientists-still-chasing-the-force-getting-closer-5867620 |
Theoretical particle physics isn’t the usual stuff of community newspapers.
But because last week’s discovery of the Higgs boson — aka “the god particle” — goes a long way to explaining how everything in the universe came to be, a localized story might not be too far a stretch.
The Progress has also learned that a graduate from the University of the Fraser Valley worked on a team looking for the elusive particle by smashing protons together at the Large Hadron Collider in Geneva, Switzerland.
Jennifer Godfrey could not be reached Monday for an interview, but Tim Cooper, a theoretical nuclear physicist at UFV explained what a Higgs boson is, and what it means to you and me.
“To be honest, not that much at the moment,” he said.
But to scientists, he said, it is a huge step toward confirming “the standard model” of how the universe formed after the Big Bang.
Unfortunately, in mathematical terms, the standard model worked only if the particles had no mass.
“But we know particles do have mass,” Cooper said.
So, in the 1960s, a physicist named Peter Higgs postulated a field that interacted with other known fields – the electron field that gives rise to electrons; the electromagnetic field that gives rise to photons – which would give those particles the mass we see in the universe around us.
Scientists have been looking for the “god particle” this Higgs field would generate ever since.
Cooper said the problem has been the very high energy needed to shatter protons in a collider to produce the Higgs particle, and the very short lifespan of the particle (if it existed), which would make it that much more difficult to detect.
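To give a sense of just how short-lived the particle is: for a Standard Model Higgs near 125 GeV, the predicted total width is of order a few MeV, which translates via the uncertainty relation into a lifetime of roughly

\[ \tau = \frac{\hbar}{\Gamma} \approx \frac{6.6\times10^{-25}\ \text{GeV·s}}{4\times10^{-3}\ \text{GeV}} \approx 1.6\times10^{-22}\ \text{s} \]

(approximate figures added here for illustration) – far too short for the particle itself to reach a detector, so only its decay products can be observed.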
“Now we know exactly where to look … it’s going to be much easier,” he said, for physicists to duplicate the discovery and confirm the existence of the Higgs boson.
After that, who knows what future discoveries may be made?
Dr. Carin Bondar, a molecular biologist in Chilliwack who writes an Internet blog on things scientific, agreed the Higgs discovery “isn’t going to mean much on a day-to-day level.”
“But to scientists, it’s so massive, it’s the last piece of the puzzle. If (the Higgs) wasn’t there … we would need to rewrite everything we know about physics,” she said.
However, the discovery may also open doors to things yet unimagined, she agreed, like the discovery of the electron that has led to every electronic device we see in use today.
“It’s mind-blowing,” she said, about the possibilities. | https://www.bclocalnews.com/news/higgs-boson-discovery-mind-blowing-says-chilliwack-scientist/ |
'God Particle' Discovered, Say CERN Scientists
Scientists announced Thursday that they believe a subatomic particle discovered last year is indeed the Higgs boson, or "God particle," they have been searching for.
"The preliminary results with the full 2012 data set are magnificent and to me it is clear that we are dealing with a Higgs boson though we still have a long way to go to know what kind of Higgs boson it is," said Joe Incandela, spokesperson for the CMS physics team, in a statement.
The findings were announced during the Moriond Conference, a gathering of physicists in the Italian Alps.
CMS and ATLAS, two teams at the European Organization for Nuclear Research, have analyzed two and a half times the data that was available back in July when an announcement about the possible discovery was made. Researchers say the particle fits the characteristics of Higgs bosons in terms of both its quantum properties and the way it interacts with other particles.
What is unclear, however, is whether it is the Higgs boson of the Standard Model, which explains how matter's basic building blocks interact, or just one of several bosons predicted by other theories. To determine whether or not it is the Standard Model particle, scientists must measure the rate at which the boson decays into other particles.
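Those decay rates are usually quoted as branching ratios. For a 125 GeV Standard Model Higgs, the approximate predictions are (rounded values from standard references, given here for context rather than taken from the CERN statement):

\[ H\to b\bar{b} \approx 58\%, \quad WW^{*} \approx 21\%, \quad gg \approx 9\%, \quad \tau^{+}\tau^{-} \approx 6\%, \quad ZZ^{*} \approx 3\%, \quad \gamma\gamma \approx 0.2\%. \]

Checking how closely the measured rates track these numbers is how physicists will decide whether the new particle is the Standard Model Higgs or something more exotic.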
"The beautiful new results represent a huge effort by many dedicated people," Dave Charlton, ATLAS spokesperson, said. "They point to the new particle having the spin-parity of a Higgs boson as in the Standard Model. We are now well started on the measurement program in the Higgs sector."
The particle was discovered using CERN's Large Hadron Collider, a massive scientific instrument used to accelerate particle beams to close to the speed of light before colliding them into one another.
The Higgs boson was named after physicist Peter Higgs, who proposed the particle's existence in 1964. The particle, it is believed, plays a role in giving other particles their mass, and by doing so allows them to bind together to form atoms.
The University Dance Company debuted Karole Armitage's "Power Surge" Thursday night at the Lied Center and did justice to the Lawrence-grown choreographer's inventiveness.
Deanna Doyle and Beau Hancock were stunning in Armitage's romantic duet, which blended ballet, modern and pop dance and laid it against the repetitive, ebbing melody of Philip Glass' "Mishima String Quartet." Armitage, who choreographs for some of the world's top dance companies, created the work earlier this fall during a residency at KU.
Doyle was especially impressive. Confident and controlled on stage, she downplays athleticism in favor of graceful strength, unfaltering balance and fully extended lines.
Another highlight of the 90-minute concert was Patrick Suzeau's "Opus 2001," an intriguing work for 17 dancers that evokes images of combat and prisoner-of-war camps. Martial arts, modern dance and movements that seem unnatural to humans mix and build into a swarming mass that falls apart with the snap of a finger. Suzeau once again dips into his bottomless well of imagination to create a work that's both innovative and dynamic.
The rest of the concert included Jerel Hilding's classical ballet, "Allegro con Brio," a piece for 13 dancers with lovely solos, duets and ensemble sections, splendid leaps and a sense of elegance, and Willie Lenoir's "Oxford Suite, Part I," a dramatic work with challenging choreography that is mastered by its dancers.
University Dance Company will repeat the concert at 7:30 p.m. today.
Ballet Fashion Show
During the Amsterdam Fashion Week, sponsored by Mercedes Benz, I photographed a ballet fashion show for theModeCode. In this exhibition, a dance performance was combined with a film. The event took place on Saturday, January 24th in the Koepelkerk – Renaissance Amsterdam Hotel. It was a collaboration between The Nationale Ballet, designer David Laport and filmmaker Mathieu Gremillet.
Two ballet dancers performed in movements inspired by insects. I loved the contrast between the sophistication and elegance of ballet combined with the unique and sometimes awkward insect inspired movements. They showed how a butterfly transforms from a worm and how “Evil” wants to take this change down before the final stage.
The detailed choreography of the dancers was fascinating to follow in harmony with the fashion film in the background. The dancers controlled every part of their bodies in order to take the audience through the ‘story’ they told.
Youthful enthusiasm usually fades in time. Consider Michael Tilson Thomas. When he stepped in as a substitute for William Steinberg in the middle of a Boston Symphony concert during the 1969 season, I was a teenager who was just beginning to discover classical music. At first I was skeptical of the new conductor, but after hearing his sensuous recording of Debussy's Prélude to the Afternoon of a Faun I was hooked. I quickly purchased all of his available recordings, including The Rite of Spring, a work that was utterly incomprehensible to me until I heard this version. It's been over twenty years since then, and many recordings of the Rite have come and gone. As I put this disc in the player, I wondered whether my enthusiasm might have cooled. I need not have worried.
The Boston Symphony of 1972 was one spectacular instrument – not yet ravaged by years of drudgery under the hapless Seiji Ozawa – and Tilson Thomas played it like a true virtuoso. Not only is his the most colorful Rite on disc, it's by far the most rhythmically vital and thrilling. Tilson Thomas has a way of inflecting the rhythms that calls to mind a giant, but superbly disciplined jazz band, and they really cook in the 'Adoration of the Earth'.
In spite of the conductor's clear emphasis on rhythm and forward motion, he never misses a lovely turn of phrase or a beautiful melodic fragment. Listen, for example, to the tremendously lyrical phrasing in the 'Games of the Rival Tribes'. Nor have I yet heard a more thoroughly integrated interpretation. Each tempo and each phrase in Part One builds to the tremendous climax of the 'Dance of the Earth', while Part Two transports us from the mysterious and beautifully colored introduction to the hair-raising 'Sacrificial Dance.' And if this version of the 'Glorification of the Chosen Victim' doesn't send shivers up your spine, see your doctor immediately.
Gerald Fox (in the January/February 1993 issue of the American Record Guide) praises Benjamin Zander and Yoel Levi for adopting the tempo of Stravinsky's piano roll recording and zipping through the 'Sacrificial Dance' in just about four minutes. With all due respect to Mr. Fox, I find that pace far too frantic. Tilson Thomas takes a slightly more reasonable 4:35, and it sounds absolutely right. DG's analog recording is nothing short of spectacular, with each line of Stravinsky's complex score always clearly audible and a bass drum that will knock you out of your chair.
The King of the Stars is a short, five and one-half minute score for male chorus and orchestra. Although written at about the same time as the Rite, its harmonies look ahead to the Symphonies of Wind Instruments. Tilson Thomas effectively captures the hushed, sensuous mood of the piece.
Dutoit's Pétrouchka, while adequate, is far from outstanding. All too often, he completely misses the vibrant colors and beautiful melodies that are the hallmark of this score. Pacing is dull, and rhythms are flaccid. Pianist Tamás Vásáry provides the only brilliant moments. Indeed, the all-important piano part of this ballet has rarely been so well played (or given such prominence by the engineers). The sound is dry, artificial, and without warmth. Haitink and Boulez both find far more variety and vitality in this music.
Leonard Slatkin's latest release is a valiant attempt at a classical 'concept' album. In his notes, Richard Freed attempts to draw the parallels between the three works. As he tells us, the Haydn "evokes in dramatic terms the image suggested in the second verse of the Book of Genesis…" while "Ginastera's final work for orchestra was his response to a different version of the creation of the world, the story told in the Mayan 'Book of the Council'…" Stravinsky's primitivist ballet doesn't exactly fit into the 'concept' as Freed acknowledges: "The tale told in the Rite of Spring is not the creation of the world, but," he claims, "one that is not too far removed." What a bunch of hogwash! Musically, the Haydn is completely out of place in this context, though Slatkin's interpretation is superb. Where Stravinsky and Ginastera are raw, primitive, and unrefined, Haydn's 'Representation of Chaos' is a model of classical elegance and grace.
Ginastera's Popol Vuh is a harsh, violent, and unmelodic composition that is – in its scoring and rhythms – occasionally reminiscent of the Stravinsky. For example, the drumming at 1:42 into 'The Birth of the Earth' calls to mind the Rite's 'Ritual of the Rival Tribes', though it is neither as exciting nor inspired. Then again, Stravinsky isn't the only influence. The timpani in 'The Magic Ceremony of Indian Corn' are taken almost verbatim from the opening of Gershwin's Piano Concerto. A recurring wind chord (used extensively in both the opening and closing movements) evokes neither the everlasting night nor the mysterious creators imagined by the Mayas; it sounds precisely like a distant train whistle. The third section, 'Nature Wakes', with its colorful evocation of the twittering of birds, is by far the most effective portion of the work. If you like this sort of thing, Slatkin and the orchestra give the score a vibrant reading in its world première recording.
After the organic unity of Tilson Thomas's reading, Slatkin's Rite seems like an aimless succession of unrelated pieces. While Slatkin takes just over a minute longer than Tilson Thomas, his version sounds sluggish by comparison. The fire and excitement that permeates every bar of the Tilson Thomas is rarely heard here. While some listeners might favor this generally introspective approach, I find Slatkin dull and routine at best. RCA's sound is distant, over-reverberant, and lacking the crystal clarity of the DG.
The Nieuw Sinfonietta of Amsterdam was certainly nieuw to me, and this six year old bunch plays brilliantly. Their account of Apollo is the most appealing item on their disc, though the first violins tend to whine at times and conductor Markiz is in no hurry. Both Rattle and Lubbock deliver the goods in just over 30 minutes, but Markiz drags the process out for nearly 33 minutes. While his slow and stately tempos work well in some sections (such as the opening) other moments ('Calliope's Variation', to cite just one example) are clearly lacking energy and animation. By comparison, Rattle brings considerably more grace, charm, variety, and nuance to the music. He's got the further advantage of the rich and velvety City of Birmingham Symphony strings and a first rate Rite of Spring to fill out his recording. The two concertos on the Globe disc are given heavily-accented, dry, and charmless readings. While Stravinsky himself might have approved of Markiz's cold and rhythmic approach, I want more variety, lyricism, and joy. Globe's sound is boxy, claustrophobic, and oddly artificial.
Copyright © 1995, Thomas Godell. | http://www.classical.net/~music/recs/reviews/r/rca60993a.php |
John Neumeier’s A Midsummer Night’s Dream is wacky, whimsical and occasionally maddening, yet his depiction of a ballet sci-fi world is so dazzling and absorbing, and his story-telling so rich, that it seems almost churlish to find fault.
Having suffered through his turgid Lady of the Wilted Camellias and the relentlessly bleak Little Mermaid, I decided to give Neumeier’s Nijinsky a wide berth last season – which turned out to be a mistake, said fellow ballet enthusiasts. The learned Neumeier, a Renaissance man in the ballet world, invariably tackles the grandiose, and sometimes mauls the target beyond recognition. His 1977 vision for Dream, however – on display for two nights only at San Francisco’s War Memorial Opera House last week – remains gloriously tethered to tradition even as he explores the fringes of the Twilight Zone.
In this staging, boundaries often blur between fairy and human worlds. Minimalism reigns in the design: graceful Regency era costumes and loosely hanging silk backdrops for the court scenes; three portable trees against a night sky pinpointed by stars for the forest, with the androgynous fairies in silvery, shimmering unitards and tightfitting caps (an apparent homage to Frederick Ashton’s Monotones of a decade prior). When humans wander into this menacing world, their movement either grinds to a slow motion while the fairies dance at a normal tempo, or the fairies’ movement decelerates while the humans go about their business, oblivious to the presence of these Machiavellian sprites.
Neumeier created three distinct choreographic signatures: a gracious classical style with neoclassical flourishes for the court dancers; a spare, angular technique for the fairies; vaudevillian shtick for the Craftsmen. The daring score tacks on to Mendelssohn an ominous electronic drone by György Ligeti for the fairies, and hurdy-gurdy arrangements of popular themes, some from La Traviata, for the Craftsmen. This pastiche may sound more like a scenario for Cirque du Soleil than the serious Hamburg Ballet, but Neumeier makes us believe.
His decision to have the lead dancers double up in the roles of Theseus/Oberon, Hippolyta/Titania and Philostrate/Puck echoes that in Peter Brook's watershed 1970 staging of the play for the Royal Shakespeare Company. (In turn, Brook cites Jerome Robbins' Dances at a Gathering as a significant influence on his radical minimalist concept for his Dream.)
Thus, the fairies embody a side of the human characters’ personalities – a dark, mischievous side – and the conflicts between the fairy king and queen hint at unresolved issues between Theseus, Duke of Athens, and Hippolyta, his bride-to-be. Guest artist Alina Cojocaru (who recently decamped from the Royal Ballet to join the English National Ballet, setting the ballet world a-titter) and Hamburg Ballet stalwart Alexandre Riabko are perfectly matched: all creamy elegance in the aristocratic roles, stern and combative as monarchs of the fairy kingdom. Their transformation was so compelling that they looked like different dancers in the different roles. As did Konstantin Tselikov, who segued heroically between the courtly Philostrate, Theseus’ “Master of the Revels,” and the hell-raising Puck.
Neumeier creates an elaborate Prologue in which Hippolyta appears to harbor doubts about tying herself permanently to the playboy Theseus. We are also introduced to Demetrius and Lysander, both ardent suitors to the lovely Hermia, while the bespectacled Helena (portrayed with great comic timing and impressive technical skill by Carolina Agüero) hungrily pursues Demetrius. The characterizations are cleverly delineated in the choreography, but the Prologue drags on too long – so it is a relief when blissful Mendelssohn gives way to Ligeti’s alarming electronic hum, ushering in the malevolent fairies. Their movement is all geometric squiggles, stabbing bourrées, thrusting sideways leaps, lightning fast turns and twisting jumps.
Oberon climbs a tree to get a better view of the lovers’ quartet and map out his intrigue. Those misshapen trees, wheeled around stage by fairies, look like dime-store mylar Christmas trees that have suffered through a tornado, but every slight movement causes the branches to quiver and shimmer in the ghostly lighting, to splendid effect.
Titania performs acrobatic gyrations mounted on Oberon’s shoulder, then falls asleep in a bower formed by the tangled arms and legs of supine fairy men. When she wakes, under the influence of the floral aphrodisiac dispensed by Puck, she falls, literally head over heels, for Bottom, sniffs him all over in delight and rides him like she rode Oberon.
The Craftsmen, representing the ignorant, happy-go-lucky side to our humanity, supply light comic relief as they bumble through the forest in dire need of GPS, their hurdy-gurdy very effectively accompanying their virtuosic slapstick. Later, in the Act II wedding divertissement, Dario Franconi and Thomas Stuhrmann brilliantly enact the sorry tale of Pyramus and Thisbe – Franconi straight out of Monty Python, and Stuhrmann in a costume that, hilariously, evokes Nijinska’s 1923 Les Noces, with his inconveniently long braids, shapeless dress and fire engine red pointe shoes. The audience laughed till they cried.
Act II opens with an unmemorable pair of pas de deux for the happily rearranged lovers’ quartet, set to Mendelssohn’s majestic and wistful Nocturne that in other productions accompanies the principal duet for Oberon and Titania, after their marital spat has been resolved. Neumeier’s choreography does not do justice to the towering score at this moment, but goes on to soar in the wedding scene, a model of grace and purity in the ensemble choreography – the entire company elegant, clean and musical in lovely baroque formations – and in Theseus and Hippolyta’s pas de deux, heartfelt and true.
Riabko initially carries Cojocaru through a series of poignant lifts while she sleeps. Once she wakes, she is full of serene joy and a newfound commitment, her dream presumably having dispelled her pre-wedding jitters. Acting and ballet technique meld naturalistically in her performance; her passion and delicacy, her exquisite carriage of the head and shoulders, remind me of Margot Fonteyn. Her lines are slightly marred by her somewhat ungainly pointe shoes – unusually soft, with an unusually wide box – and she does not roll through her feet with the clarity of most modern ballerinas. Yet there is a wondrous, fragile elegance to her dancing. She could be wearing Nike Airs and still entrance us. She brings out the best in Riabko – his variations exceptionally clean, buoyant, and passionate in this act. A sharp contrast to their ascetic modernism as Titania and Oberon – and Neumeier shrewdly brings the curtain down on a reprise of the fairy king and queen’s acrobatic embrace. | https://bachtrack.com/de_DE/review-feb-2014-hamburg-ballet-dream-san-francisco |
For more than four hundred years, the art of ballet has stood at the center of Western civilization. Its traditions serve as a record of our past. A ballerina dancing The Sleeping Beauty today is a link in a long chain of dancers stretching back to sixteenth-century Italy and France: Her graceful movements recall a lost world of courts, kings, and aristocracy, but her steps and gestures are also marked by the dramatic changes in dance and culture that followed. Ballet has been shaped by the Renaissance and Classicism, the Enlightenment and Romanticism, Bolshevism, Modernism, and the Cold War. Apollo's Angels is a groundbreaking work—the first cultural history of ballet ever written, beautifully told.
Ballet is unique: It has no written texts or standardized notation. It is a storytelling art passed on from teacher to student. The steps are never just the steps—they are a living, breathing document of a culture and a tradition. And while ballet's language is shared by dancers everywhere, its artists have developed distinct national styles. French, Italian, Danish, Russian, English, and American traditions each have their own expression, often formed in response to political and societal upheavals.
From ballet's origins in the Renaissance and the codification of its basic steps and positions under France's Louis XIV (himself an avid dancer), the art form wound its way through the courts of Europe, from Paris and Milan to Vienna and St. Petersburg. It was in Russia that dance developed into the form most familiar to American audiences: The Sleeping Beauty, Swan Lake, and The Nutcracker originated at the Imperial court. In the twentieth century, émigré dancers taught their art to a generation in the United States and in Western Europe, setting off a new and radical transformation of dance.
Jennifer Homans is a historian and critic who was also a professional dancer: She brings to Apollo's Angels a knowledge of dance born of dedicated practice. She traces the evolution of technique, choreography, and performance in clean, clear prose, drawing listeners into the intricacies of the art with vivid descriptions of dances and the artists who made them. Apollo's Angels is an authoritative work, written with a grace and elegance befitting its subject. | https://libro.fm/audiobooks/9781452671086-apollo-s-angels |
Dublin Dance Festival – Highlights by Stephen McDermott
Identity, migration and territory are among the themes that will feature as part of the 2016 Dublin Dance Festival.
The highly contemporary programme also sees pieces that will respond to the 1916 Proclamation, with new Irish work programmed alongside work from Greece, Canada, the U.S., and Spain.
Combining dance, circus, visual art, and theatre, the programme continues a tradition of boldness and open-mindedness, according to new festival director Benjamin Perchet.
Here are some of the highlights from the festival, which runs from 17–28 May next year:
Planites (Patricia Apergi / Aerites Dance Company)
The Irish premiere of choreographer Patricia Apergi’s latest work sees five men in a highly physical, urban fusion piece which confronts immigration and the concept of the foreigner. The piece is inspired by flamenco, traditional step dance, and Arabic and African movement and is described as a “compelling mix of dynamic movement and sharp humour”. The result, we’re told, will be an interplay between diverse traditions and Apergi’s own choreography, which itself is inspired by street culture and protest.
17-19 May, Samuel Beckett Theatre
(Tickets: €25/€20)
Shostakovich, Rasa (Alonzo King LINES Ballet)
San Francisco ballet master Alonzo King comes to Dublin for the first time, with two works in one night at the Bord Gáis Energy Theatre. Classical technique and modern movement are bridged in a showcase of King's style, with twelve dancers taking the stage for almost two hours of performance. The first work, Shostakovich, sees his choreography set to four string quartets and an exploration into the space between harmony and discord.
The second, Rasa, is performed to an original tabla score by the Grammy-winning Zakir Hussain.
22 May, Bord Gáis Energy Theatre
(Tickets: €20-€55)
Betroffenheit (Kidd Pivot & Electric Company Theatre)
Betroffenheit – a German word for a state of shock and trauma – sees a collaboration between two of Canada’s most renowned artists: Olivier Award-winning choreographer Crystal Pite and playwright Jonathon Young. Described as a marriage between dance and theatre, Betroffenheit’s inspiration comes from the tragic death of Young’s daughter and two of her cousins in a fire in 2009. The two-hour piece sees a crisis-management team keep an emergency situation alive, while “The Show” provides a distraction from the horror of reality. Touching on universal themes of loss, trauma, addiction and recovery, it promises to be as thrilling as it is harrowing.
24-25 May, O’Reilly Theatre
(Tickets: €32/€28)
Anam (Siamsa Tíre, The National Folk Theatre of Ireland)
Co-commissioned by Dublin Dance Festival and Siamsa Tíre, Anam sees a collaboration and fusion of styles between world-class step dancers from the US, Canada and Ireland.
The piece, which will be performed to live music and songs by acclaimed musician Fergal O’Murchú, seeks to explore the similarities and differences between each of the dancers and their styles. Performers will include: Matthew Olwell (Appalachian Flat Foot), Jonathan Kelliher (North Kerry “Munnix”), Nathan Pilatzke (Ottawa Valley, Canada), and John Fitzgerald (Modern Irish).
23-25 May, Samuel Beckett Theatre
(Tickets: €25/€20)
Embodied (GPO: Witness History Public Art Commission)
As a curious counterpoint to the recent Abbey debacle, six female choreographers will present their response to the 1916 Proclamation to mark the opening of the new GPO: Witness History Centre. Embodied will see a series of new dance solos by Irish-based female choreographers, in what are being described as ‘physical proclamations’ in response to the Proclamation and its relevance today. Among those whose work will be performed are Jazmín Chiodi, Sibéal Davitt, junk ensemble (Jessica Kennedy & Megan Kennedy), Liv O’Donoghue, Jessie Keenan and Emma O’Kane.
Full details on the Dublin Dance Festival Website. | https://nomoreworkhorse.com/2015/11/18/dublin-dance-festival-2016-highlights/ |
Eric Richmond has lived and worked in London for nearly 30 years.
Raised in Rome by an opera singer father and an artist mother, he has travelled the globe photographing performing artists in places as diverse as India, Argentina and South Africa.
Eric trained at the Art Center College of Design in Los Angeles, and assisted a variety of photographers in both Los Angeles and New York City. Beginning his career in Milan, he moved to London in 1984; since then he has worked for many publications, record companies, dance companies, theatre companies and performance venues, producing an array of material to promote their productions. A 20-foot print of one of Eric's photos for the Royal Ballet was hung on the Royal Opera House after its refurbishment to announce its re-opening.
Eric’s work is held in the permanent collections of:
Musee Des Arts Decoratifs Paris
Musee des Beaux Arts Montreal
Diana Vishnyeva II
A beautiful portrait of Diana Vishnyeva, principal dancer with both the Mariinsky Ballet (formerly the Kirov Ballet)...
Sakoba 01
This digitally remastered image evokes a sense of continued movement and narrative as the dancer becomes smaller and...
Shambhu
Capturing the athleticism and grace of the dancer as she moves forward, Eric grants us a magical insight into the world...
Shell
Inspired by the classical nude photography of Horst P. Horst, 'Shambu' reflects the links between the curves of the...
Shiva
Eric specialises in dance and performing arts photography and here he takes the subject of Indian dance. Made famous by... | https://ongallery.com/en/artist/eric-richmond |
French-American choreographer Carole Alexis and her Ballet des Amériques dance company bring a rich and sprawling repertoire to the Tarrytown Music Hall in a new series called Westchester Wednesday Dance. These weekday evenings of dance offer audiences a break in the workday routine and a chance to witness the original work of the Music Hall’s company in residence. Treat yourself to a Wednesday night of vibrant culture!
A student of Maurice Béjart, Alexis grounds her choreography in classical ballet training and tradition while drawing on influences as disparate as children's games, island rituals, natural phenomena, and modern urban life. Her dances entice and charm the viewer with elements of humor, psychological insight and surprise. Her company's extensive and eclectic repertoire bears her distinctive signature and vision, reinvigorating "ballet" for new audiences and dance connoisseurs alike.
Event Location and Ticket Information
Tarrytown Music Hall
13 Main Street
Tarrytown, NY 10591
Handicap Accessible? Yes
Date: Wednesday, April 24, 2019
Times: 7:00 pm - 8:30 pm
Ticket pricing: | https://artswestchester.org/events/westchester-wednesday-dance-3/ |
Reception:
Thursday, April 18th, 2019
6-9pm
on-view through May 3rd, 2019
Artist's Statement:
"I photograph dancers because I love the art of dance. I am also a dancer and have been dancing for 19 years. I choreograph, I observe, I study, I perform, being a dancer really helped me portray these photographs the way I envisioned them. The beautiful movement and elegance dancers create with their bodies is truly something amazing to witness.
I’ve been exploring abandoned buildings for a long time, photographing inside and out. The old, run down, grimy looking buildings are surprisingly very beautiful to me. Just to think at one point in time there was so much going on inside them, and just one day it shuts down forever and becomes empty and forgotten.
My vision is to take these dancers on an adventure into these buildings and take photographs of them. Of their movement, their grace, the beauty and passion. Dance is a beautiful art and the movement created with the body is extraordinary. My end goal is to photograph these beautiful dancers using elegant movement in these forgotten but also beautiful abandoned spaces. The dancer makes a beautiful contrast in this old, grimy space.
The passion you can see in their eyes as they express themselves through dance, when words are insufficient. Dance is a language that is felt instead of heard. I created something out of the ordinary. Something that will draw any viewer’s eye and attention and grasp it. I incorporated things I have a great deal of interest and passion for.
The music that is being played at my show is the music that the dancers in the photographs have danced to in their dancing careers. Contemporary, modern type soft music with a lot of emotion." | https://www.starlightstudio.org/copy-of-jen-ryan-shirley-french-son |
Hanoi has slipped two places to rank in the bottom three of 100 cities assessed on three sustainability factors – people, planet and profit, according to a newly released report.
The Sustainable Cities Index was commissioned by Arcadis, an Amsterdam-based design and consultancy firm. It measured each city’s social, environmental and economic impacts.
Hanoi drops two places to 98th out of 100 global cities in new sustainability ranking. Graphics by Arcadis
Hanoi fared particularly badly against its peers in Southeast Asia, falling far below Singapore (4th), Kuala Lumpur (67th) and Bangkok (80th). Jakarta ranks 94th and Manila 95th in the survey.
Hanoi’s declining air quality, lack of green spaces and inefficient energy consumption were blamed for its low ranking.
The air pollution in Hanoi has been caused by construction projects, expanding fleets of cars and motorcycles and heavy industry, including steel works, cement factories and coal fired plants surrounding the capital city.
The city of 7.7 million people has more than five million motorbikes and 550,000 cars. Data shows the number of private vehicles has been increasing at a rate of 4.6 percent annually, but the amount of land allocated for transportation projects has only been expanding at a rate of 0.4 percent.
Private vehicles are one of the leading causes for dirty air in the city. Last year, the capital experienced only 38 days of clean air as pollution levels rose to match China's smog-prone capital, Beijing, according to a report by the Green Innovation and Development Center (GreenID) earlier this year.
Nguyen Duc Chung, chairman of Hanoi, said at a meeting with GreenID in July last year that monitoring results showed that air pollution in the city had reached the "red alert" level.
"The heaviest emissions are coming from vehicles," he said.
"Recent developments have benefited economic growth, but (problematic) issues related to sustainable development, and consequences on the environment increased," said Nguy Thi Khanh, head of the GreenID Hanoi.
Unpopular decisions
In its pollution fight, the Hanoi People’s Council, the municipal legislature, went against public opposition last July to approve a ban on motorcycles by 2030.
The ostensible aim was to boost use of public transportation, including a new metro system, but this sector remains poorly developed in the capital city.
While public buses are cheap and affordable, most vehicles are old and have to share congested roads with cars and motorbikes, making them an unpopular choice for most commuters.
In recent years, hundreds of decades-old trees in Hanoi have been chopped down to expand roads and lay the metro line network, with authorities consistently prioritizing urbanization over protecting the environment.
One project to cut many trees was stalled by public protests and a campaign to save them in 2015. However, the felling of trees in different parts of the city has continued.
The Arcadis study ranked London as the most sustainable city, based on its innovative and well-connected metro network and tree-planting campaigns.
Singapore overtook Seoul to become the world’s fourth most sustainable metropolis, while Hong Kong, Taipei and Tokyo were other Asian cities that performed well on the index.
South Africa’s Cape Town, Egypt’s Cairo and India’s Kolkata joined Hanoi on the bottom rungs. | https://e.vnexpress.net/news/news/hanoi-goes-from-bad-to-worse-in-sustainability-ranking-3841712.html |
Public Participation Essential for a Sustainable City
Wed, 02/19/2014 - 16:21
Q: What are the priorities for Mexico City’s Environment Ministry during the current administration?
A: We have determined five priority areas on which we will focus for the next five years. The first two go together: air quality and climate change. Our air quality target during this administration is to reduce ozone by 25%, as well as reducing PM2.5 and PM10. The third is biking infrastructure or sustainable mobility. The fourth is green urban infrastructure, which covers the benefits that green infrastructure can provide. The fifth line of action is water management.
Q: What are the main challenges that come with implementing sustainability policies in a metropolitan area the size of the Federal District?
A: One of the greatest challenges we face is ensuring much more sustainable and cleaner mobility in Mexico City but we have come a long way since the 1980s, when pollution and air quality were the city’s biggest problems. There is much more consciousness about the environment today. Right now, our greatest challenge is the increasing number of vehicles in the metropolitan area and the accompanying high levels of fuel consumption and emissions. Our challenge is getting people who drive their own cars to switch to cleaner methods of transportation: either public transportation or cycling.
Q: What lessons has Mexico City learned from overcoming some of its environmental challenges, and how can that serve as an example to other cities?
A: The main lesson we have learned is regarding how the budget is invested, which provides lessons especially for Latin American cities and other metropolitan areas of Mexico. We see that, contrary to what is happening in Mexico City, air pollution is getting worse in other Mexican cities. One of the great lessons from Mexico City is the importance of investing in public transportation. If there is a safe and secure transport system, there will be much better mobility and air quality. As for green buildings, if they want to be competitive, they have to be certified, which is something that the market is pushing cities toward.
Q: How have you convinced the public to implement sustainable practices in their daily lives?
A: Mexico City has set an example, in terms of encouraging public participation in sustainability. Our public bike system, Ecobici, has received a very positive response. One of the elements that has the biggest impact on quality of life is the time it takes to commute. We have seen an increase of 40% in the use of bikes in the last couple of years. Ecobici saw an increase of 50% in its total users in 2013. I really think there is a predisposition among inhabitants to change their transportation habits, as long as they can do so efficiently and safely.
Q: How is the Mexico City government pushing for more stringent construction regulations that would lead to on-site energy generation and water recycling in new buildings?
A: Our regulations have become much more stringent and norms are being changed to reflect this. Every new building or construction now has to have mitigation and compensation actions. We are switching from a voluntary mitigation program to an obligatory one. The private sector has reacted quite well to these changes so far. Obviously they have to make a bigger investment, but they are aware that in order to be competitive their buildings have to be greener, as clients are increasingly looking for sustainable buildings.
Q: What potential is there to use the Bordo Poniente landfill for waste to energy conversion?
A: The plan is that by 2015 we will be able to use methane from Bordo Poniente for energy production, and that by 2018 it will provide close to 75% of the energy requirements for public buildings and public lighting. In the meantime, we are considering using wind power to provide the remaining 25%. We are analyzing our options for how to best start using wind energy and will announce the project officially in late 2014.
Q: How are you combining the environmental and economic priorities of the government of Mexico City?
A: We cannot have a greener, better city, without public policies that are focused on sustainability. We are planning important investments in public transportation in the next few years. For example, we will extend the rapid bus transit system, RTP, by a further 100km from its current 105km. For Mexico City to be competitive, sustainable growth and economic development must go hand in hand. | https://mexicobusiness.news/energy/news/public-participation-essential-sustainable-city |
Sustainable development requires scheduling and implementing projects with cost, environmental, energy, and quality factors in mind. Using a robust approach, this study investigates the time-cost-quality-energy-environment problem in project execution and demonstrates its practical applicability through a case study of a bridge construction project in Tehran, Iran. The study aims to incorporate the pillars of sustainability into project scheduling and to account for uncertainty in the model. The problem is formulated as a resource-constrained robust nonlinear programming (NLP) model with objectives covering cost, quality, energy, and pollution level. According to the results, as project duration decreased, cost, energy, and pollution initially decreased and then increased, accompanied by a reduction in quality. When uncertainties were incorporated to bring the model closer to the real world, cost and quality tangibly improved while pollution and energy consumption declined. The augmented ε-constraint method was applied to solve the proposed model. The resulting time-cost, time-quality, time-energy, and time-pollution charts show that as uncertainty increases, cost and quality improve while pollution and energy decrease. The proposed model can be employed for all industrial projects, including roads, construction, and manufacturing.
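The augmented ε-constraint method optimizes one primary objective while turning the remaining objectives into constraints with slack variables, then sweeps the constraint bounds to trace out the Pareto front. The sketch below is a minimal, hypothetical illustration of that idea on a toy bi-objective problem (minimize cost, maximize quality) using SciPy; the coefficients, the single resource limit, and the small weight delta are assumptions made for the example, not values from the study.

```python
# Toy augmented epsilon-constraint sweep; NOT the authors' project-scheduling model.
import numpy as np
from scipy.optimize import linprog

# Decision variables x1, x2 >= 0 share one resource: x1 + x2 <= 10 (assumed).
# f1 = 3*x1 + 2*x2  (cost, minimized)
# f2 = 5*x1 + 4*x2  (quality, maximized)

# Range of f2 over the feasible set, used to build the epsilon grid.
best_f2 = -linprog(c=[-5.0, -4.0], A_ub=[[1.0, 1.0]], b_ub=[10.0]).fun  # 50 at (10, 0)
worst_f2 = 0.0                                                          # 0 at (0, 0)

delta = 1e-3  # small reward on the slack, avoids weakly efficient solutions
pareto = []
for eps in np.linspace(worst_f2, best_f2, 6):
    # Variables [x1, x2, s]: the equality 5*x1 + 4*x2 - s = eps with s >= 0
    # enforces quality >= eps, and the objective min f1 - delta*s keeps the
    # solution efficient rather than merely feasible.
    res = linprog(
        c=[3.0, 2.0, -delta],
        A_ub=[[1.0, 1.0, 0.0]], b_ub=[10.0],
        A_eq=[[5.0, 4.0, -1.0]], b_eq=[eps],
        bounds=[(0, None), (0, None), (0, None)],
    )
    if res.success:
        x1, x2, s = res.x
        pareto.append((3 * x1 + 2 * x2, eps + s))

for cost, quality in pareto:
    print(f"cost = {cost:6.2f}   quality = {quality:6.2f}")
```

In the full model the same sweep would run over several secondary objectives (quality, energy, pollution) and a nonlinear, resource-constrained scheduling formulation rather than this two-variable linear stand-in.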
Sustainable Building Design from Start to Finish (Week 1)
Green building, or sustainable design, is the practice of increasing the efficiency with which buildings and their sites use energy, water, and materials, and of reducing impacts on human health and the environment over the entire lifecycle of a building. Green-building concepts extend beyond the walls of buildings and include site planning, community and land-use planning issues as well. The growth and development of our communities has a large impact on our natural environment. The EPA has reported that buildings account for approximately 40 percent of total U.S. energy consumption, with residential buildings accounting for approximately 54 percent of that total and commercial buildings for the other 46 percent. At home or at work, buildings are without a doubt a big part of our lives. The manufacturing, design, construction and operation of the buildings in which we live and work consume many of our natural resources. Planning is a key component in achieving a sustainable building design or green building. Many teams use a multi-disciplinary, integrated design approach, and over the next few weeks we will outline the steps involved in producing an effective Sustainable Building Design from Start to Finish.
The conventional way of bringing a project to fruition, which includes the design, bid, build, and operations processes, often fails to recognize that buildings are part of a larger, complex system. As a result, solving one problem may negatively affect other parts of the system and lead to inefficient solutions. In contrast, an integrative process is a highly collaborative method used for the design, construction and operation of sustainable built environments. This approach requires the whole project team to think of the entire building and all of its systems together, emphasizing connections and improving communication among professionals and stakeholders throughout the life of a project. It follows a strategy similar to the one outlined in the Integrative Process (IP)© – ANSI Consensus National Standard Guide© – Design and Construction of Sustainable Buildings and Communities. Although the term integrative design is most often applied to the planning stage of a new construction or renovation project, the public often uses it interchangeably with the integrative process, which is applicable to any phase in the life cycle of a building. What it boils down to is getting everyone who will be involved in the project, from the design phase to construction to day-to-day operations, together right from the start to collaborate and build trust.
Stage 1: The Discovery Phase
During the discovery phase a project must go through a business justification. This takes place once a business identifies that it has a need which might result in a building project. This is where the client will define the need. The client will then explore high-level options for meeting the requirements set out in the statement of need, which may include an assessment of comparable projects. The client then prepares a preliminary business case, a first attempt to justify the investment required by the potential project, which should include a management structure, draft legal agreements and funding options. The client will need to develop a statement of need that provides sufficient information about the project to allow for the appointment of a consultant team, which will carry out feasibility studies and options appraisals, prepare a project brief and develop the strategic brief. The brief should also outline preliminary requirements of the project, including any goals, visions or basic metrics of performance.
The initial strategic brief may include the identification of potential sites for the development. It should be noted that for particularly large projects, an environmental impact assessment may be required by the local planning authority, and that this may include an assessment of alternative sites for the development. It is important to consider this when assessing potential sites. Identifying possible sites and considering their impacts (such as the possibility of moving staff) can be a complicated process and may require the appointment of independent client advisers (such as surveyors and/or civil engineers).
The client undertakes risk assessment and value management exercises, sets an initial budget for the project and considers funding options for the project. They may then revise the preliminary business case and strategic brief.
The next step is to establish an internal team. The client will need to identify who will lead the project for the organization and keep the organization’s best interests in mind. This can take the shape of a project director (a single person) or a project board (a team of champions) that takes responsibility for the quality of the project from a business standpoint. The team may include a user panel or project stakeholders.
Next week we will further investigate what it takes to build a sustainable or green building from start to finish. | https://www.usgbcwm.org/sustainable-building-design-start-finish/ |
ANN ARBOR, MI – The University of Michigan’s new $39-million facility to maintain its transportation fleet is nearing completion.
Construction of the facility started last winter, and is on track to be completed in time for a fall opening, according to UM spokeswoman Kim Broekhuizen. The proposed $39-million budget for the project has also stayed steady despite economic factors such as inflation or supply chain issues, officials said.
The approximately 70,000-square-foot facility is at the site of the university’s laundry building, which was demolished after Michigan Medicine built a fully operational laundry facility at an off-campus location, according to university documents.
The new building at 1665 Dean Road, near North Campus, provides expanded space for the university’s larger bus fleet. Since 1974, the bus fleet has grown 45% and the university now transports more than 7 million riders per year on UM buses.
The Dean Road Transportation Center replaces the current transportation building on the Stephen M. Ross athletic campus, with the exception of auto and truck fleet maintenance services. It is the second attempt at replacing the old transportation facility, as public pressure from northeast Ann Arbor residents in 2016 led to the cancellation of the first attempt.
A coalition of about 600 households from multiple housing complexes expressed concerns at the time that the transportation center would result in increased sound pollution, air pollution and traffic putting children and pedestrians in the largely residential area at risk.
Read more: UM regents to consider design, construction plans of Dean Road transportation facility
The university redrafted a plan for the Dean Road facility, which was approved by the Board of Regents in February 2020. The center is 44,000 square feet smaller than the first proposal, has 243 fewer parking spaces and will not add any bus traffic on nearby Green Road, according to a university presentation on the project.
The new facility could save an estimated $100,000 each year as a result of increased bus operational efficiency, according to the project details, and the capability of maintaining larger articulated and electric buses could increase options for sustainability measures.
The new facility also is part of the university’s carbon neutrality goals, Broekhuizen said. This means sustainability measures such as eliminating fuel consumption through cutting 2,000 annual hours of daily bus operations and increasing capacity for electrical vehicle charging.
For more information on the project, visit the UM Architecture, Engineering and Construction website.
Thriving natural habitats, sustained ecosystem services, and resilient communities and infrastructure are vital to a livable and prosperous region.
At the same time, our natural resources face many ongoing challenges and new threats. While the region permanently preserved 61,500 acres of natural and agricultural lands from 2001 to 2015, an additional 140,000 acres of such lands were developed -- an area roughly equivalent to the land area of the City of Chicago. Despite increased awareness about the importance of environmental assets, constrained funding at all government levels and competing priorities hinder our ability to adequately protect and enhance them. Climate change, manifesting in our region as more frequent and severe storms, extreme temperatures, and drought, is already significantly affecting our economy, ecosystems, built environment, and people. In particular, the region faces substantial flooding issues, which the intense storms brought by climate change and increased impervious coverage from development will continue to exacerbate. Flooding can cause extensive property damage and reduced water quality.
Many diverse factors influence the extent and form of development, from market forces to tax policy to infrastructure investment. Development at the region’s edge necessitates increased water, wastewater, and stormwater infrastructure. It can push demand for groundwater beyond sustainable levels, and affects communities and the resources themselves. The short- and long-term costs associated with providing infrastructure and services in these locations can be substantial. These trends do not affect all residents equally. Low-income residents, communities of color, children, seniors, and people with disabilities in particular may experience heightened risks, costs, and liabilities, including repetitive flooding, high water rates, and compromised infrastructure in areas that are otherwise overlooked by private investment.
ON TO 2050 proposes a comprehensive suite of actions by a range of stakeholders to address these and other environmental issues. It envisions a future where development practices and infrastructure embrace natural landscapes and contribute to healthy ecosystems. In concert with other plan strategies, the environmental recommendations will lead to a region that is more resilient to the anticipated impacts of climate change, particularly flooding, and contributes to worldwide efforts to stabilize our changing climate; has sustainable and clean water resources; preserves high priority agricultural and natural lands while accommodating strategic growth and infill; and helps protect the residents of the region who are most vulnerable to environmental impacts.
Promote inclusive growth by growing the ability of vulnerable populations to respond to environmental challenges and improving their environmental conditions and access to nature.
Improve resilience by planning for anticipated future impacts, protecting residents from risk, and promoting gray and green infrastructure that provides essential services and can adapt to changes in climate and technology.
Prioritize investing limited financial resources in a strategic and efficient way, maintaining existing infrastructure, and securing new revenues for needed enhancements.
The effects of climate change will have significant implications for the built environment, economy, ecosystems, and people of this region. We must intensify mitigation efforts while at the same time prepare for and be equipped to recover from the acute shocks and chronic stresses posed by climate change. Reducing greenhouse gas (GHG) emissions will require continued compact infill development, improved pedestrian and bicycle infrastructure, and increased investments in public transit as well as aggressive expansion in renewable energy systems, energy efficiency and retrofits, and electrification of our transportation system. Sound planning and decision-making can maximize the crucial role that the region’s natural landscapes, including trees and parks within our developed landscapes, play in promoting resilience. Trees, for example, store millions of tons of carbon, and provide shade which cools our communities and reduces energy consumption. The region’s land and water resources provide ecosystem services that enhance communities’ ability to withstand climate-related stresses, and also offer models for expanded green infrastructure inside and outside of our communities that grows our ability to adapt. Planning for climate resilience entails a wide variety of strategies for managing risk, strengthening our built and natural environment, and improving our operational response to specific events. Regional stakeholders, from local elected officials to business leaders, need access to up-to-date data on climate science to make informed decisions. At the same time, many resilience strategies require coordinated subarea, regional, or statewide action.
Abundant and high quality water resources play an essential role in sustaining economic prosperity, environmental and public health, and quality of life. Aquatic systems support an array of ecosystem services, a rich composition of native flora and fauna, recreation, and water purification. Water supplies from Lake Michigan, the Fox and Kankakee Rivers, and shallow and deep bedrock aquifers support the region’s industry, households, and energy generation needs. Lake Michigan and the region’s waterways also provide one of the great recreational systems in the country, while simultaneously transporting goods, both nationally and globally. Yet despite our status as a water‐rich region, we often fail to recognize the real and inherent value of this globally scarce resource. As a result, the region continues to suffer major flood damage on an annual basis, degraded aquatic systems across the majority of the region, and water shortages in areas that are growing the fastest.
A regional goal is to recognize, value, and manage water as a singular resource that could be almost infinitely reusable if managed properly. This applies to our natural aquatic systems, our built water management infrastructure, and our water supplies, both on the surface and underground. This approach seeks to integrate planning and management of water supply, wastewater, and stormwater in a way that treats the water cycle as a single system in which all water inputs and flows are recognized as potential resources, in which efforts are made to enhance these systems rather than simply minimize or avoid environmental impact, and which maximizes their contribution to social and economic vitality.
To preserve the region’s highest-priority natural and agricultural areas, stakeholders must pursue conservation strategies and also promote reinvestment in existing communities. While preservation decisions are often driven by opportunity, strategic frameworks like the ON TO 2050 Conservation Areas local strategy map and the Green Infrastructure Vision can help maximize the benefits of land protection by assisting the coordination of different actors and funding streams, particularly at the region's developing edge. In addition, sensitive development techniques such as conservation design in these locations can help ensure preservation of high quality natural assets as well as continuity and connectivity of natural areas via open space corridors, which is critical to protecting native species and systems.
Reinvestment efforts, which focus growth in areas with existing infrastructure, housing stock, transportation access, and services, can help reduce development pressures on natural and agricultural lands and revitalize disinvested areas as well as remediate brownfields and other barriers to infill development. In fact, redevelopment can significantly improve the environmental performance of communities and reap co-benefits. Integrating green infrastructure into the redevelopment process can result in additional parks and open spaces, tree-lined streets, and stormwater management. Combined, these investments provide places for recreation, habitats for native flora and fauna, air pollutant filtration, flood reduction, urban heat island mitigation, and groundwater recharge, while at the same time creating more desirable, healthy, and resilient communities. | https://www.cmap.illinois.gov/2050/environment |
According to the UN report «World Urbanization Prospects», 55% of the world’s population live in cities today, and by 2050 this figure will have increased to 68%. Analysis of demographic trends shows that by 2050, the number of urban residents will have increased by another 2.5 billion.
Historically, the process of urbanization is based on economic and social factors. People are eager to move from a village to a city in pursuit of better education and social services, as well as better job opportunities.
At the same time, urbanization is accompanied by a number of problems, such as increased crime, pollution and congestion. In response, city authorities are increasingly turning to «smart city» systems or looking for other solutions as the situation demands. Before we propose our own, let’s look at the pros and cons of urbanization.
Pluses of urbanization
Urbanization saves funds and resources through economies of scale and increased labor productivity. Concentrating the population in a city makes it possible to reduce the costs of engineering support, infrastructure and communications, as well as the cost of organizing activities that involve large numbers of specialists.
According to experts of the international consulting company McKinsey, urbanization will become one of the largest drivers of global economic growth in the current century. A similar position is expressed in the UN report, which emphasizes the importance of increasing the number of cities to reduce poverty.
Minuses of urbanization
The gap between economically developed and degraded cities is widening. High population growth rates in megacities cause social tension. The number of migrants is constantly changing and can no longer be reliably recorded. It’s not yet possible to provide them all with housing and jobs.
The issues of overpopulation and of providing cities with proper infrastructure are being solved too slowly. As a result, the number of negative factors affecting the sanitary and epidemiological well-being of the population and the preservation of its health is increasing.
Over the last 10 years, construction density in megacities has increased significantly, recreational areas have shrunk, and the sanitary protection zones around sources of atmospheric pollution have been reduced. All this, along with the exponential growth in the number of cars, has created a new artificial environment in which official standards of safe and secure living no longer work. In addition, increased intra-city migration and poor ecology have escalated the risk of spreading epidemics and are among the reasons for the development of psychosomatic diseases.
In addition to housing problems, urbanization has caused air pollution by dust, nitrogen oxides, formaldehyde and benzopyrene. For urban residents it has brought increased noise, vibration, infrasound and high-frequency electromagnetic fields from numerous sources, including mobile communications, Wi-Fi and electric transport: the subway, trams, electrified railways and electric cars. Existing health protection measures have ceased to provide urban populations with the necessary level of security, and the consequences of these negative impacts have become irreversible. Noise exposure, for example, is associated with increased cardiovascular disease and neurosis; in terms of harm to the health of urban residents it rivals the negative impact of exhaust gases from motor vehicles, and even double-glazed windows do not shield homes from street noise. Among other things, the growth in the number of urban residents has increased the volume of household waste and, as a result, accelerated environmental pollution.
It is believed that urbanization should provide people with jobs and quality goods and services, but practice shows there is no guarantee that migrants will benefit from these advantages after relocating. Meanwhile, the growing demand for housing drives up real estate prices, and many people cannot afford them. That is why slums form in megacities.
The question «What to do?» arises; it is obvious that the growth of the urban population cannot be stopped.
The way out of the situation
Theoretically, it is possible to calculate an approximate urban population size that could be provided with infrastructure of the proper volume and quality, and at which the risk to human health from the urban environment would remain acceptable.
Alternatively, SkyCity linear cities should be designed and built. These are cluster-type settlements, harmoniously integrated into the natural environment, where the land surface is reserved for pedestrians and green plants, while transport, energy and information communications run above the ground.
In the construction of linear cities, there will be no need to cut down forests, build roads and railroads, or otherwise disrupt the biogeocenosis of the development area. Horizontal SkyWay lifts will connect high-rise buildings, settlements, and residential, shopping, entertainment, manufacturing and multifunctional clusters. SkyWay high-speed transport routes will carry passengers anywhere in the world. SkyCity linear cities can be built in the mountains, in deserts and on waterlogged areas, including complex terrain, as well as on the sea shelf.
The basic principle behind each infrastructure cluster is the pedestrian quarter: between multifunctional high-rise buildings connected by horizontal lifts sits low-rise, comfortable development with landscaped urban areas and renewable energy sources. In doing so, significant savings are achieved by:
- Cutting the cost of asphalt roads construction;
- Reduced maintenance costs of asphalt roads, especially in wintertime;
- Decrease by an order of magnitude in emissions into the urban atmosphere and urban soil;
- Reduction by three orders of magnitude in the number of accidents and deaths, including those of children, on urban roads;
- Decrease by two orders of magnitude in the allocation of expensive urban land for «second level» transport and communication infrastructure;
- Reduced energy consumption of transport;
- Decrease in municipal transport costs;
- Cutting the cost for irrigation of green areas using the technology of restoration of living fertile soil;
- Cutback on urban greening expenses.
SkyCity linear city development will deliver many other positive effects. The duration and quality of life will increase significantly thanks to an ecological and comfortable natural environment within the metropolitan area; to landscaped urban neighborhoods everywhere; to reduced irrigation costs for rehabilitated areas (up to 10 times); to environmentally friendly food production even in megacities; to a comfortable urban microclimate; and to green gardens placed on the roofs and floors of high-rise residential and commercial buildings. This will also help protect living quarters from overheating (roof plants evaporate moisture and cool the soil) and create a comfortable indoor climate, making it possible to reduce cooling costs, since the soil layer and roof cover are natural heat insulators.
Thanks to the widespread installation of solar panels in the facades of high-rise buildings (up to half of the facade area), extensive opportunities will be created for the use of renewable energy in the territories of SkyCity linear cities.
This innovative approach to creating SkyCity linear city infrastructure will also improve logistics, since the footprint of the erected buildings will be smaller than in existing city models that rely on «first level» (ground) transport.
Blockchain technology can have a great impact on the development of SkyCity linear cities, as the introduction of innovative architectural solutions will reduce administrative costs and decrease the likelihood of errors and fraudulent document schemes. Creating and developing a SkyCity linear city involves building a single system that constantly transfers huge amounts of data needing protection and storage. It is in this context that blockchain will act as the platform for the linear city’s accounting and settlement system, capable of providing the necessary level of information security.
As a result, this innovative flagship project of a linear city based on SkyCity technology will increase the region’s attractiveness for business and become an impulse for intensive economic development and growth.
A similar way of addressing the challenges posed by urbanization has already been appreciated by the United Nations: Anatoliy Yunitskiy received two UN grants, in 1998 and 2002, to develop the concept of linear cities. Accordingly, given the sustainable development goals adopted by the international community and officially known as «Transforming Our World: the 2030 Agenda for Sustainable Development», heads of state should consider the SkyWay linear city as one of the concepts of a "smart city", especially since SkyWay became a UN expert and partner in the Smart Cities program in the spring of 2019.
# Urban consolidation
Urban consolidation describes the policy of constraining further development and population growth to within the boundaries of preexisting urban areas rather than expanding outward into suburban areas. Urban consolidation seeks to increase the population density of a given urban area by expanding upward, redeveloping preexisting buildings and lots, and constructing new facilities in available spaces. It is theorized that discouraging urban sprawl and encouraging further development of housing units in preexisting urban areas will lead to a net gain in social and economic prosperity (e.g. more accessible public transportation, more efficient use of public utilities, and increased affordability of housing).
A major feature of modern urban consolidation practices is the incorporation of urban green space and open space areas. In higher density environments, incorporating natural settings into the landscape design can have positive impacts, such as increased happiness, decreased stress, and a reduction in maintenance costs. There are broadly three kinds of urban consolidation: Market-led consolidation of existing residential areas involves residential redevelopment of established dwellings as well as non-residential land and buildings at higher densities than the metropolitan average. Transit-oriented development involves high-density residential and mixed-use buildings within walkable precincts around public transport nodes, often referred to as activity centres. The third approach is to require that all new development on the urban fringe of existing metropolitan areas is at higher densities than the current average for those cities.
## History
The term "urban consolidation" first appears in social science and urban planning literature around the late 19th and early 20th centuries. Much of the existing literature on urban consolidation comes from Australia; some of the world's first government-official urban consolidation policies were enacted in Sydney and Melbourne to increase construction of higher-density terrace housing in the late 19th century. Throughout the 20th century, implementation of urban consolidation policies appears to come in 'waves', separated by population surges stemming from major events like World War I and II. Urban consolidation policies began to appear in the United States around the same time, with one of the earliest examples being a proposal for the consolidation of railroad lines in Iowa and Minnesota to increase the capacity and efficiency of existing passenger and freight traffic.
## Limits of urban consolidation
Although urban consolidation policies may have many positive social, economic, and environmental effects, there are limits to the extent of its benefits. Efficiency is a key feature of urban consolidation, but the aspect of infrastructure capacity is often overlooked. As a city's infrastructure is used by an increasing number of people, the systems must be upgraded and retrofitted, a process which can cost hundreds of millions of dollars. Falling under this category of 'capacity' are common features of civilization, such as roads, drainage systems, and open spaces. Roads in areas with urban consolidation policies are often overburdened with increased intercity traffic in addition to the preexisting suburban commuter traffic, and this problem is not always easily solved with transit-oriented development. Drainage systems are severely impacted by higher populations, potentially leading to increased flooding and pollutant runoff. Open spaces in high-density urban areas often conflict with urban consolidation policies; residents of high-density areas require a significantly higher amount of open space, but this would limit development of consolidation-oriented housing and transportation.
## Terminology
- Brownfield site: land suspected or known to be contaminated by pollutants from previous industrial and commercial businesses. Urban consolidation policies seek to restore these abandoned spaces into parks and wildlife habitats, sometimes in an effort to make green spaces more accessible to inner city residents.
- Greenfield site: undeveloped and unpolluted land located in a rural or urban area. Urban consolidation aims to develop these areas for retail business, manufacturing, public services, and housing.
- Housing affordability: the price level at which people can purchase or lease houses and apartments while maintaining a sufficient income flow to fulfill basic needs. Urban consolidation policies have been successful at increasing population density through more generally affordable housing.
- Transit-oriented development: the development of public transportation with a core goal of maximizing the number of urban business, residential, and recreational spaces. Replacing car-oriented infrastructure with public transportation and making pedestrian activity a priority is key to increasing and optimizing urban density.
There is a growing interest in pedestrian and transit-oriented development as a way to reduce the cost of transportation and home energy use. Yet there is little knowledge of how much alternative travel modes and compact developments reduce environmental impacts and household costs. As US cities begin to rethink their growth, city planners need better tools to measure the environmental and economic effects of infrastructure redesign.
We have developed a life-cycle assessment framework that helps planners understand how energy use, air pollution, and household costs change when integrating transportation and land use planning around high-capacity transit. Using this framework, we can evaluate emissions from the construction and rehabilitation of buildings around transit, changes in household energy use within these neighborhoods, and reductions in automobile use as households shift some of their travel to alternate modes. We compare transit-oriented households to households that do not have access to high-capacity transit. Using Los Angeles’s Gold Line (light rail) and Orange Line (bus rapid transit) as case studies, we assess how these variables changed with development around those lines.
High-Capacity Transit in Los Angeles
The Los Angeles County Metropolitan Transportation Authority (LA Metro) operates one of the largest urban bus systems in the United States, in addition to a growing number of high-capacity bus and rail lines. The LA Metro system of high-capacity transit (HCT) comprises six rail lines and two bus-rapid-transit lines that run on dedicated routes, have short wait times, and carry large numbers of passengers. In addition, LA Metro plans to spend $14 billion to expand HCT lines. The growth of physical infrastructure has been accompanied by impressive ridership increases on all HCT lines, as many residents—particularly those living near transit stations—have chosen to change their travel habits and adopt transit.
We developed a case study of the environmental and economic effects attributable to future redevelopment around LA Metro’s Gold and Orange transit lines. We focused on these two transit systems because they are old enough to have established ridership, yet young enough that land use around the lines has not changed drastically. The Gold Line opened in 2003 and was extended twice to now include 21 stations and almost 20 miles of rail from East Los Angeles through downtown to Pasadena. The Orange Line began operation in 2005 and follows an old railroad right-of-way in the San Fernando Valley. Metro expanded the Orange Line in 2012 to include a total of 18 stations and over 18 miles of dedicated busway (Figure 1). In many places, the busway runs parallel to a greenbelt with a bike and pedestrian path. We found potential for higher density redevelopment around both lines and explored how the land could be repurposed. We then estimated the potential changes to energy consumption, air emissions, and household costs.
Smart Growth around Transit Stations
Many people are beginning to demand cities that support convenient and cost-effective multi-modal transportation. Creating mixed-use residential and commercial space that encompasses a supportive mix of office space, entertainment, retail, and public areas, all within walking distance of a transit station, can attract residents who are willing to pay for the convenience of nearby amenities. In the long-term, these mixed-use spaces have the potential to reduce greenhouse gas emissions and household costs by reducing vehicle travel and increasing the energy efficiency of buildings.
We identified five square miles of land that could be used for smart growth development within one half-mile of Gold and Orange Line stations. We focused primarily on vacant parcels of land and dedicated surface parking lots, which are the most underused space around high-capacity transit. Second, we considered parcels of land with existing structures, but that were also underused. We assigned these parcels to one of two reuse plans: 1) demolish existing small structures on large parcels and construct larger buildings; or 2) adapt existing large buildings for reuse. A land parcel was considered to be underused if the value of the land was greater than the value of the existing structures on the land, a common occurrence when an old building exists on a piece of land where the value has recently increased. Adaptive reuse of existing buildings means that the building shell is kept while the inside is remodeled. Both HCT lines in our study offered a mix of residential and commercially zoned parcels that could be redeveloped.
Integrating Life-Cycle Assessments
To estimate how energy consumption, air emissions, and household costs would change due to constructing smart growth around the Gold and Orange Line Stations, we first examined the available land and buildings for each parcel. Next, we estimated the environmental and economic impacts from redevelopment. Finally, we created a household and transportation behavioral assessment that integrates the entire life cycle of the system. Emissions assessments often focus on the use of vehicles and buildings while overlooking the production of these vehicles and buildings. Environmental assessments should instead include the life cycles for every part of the transportation system and the buildings, and should include the consequences of changing the area’s physical composition. By expanding the boundary of our analysis, we created an integrated life-cycle assessment of transportation and land use that can estimate all sources of emissions and all costs of redevelopment.
Our model estimated changes in energy consumption, air emissions, and costs from constructing walkable, mixed-use developments. Life-cycle estimations for each building included materials, construction activities, energy production, and energy consumption within the building. For some developments, remodeling older buildings would generate fewer emissions than constructing an entirely new building because parts of the old building would be reused. According to the National Household Travel Survey, residents living near high-capacity transit use automobiles less but walk, bike, and ride public transit more. We also found that households living near transit stations may take the same number of trips as other households, but these trips tend to be shorter because destinations are closer. Additionally, residents are likely to purchase fewer automobiles over time because they drive fewer miles than people living an auto-dependent lifestyle. Based on these travel characteristics, residents who live near HCT lines reduce their transportation energy consumption and household costs.
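As a rough illustration of the bookkeeping such an integrated assessment performs, the sketch below compares the 60-year life-cycle greenhouse gas footprint of a transit-oriented household with that of a business-as-usual household by summing one-time embodied (construction) emissions with annual building-energy and travel emissions. The stage values are placeholders chosen for illustration only; they are not results from the Gold or Orange Line analysis.

```python
# Hypothetical per-household life-cycle comparison; all values are placeholders.
from dataclasses import dataclass

@dataclass
class HouseholdLCA:
    construction_t: float      # one-time embodied GHG allocated to the household (metric tons)
    building_t_per_yr: float   # annual GHG from home energy use
    travel_t_per_yr: float     # annual GHG from household travel

    def lifetime_ghg(self, years: int = 60) -> float:
        """Total life-cycle GHG over the analysis period, in metric tons."""
        return self.construction_t + years * (self.building_t_per_yr + self.travel_t_per_yr)

# Assumed inputs: the transit-oriented (TOD) household carries higher embodied
# emissions but lower operating and travel emissions than business-as-usual (BAU).
tod = HouseholdLCA(construction_t=25.0, building_t_per_yr=2.0, travel_t_per_yr=1.5)
bau = HouseholdLCA(construction_t=20.0, building_t_per_yr=3.0, travel_t_per_yr=4.5)

print(f"TOD lifetime GHG: {tod.lifetime_ghg():.0f} t")
print(f"BAU lifetime GHG: {bau.lifetime_ghg():.0f} t")
print(f"Reduction: {1 - tod.lifetime_ghg() / bau.lifetime_ghg():.0%}")
```

The real framework disaggregates each of these stages much further (materials, construction equipment, electricity generation, mode-by-mode travel), but the comparison logic is the same: a one-time construction term plus recurring operating terms, summed over the analysis horizon.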
Environmental Impacts and Household Costs
Smart growth redevelopment around transit stations requires initial monetary and greenhouse gas (GHG) expenditures, but has the ability to reduce life-cycle costs and emissions when compared to business-as-usual developments. Smart growth around the Gold Line would cost an additional $260 million for redevelopment and create an additional 100,000 metric tons of GHG emissions during construction (Figure 2). However, over a 60-year period, each household could avoid emitting as many as six metric tons of GHG each year, which would more than offset the initial construction emissions. When considering all households combined, the proposed mixed-use developments around the Gold and Orange lines could reduce GHG emissions by over 35 percent compared with business-as-usual developments. Similar reductions are also found for energy consumption, particulate matter emissions, and the potential for smog formation. On average, high-density TOD requires less land per person and leads individuals to reduce their transportation-based emissions by as much as 70 percent through changed travel habits. The upfront cost to construct a transit line and the cost of higher density development around stations are offset by emission reductions over time from residents who are able to change their behaviors and break away from car-dependent habits.
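A back-of-the-envelope payback calculation helps make this trade-off concrete: divide the extra construction emissions by the annual emissions avoided across all new households. In the sketch below, the 100,000-ton upfront figure and the six-ton-per-household annual saving (an upper bound) come from the paragraph above, while the household count is an assumption made only for the sake of the example.

```python
# Toy emissions-payback estimate; the household count is assumed, not from the study.
UPFRONT_EXTRA_GHG_T = 100_000       # extra construction emissions vs. business-as-usual (metric tons)
HOUSEHOLDS = 30_000                 # assumed number of new transit-oriented households
SAVINGS_PER_HH_T_PER_YR = 6         # avoided emissions per household per year (upper bound)
HORIZON_YEARS = 60

annual_savings = HOUSEHOLDS * SAVINGS_PER_HH_T_PER_YR
payback_years = UPFRONT_EXTRA_GHG_T / annual_savings
net_avoided = annual_savings * HORIZON_YEARS - UPFRONT_EXTRA_GHG_T

print(f"Emissions payback: {payback_years:.1f} years")
print(f"Net avoided GHG over {HORIZON_YEARS} years: {net_avoided:,} metric tons")
```

Even with much more conservative per-household savings, the upfront emissions would be recovered within a small fraction of the 60-year analysis period, which is why the long-run totals favor the transit-oriented scenario.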
Over a 60-year timeframe, people using more energy-efficient transportation modes and less energy at home will generate fewer emissions than people living an auto-dependent lifestyle. But this wouldn’t be possible without the upfront monetary investment to construct HCT and mixed-use developments, and the initial increase in GHG emissions from their construction. The process of constructing higher density buildings around transit requires more concrete, steel, and the operation of construction equipment, which produces more GHG emissions than construction of lower-density, business-as-usual developments. Despite requiring more money to construct higher density buildings, the overall cost for developing land around transit would be cheaper than a similar sprawling development because purchasing less land for higher density development is substantially cheaper than purchasing more land to enable low density building construction. Developers would likely charge higher rents to tenants in TODs because of higher construction costs and prime locations. But residents can actually save money over time by reducing their costs for transportation and energy consumption in buildings. Figure 3 shows the potential benefits of smart growth around the Gold Line compared to an equal amount of business-as-usual development.
Along two of LA’s eight HCT lines, enough land exists to support 96,000 dwellings and 32 million square feet of commercial space in mixed-use developments. Construction of these buildings for smart growth can produce up to 15 percent more initial GHG emissions than constructing a comparable sprawling development near the urban core. While upfront emission expenditures may not be attractive to policy makers, they enable overall emission reductions from the entire transportation and land use system by up to 35 percent over 60 years. These reductions comprise three primary factors: 1) greater household mobility options that reduce automobile reliance; 2) concentrated pockets of residential and commercial space that reduce trip distances; and 3) households moving into smaller, more energy efficient dwellings that also benefit from reduced electricity use. The life-cycle assessment framework for integrating transportation and land use planning isolates the primary drivers of environmental impacts and monetary costs, and can better inform policy and planning goals.
Effective Planning to Reduce Life-Cycle Impacts
Planning policies that facilitate smart growth need to be in place at various stages of development to realize the full potential of transit-oriented neighborhoods. Transit-oriented district planning and relaxed parking requirements can encourage developers to create TODs by reducing the time and financial costs involved in complex zoning requirements. Allowing denser development, encouraging mixed-use spaces, and pricing parking appropriately can encourage people who live in smart growth neighborhoods to reduce energy consumption and automobile travel. Together, these policies can reduce the upfront barriers to creating livable, walkable communities around high-capacity transit.
Conclusion
Pairing infrastructure construction with transportation changes shows how the initial higher costs from smart growth are far outweighed by the long-term improvements. Our framework can be used to show policymakers the benefits of smart growth across a wide suite of environmental and economic indicators. The combined assessment of transportation and land use provides a link between land development and behavioral changes that can be used to advise policymakers and planners throughout the development process.
This article is adapted from “Transit-Oriented Smart Growth Can Reduce Life-Cycle Environmental Impacts and Household Costs in Los Angeles,” originally published in Transport Policy.
Further Readings
Mikhail Chester, Stephanie Pincetl, Zoe Elizabeth, William Eisenstein, and Juan Matute. 2013. “Infrastructure and Automobile Shifts: Positioning Transit to Reduce Life-Cycle Environmental Impacts for Urban Sustainability Goals,” Environmental Research Letters, 8(1): 015041.
Matthew Nahlik and Mikhail Chester. 2015. “Policymaking Should Consider the Time-Dependent Greenhouse Gas Benefits of Transit-Oriented Smart Growth,” Transportation Research Board Annual Meeting 2015, Paper #15-0254.
Matthew Nahlik and Mikhail Chester. 2014. “Transit-Oriented Smart Growth Can Reduce Life-Cycle Environmental Impacts and Household Costs in Los Angeles,” Transport Policy, 35: 21–30. | https://www.accessmagazine.org/fall-2015/life-cycle-impacts-of-transit-oriented-development/ |
I believe that Tacoma needs a variety of living wage jobs capable of supporting families. If elected I will work to ensure Tacoma encourages jobs in a variety of fields, and make the process of starting or growing a business more accessible. Working for a locally owned business for 12 years, I understand how difficult it can be to navigate that process, but believe with improvement we can increase the number of job opportunities for our citizens. I am passionate about building partnerships connecting our youth with skilled trades, bringing more mid-size businesses to Tacoma, encouraging our existing business to grow, and improving our permitting and planning process to incentivize sustainable local growth.
Housing Options for Everyone
Tacoma should be able to offer affordable housing to all its residents. I want to increase affordable housing options across the city to accommodate everyone, no matter their age or field of employment. I will work to restructure building incentives so perks such as tax breaks and increased building height limits go to contractors with a commitment to affordable housing. My goal is to define what affordable housing truly means for our residents so we can accommodate our current and incoming population. Having a background in urban planning, it’s important to me that we structure our growth intelligently and sustainably across the city. I will work to ensure that our vulnerable populations, such as lower income families and seniors, all have a place to call home.
Improving Infrastructure
Tacoma is a wonderful city, but as we continue to grow, much of our infrastructure needs to be updated and improved. I will work to upgrade and maintain our streets by repairing potholes, push for consistent sidewalks and crosswalks across our community, and will look at options to ease traffic. Another issue I want to address is our aging stormwater system which is undersized and overflows into Puget Sound during heavy rainfall, washing trash and harmful substances into the sound. I support upgrading our storm water infrastructure and looking at creative new ways to address this citywide and preserve the healthy ecosystems in Puget Sound. I also support our valuable municipal projects and investments, and will work to keep them financially sustainable.
Public Transportation and Walkable Neighborhoods
As a former public transportation planner I want to work to decrease congestion on our streets and keep our traffic moving, along with guiding growth around transit hubs and supporting alternative transportation options. I will work to connect and improve our urban trails network, along with bike paths and sidewalks, so citizens have safe, efficient alternatives to driving. As our city grows, I also want to structure our increasing density around transit hubs to create spaces where people can live and work in their communities.
Bright Futures for our Families
As the mother of two young boys, I want Tacoma to be a place where all of our children thrive. With our city population growing, I believe it’s important to keep our neighborhoods safe and invest in our fire and safety staffing to keep people out of harm's way. I also support building community partnerships and working to keep information on city projects transparent and accessible. It is important to me that we create a culture where citizen input on projects can be heard, so community concerns can be addressed in a timely and respectful manner.
Homelessness
Currently the city of Tacoma is facing a crisis with an increasing number of residents living on the streets. Last January, I participated in the ‘Point in Time’ homeless count, where I learned that many of the homeless self-identified as having mental health and substance abuse issues, and a good portion of those had been homeless for many years. Among them were families, veterans, teenagers and senior citizens: people just like you and me. I will work towards creating a stable, safe space for these displaced individuals with basic amenities such as water, bathrooms, trash pickup, safety patrols, and access to services and support when they are ready to access them. Then the real work begins to address the root causes of their situations. Our homeless need support and stability to get off the streets.
Preserving Green Spaces and Ensuring Tacoma has a Sustainable Future
As an environmentalist I have always cared deeply about preserving green spaces and minimizing my footprint. Tacoma is a beautiful city and I want to work towards its long-term sustainability. We need to support industries that will continue the commitment we have to cleaning up our industrial areas and seeking sustainable options that will improve our local ecosystems. By investing in a sustainable future, we can create more jobs for our community while decreasing pollution across the city. Finally, I support increasing our city’s tree canopy through the creation of a tree committee to help the city reach its goals of increased canopy coverage. Planting more trees helps increase air quality, supports a healthier urban ecosystem, and will help us achieve our goal of a 30% tree canopy by 2030. | https://www.meredithneal.com/copy-of-meet-meredith
Pro-rail policies in Turkey: A policy shift?
Date: 2007-01-01
Author: Babalik-Sutcliffe, Ela
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Growing concerns over environmental issues and the increasing role of the transport sector in creating environmental pollution have had a significant impact on transport policies and investments worldwide. There is increasing emphasis on the importance of transporting people and goods in the most sustainable ways, and the role that environmentally friendly modes, and particularly railways, should play in transport. It appears that in Turkey too a policy shift has taken place in favour of rail transport as a consequence of both environmental concerns and the will to integrate into international transport networks, and consequently international markets. This paper summarizes research that looks into the policy documents, annual investment programmes, and actual investments in the transport field in Turkey within the past two decades. While a policy shift in favour of railway development can be observed in policy documents, the study shows that actual investment has been extremely limited for the improvement and expansion of the railways in Turkey. The paper focuses on this gap between policy and practice in national transport network development, and aims to discuss why it currently is, and may continue to be, difficult for Turkey to deliver successfully a pro-rail policy. The discussion helps one understand how a country's transport history, past policies and mobility traditions affect the delivery of new policies, particularly when they represent a significant policy shift.
Subject Keywords: Transportation
URI: https://hdl.handle.net/11511/63381
Journal: TRANSPORT REVIEWS
DOI: https://doi.org/10.1080/01441640601151564
Collections: Department of City and Regional Planning, Article
Suggestions
Analysis of public use bicycle systems from a product-service system perspective
Keskin, Duygu; Börekçi, Naz Ayşe Güzide Z.; Department of Industrial Design (2006)
Recent studies on sustainability indicate that the concept of the Product-Service System (PSS) is a promising approach to dematerialise the economy and reduce the environmental impacts of industrial activity. Considering the urban scale, mobility is one of the challenges that should be addressed and improved for a better quality of life for residents and lower pollution levels for the environment. In this respect, this study aims at analysing Public Use Bicycle (PUB) systems from a PSS perspective. Six PUB ...
Political and legal documents for ensuring sustainable urban transportation: a comparative analysis of USA, UK and Turkey
Yılmaz, Begüm; Babalık, Ela; Department of Urban Policy Planning and Local Governments (2011)
Principle of sustainability, as in all areas, is becoming a major issue in urban transportation planning all around the world. Urban transportation political and legal documents are statutory basis of urban transportation plans and projects and developed countries have established urban transportation political and legal documents to ensure that local governments implement plans and projects in line with the sustainability principle. In this thesis, it is intended to analyze the sufficiency of central gover...
New airports and their impact on mobility and air traffic : an analysis of Sabiha Gökçen international airport
İlhan, Burhan; Babalık, Ela; Department of City and Regional Planning (2006)
In the process of globalization, air transportation plays an important role for the development of regional and national economy. There has been considerably increasing demand for air transportation and will continue in the foreseeable future. In this respect this research demonstrates a case study of Sabiha Gökçen International Airport, the second and recently built airport of İstanbul, and its relation with the region. The main aim of this study is to understand the initial transportation benefits of the ...
Planning and design criteria to make urban transport more sustainable: the case of Baku
Bayramoğlu, Gülnar; Babalık, Ela; Department of City and Regional Planning (2011)
Since the industrial revolution, technological developments and increased population have caused environmental damages such as lavish use of sources, pollution increased waste areas, poisoned wastes, harmless gasses, decreased green areas, global warming and climate change, harming of the ozone layer, decreased forests areas and natural sources. Appearance of the term of Global Warming makes the humanity to revise their facilities in all fields. To protect the nature and architectural environment, firstly, ...
Best Option for Reducing On-Campus Private Car-Based CO2 Emissions: Reducing VKT or Congestion?
Altıntaşı, Oruç; Tüydeş Yaman, Hediye (2016-01-01)
Reducing carbon emissions, especially that of private car use, is an inevitable goal of sustainable transportation policies. This study focuses on i) determination of the current level of CO2 emissions from private cars on the Middle East Technical University (METU) campus and ii) evaluation of the impact of different emission reduction scenarios. Such scenarios were based on hypothetical conditions that can either reduce Vehicle-km-travelled (VKT) or congestion on campus, which are defined as the two major...
E. Babalik-Sutcliffe, “Pro-rail policies in Turkey: A policy shift?,” TRANSPORT REVIEWS, pp. 485–498, 2007. [Online]. Available: https://hdl.handle.net/11511/63381.
The green and inclusive urban mobility challenge stems from limited, low-quality public transport and non-motorized infrastructure, combined with the environmental hazards of excessive motorization. The transport sector is responsible for more than one-third of the carbon dioxide (CO2) emissions in Latin America, and CO2 emissions are expected to more than double by 2050 as a result of increased motorization and vehicle use. In Argentina, one of the most urbanized countries in Latin America with over 90% of its population living in urban areas, cities have continued to grow rapidly, further worsening congestion and air pollution as citizens opt to use private motor vehicles rather than the struggling public transit systems for their transportation needs. Despite the rapid growth in urbanization and motorization, the pattern of absolute automobile dependence in most Latin American cities is not fixed. This provides a clear opportunity to develop long-term policies reinforcing the link between land use, transport and environment.
Approach
The Sustainable Transportation and Air Quality Project, financed by the Bank’s Global Environment Facility (GEF), helped strengthen the institutional capacity of the municipalities of Rosario, Córdoba, Tucumán and Posadas, in order to integrate the concept of sustainable transport into the public policy agenda. Efforts were undertaken by these cities in the development of master plans, planning instruments, new policies and regulatory frameworks, all of which were designed to create incentives for more efficient and sustainable transport use.
All four beneficiary cities introduced numerous initiatives aimed at enabling the use of less energy intensive transport modes, including: the designation of exclusive bus lanes for public transport, the construction of new bikeways, promotion of bike-sharing programs, sustainable transportation campaigns, development of pedestrian corridors, and establishment of a transit monitoring center, among others. All of these initiatives toward cleaner, less energy intensive modes of transport are expected to significantly reduce emissions in the near future.
Results
From 2010 to 2014 the Argentina GEF Project financed a total of 18 km of bikeways (12 km in Rosario and 6 km in Cordoba), which resulted in a 50 percent increase in the use of bikes. Regarding greenhouse gas (GHG) emission reductions, initial information available for the new bikeways in the city of Rosario demonstrates that the equivalent of 165.32 metric tons of CO2 was saved in the year following construction, a 0.6 percent decrease in comparison to the scenario without the Project. Considering that the estimated social value of carbon is US$30 per metric ton of CO2, the value of these savings is equivalent to approximately US$5,000 for the first year of this Project. Cumulative savings of approximately 2,000 CO2-equivalent tons are expected eight years after the completion of the civil works.
Additionally, to promote the use of mass public transport, the following results were achieved:
- 3.2 km of a Bus Rapid Transit (BRT) project in Posadas was designed and financed.
- Rosario received financial support to construct 9.5 km of BRT-type lanes on the North-South corridor. These lanes are expected to increase the number of public transport users by 3,000 new riders per day.
- The Municipality of Cordoba received financial assistance to construct a pilot BRT-type project (5.3 km), that reported a reduction of 24% in commercial travel time.
- Sustainable transportation campaigns and a Bike-sharing program were launched in Rosario, including 480 public-use bicycles, and is expected to provide 4,000 new rides per day. | https://www.worldbank.org/en/results/2016/09/30/reducing-ghg-emissions-and-promoting-sustainable-transport-in-argentine-cities |
Today, more than half of the world's population and about 75% of Turkey's population lives in cities. That is why cities play a key role in meeting basic human needs, from drinkable water to healthy air, and, more generally, in quality of life. How we want our cities to be has, in fact, become directly related to how we want to lead our lives.
For this reason, TEMA Foundation issued a call to candidate mayors prior to the local elections that will take place in March 2014.
1. Protecting Farm Lands: A rapidly growing population creates pressure for construction on agricultural lands around cities. Because of this rapid construction, we are losing fertile soil that took thousands of years to form.
TEMA Foundation requests that, for the sake of sustainable living, areas close to agricultural land not be opened for construction. Preservation of agricultural land is a guarantee of food security.
2. Protecting Natural and Cultural Heritages: It is important for cities to protect their natural and cultural heritage.
TEMA Foundation requests that, during the evolution and transformation of cities, municipalities carry out their responsibility to protect this heritage and its structures.
3. Protecting Green Areas: Green areas constitute the link between people and nature and regulate the balance between the two. Green areas also help to improve people's quality of life.
TEMA Foundation requests that, Green areas are not places for construction and they must be protected.
4. Waste Management and Environmental Protection: Although it’s important to have storage and all other waste management facilities in the city, it’s also important to choose their locations wisely and not close to cities and forests.
5. Adaptation to Climate Change: We witness the effects of climate change all around Turkey and the world. There has been an increase in observed floods and droughts, which affect our lives and cause a great deal of soil loss.
TEMA Foundation requests that, for combating the climate change and adaptation, new ways of sustainable transportation and energy conservation have to be found and renewable energy and improved waste management facilities have to be supported.
6. Provide Sustainable Transportation: Transportation policies, because of the pressing problems of air pollution and traffic in cities, are of utmost importance in terms of quality of life.
TEMA Foundation requests that public transportation systems be improved, that a holistic plan covering the various modes of transportation be developed, and that bicycle use be promoted in cities.
7. Improve Urban Infrastructure: Urban infrastructure plays very important role on healthy growing cities and improving quality of life.
TEMA Foundation requests that our water be protected, that water channels be improved to prevent leakages, that electricity distribution networks be improved for energy efficiency and savings, and that an innovative method be developed to prevent the mixing of rainwater with sewage.
8. Healthy Cities: In Turkey there has been a very rapid evolution and transformation of all cities.
TEMA Foundation requests protection and improvement of green areas in the cities and ecological concerns to be integrated in city planning.
9. Ensure Active Participation of Stakeholders in Decision Making: Municipalities should take decisions with the involvement of all interested parties.
TEMA Foundation requests that, to prevent environmental problems, local administrations adopt participatory environmental decision making and ban projects with potential harm to the environment.
10. Establish Ecologically Literate Municipalities: Ecological literacy helps people to understand nature, to learn nature's principles and to communicate with nature.
TEMA Foundation dreams of Ecologically Literate Municipalities. | http://www.tema.org.tr/web_14966-2_2/entitialfocus.aspx?primary_id=1226&target=categorial1&type=2&detail=single |
Tampa ranks in the top 10 most dangerous areas in the nation for pedestrians. The top 2 causes are drunk driving and speeding. We must prioritize safety as our #1 transportation goal. We must ensure people with disabilities, children, and seniors (who are most likely to be struck by vehicles) are considered in new infrastructure and transit development.
Our priorities:
-Enforce safer speeds in our neighborhoods
-Protect pedestrians with more walkability and modern lighting/infrastructure design
-Fill the potholes in our streets with sustainable, environmentally friendly material.
-Integrate transit stops into new developments
-Implement e-bike voucher program by building a local public-private coalition to get people off of our congested roads and onto healthier carbon-free alternatives (biking, walking, scooters, skateboards).
Housing
Domestic violence and lack of affordable housing are the 2 leading causes for homelessness in our city. We must encourage smart growth and green development while protecting and preserving the legacy and beauty of Tampa.
Our priorities:
-Embrace diverse housing development including duplexes, triplexes, quadplexes, bungalow courts, small apartment buildings, and tiny homes in an effort to reduce homelessness and housing insecurity.
-Promote green roof initiatives and integrate vegetation and green spaces into new developments.
-Adjust rental requirements to reflect current wages, inflation, and affordable housing shortages.
-Create initiatives for local landlords and potential landlords to provide sustainable housing at reasonable costs.
Air quality & Livability
Tampa is one of the most beautiful places in the world to live. We must protect this natural beauty by prioritizing air quality, picking up trash, and protecting our water.
Our priorities:
-Phase out water bottles and other excess, wasteful plastics to protect the environment.
-Promote community cleanups by building coalitions with local nonprofits and private sector leaders to prevent further pollution into the Hillsborough River and Tampa Bay.
-Provide more trash cans (sustainable and durable) to reduce littering and save public funds.
-Expand solar sidewalks and solar lighting in order to save tax dollars and provide a more equitable city. | https://www.tyler4tampa.com/copy-of-platform |
In 2010, Mansfield Frazier received a grant from ReImagine Cleveland to start Chateau Hough, a vineyard set on a formerly vacant lot in Cleveland's Hough neighborhood.
In this excerpt from “The Empty House Next Door,” author Alan Mallach challenges cities to reimagine vacant properties into pocket parks, urban farms and other green space as a permanent commitment to sustainable infrastructure.
Editor’s note: The following is an excerpt from The Empty House Next Door: Understanding and Reducing Vacancy and Hypervacancy in the United States (Lincoln Institute of Land Policy, 2018). In this policy report, city planner and housing advocate Alan Mallach assesses the problem of vacant properties in the United States and recommends creative mitigation strategies for local officials, nonprofits, and community leaders — particularly for America’s legacy cities. In this excerpt, Mallach argues that converting vacant properties to community gardens and parks should not be seen as a temporary beautification strategy, but rather a long-term paradigm shift that prioritizes the “greening” of a city’s infrastructure.
Vacant and abandoned properties are a familiar part of the American landscape, from the boarded row house in North Philadelphia to the empty factory in Detroit to the collapsing farmhouse in rural Kansas. These structures can devastate the neighborhood, undermine the neighbors’ quality of life, diminish the value of nearby properties, and reduce local tax revenue. Yet vacant properties can also become community assets. Thousands of vacant commercial and industrial buildings have been converted to apartments and condominiums, and vacant lots have found new lives as community gardens and parks.
Perhaps the most significant vacant property strategy to emerge over the past decade is what has come to be known as “greening” vacant lots: putting them to such environmentally friendly uses as community gardens, vineyards, and tree farms. It’s not a fundamentally new idea. European allotment gardens — small plots for people living in high-density urban areas to cultivate — go back to the 19th century and are still widespread there. In the United States, the lineage of community gardens goes back to the “Victory Gardens” of World War II, if not earlier.
Today’s approach to community greening may not be new, but it is very different. While food security and recreation, which were uppermost in the minds of 19th-century European advocates of allotment gardens, still matter, today’s explicit connection between urban greening and the strategic reuse of vacant properties represents a new and significant departure from previous thinking.
As vacant lots proliferated in older American cities in the 1980s and 1990s, community gardening was actively promoted by community organizations and agricultural groups and often encouraged by local officials as a temporary use for properties awaiting redevelopment. Community gardens, however, while valuable and productive, depend on a critical mass of neighborhood residents eager to till the soil, something that is both uncertain and fluctuating over time. With vacant lots continuing to proliferate, particularly in legacy cities, people needed to find other ways to use lots.
A critical step in moving from community gardens to a broader approach to greening vacant lots was the collaboration between Cleveland Neighborhood Progress (CNP), a citywide nonprofit intermediary, and Kent State University School of Architecture’s Cleveland Design Collaborative under the creative leadership of CNP’s Bobbi Reichtell and Kent State’s Terry Schwarz. This partnership provided Cleveland’s officials, nonprofits, and community leaders with a vision of how the city’s thousands of acres of vacant land could become an asset for their city’s future. One part of this effort was the publication of the Cleveland Vacant Land Reuse Pattern Book, a catalog of alternative green uses for vacant land with information on the costs and the materials needed to carry out each alternative.
In 2009, using the options in the Pattern Book, CNP and the City of Cleveland initiated Re-Imagining Cleveland, a competitive vacant land reuse grant program, to empower neighborhood residents and other community stakeholders to turn vacant land bank property into community assets and pilot projects. With $500,000 in grant funds, they awarded small grants to 56 projects on nearly 15 acres, including environmentally oriented projects such as pocket parks, rain gardens, and agricultural projects including gardens, orchards, and vineyards.
More recently, both Detroit and Baltimore have created even more detailed pattern books for reusing vacant land. Detroit Future City’s Field Guide for Working with Lots and Baltimore’s Green Pattern Book, created in partnership with the U.S. Forest Service, are invaluable resources for community organizations and activists not only in those two cities, but in any city in the United States.
Cleveland was not alone in exploring the potential of vacant lots. A second pioneering city was Philadelphia, where the nearly 200-year-old Pennsylvania Horticultural Society (PHS) took the leading role. Although PHS had supported community gardens in Philadelphia since the 1970s, in recent years their efforts have broadened to encompass a comprehensive and multifaceted citywide greening strategy. Two Philadelphia initiatives, one led by PHS, are particularly worth noting.
A second Philadelphia initiative addresses a concern shared by nearly all older American cities: combined sewer overflow (CSO) in sewerage systems where the same system handles both sanitary and stormwater flows. At times of heavy rainfall, sewer flows overwhelm the system’s capacity, leading to discharges of untreated or partially treated sewerage into rivers and lakes. CSO is a major source of water pollution in violation of the Clean Water Act, and the U.S. Environmental Protection Agency has aggressively pressed cities to comply with the act. Until recently, compliance was considered achievable by spending billions of dollars to build either separated sewer systems or massive underground tunnels and holding tanks.
Philadelphia describes its approach this way: "We’re recreating the living landscapes that once slowed, filtered, and consumed rainfall by adding green to our streets, sidewalks, roofs, schools, parks, parking lots, and more—any impermeable surface that’s currently funneling stormwater into our sewers and waterways is fair game for greening. It’s going to take decades of work, but when it’s all done, we’ll have reduced the stormwater pollution entering our waterways by a stunning 85 percent" (emphasis in original).
The city estimates that implementing this greening strategy will save Philadelphia $5.6 billion, compared to complying with EPA mandates through conventional engineering solutions. Similar efforts are now underway elsewhere, including Milwaukee, Syracuse, Cleveland, and Detroit.
The strategies pioneered in Cleveland and Philadelphia have been embraced by hundreds of towns and cities across the United States, while research has identified clear benefits from greening in the form of improved health, healthier food, lower crime and higher property values. Unresolved questions remain, however, including the most fundamental — is this a long-term strategy for legacy cities or only a transitional effort? If the latter, what is the expected outcome?
In the few years since the start of Philadelphia’s and Cleveland’s pioneering efforts, greening has begun to come of age as a multifaceted response to using vacant land to improve residents’ quality of life. Many cities, though, have barely scratched the surface; thousands of lots remain untreated and are at best intermittently mowed and cleaned. Looking to the future, two distinct, but closely related obstacles stand in the way of building sustainable greening efforts in legacy cities.
The first problem is lack of resources. Although the cost of greening or maintaining any individual lot is modest, the vast number of vacant lots in legacy cities means that the total cost can easily become substantial. The Cuyahoga County Land Bank spent $2.23 million from 2011 to 2015 simply to clean and mow the vacant lots it created through demolition. The cost to turn each vacant lot into a garden, a park, or a vineyard under the Re-Imagining Cleveland grant program typically ran between $3,000 and $6,000 — not much, but substantial if multiplied by the number of lots awaiting greening in the typical legacy city. Cleveland is having difficulty raising enough funds to expand their program.
Philadelphia devotes more public resources than almost any other city to greening, and yet the great majority of vacant lots in that city are still waiting their turn. In contrast to economic development projects, greening projects rarely yield direct cash returns and the benefits of increased property values, improved health, or reduced crime tend to be reflected indirectly if at all in municipal balance sheets.
Long-term sustainability of greening projects is another challenge. Maintaining attractive green spaces can be labor-intensive: While many neighborhood-based greening projects last for years, others tend to fade away as the individuals who provided the initial impetus move away or on to other things.
Many neighborhoods even lack the critical mass of concerned neighbors to get greening projects started in the first place. This is part of the reason that cities have begun selling side lots to individual homeowners, even while recognizing that these programs may have uncertain long-term outcomes. By the fall of 2017, the Detroit Land Bank had sold off more than 8,000 parcels to adjacent homeowners as side lots.
Cities have realized that to succeed, a greening infrastructure needs to be put in place to support the hundreds of individuals and groups that create and maintain green spaces around the city. Even in cities with strong support systems like Philadelphia and Baltimore, resources are limited and far more lots remain untouched than greened, while far too many cities lack even a basic citywide greening infrastructure.
Many public officials view committing formerly developed urban land to permanent green uses that lead neither to new construction nor to population regrowth as the equivalent of relegating the land to nonuse. As a result, greening is often undervalued compared to other forms of public investment.
Underlying these issues of cost and maintenance is a larger question: Should greening be seen as a short-term transitional activity or a long-term use of urban land? Cities like Detroit, Cleveland, and Baltimore have lost population for many decades and despite regrowth in some areas, they have no realistic prospect of regaining their peak population in the foreseeable future. Still, many local officials and others continue to see greening as, at most, a short-term interim step until “a more desirable type of investment presents itself, such as [the] construction of a new home,” as one Ohio land bank official put it.
From that perspective, many public officials view committing formerly developed urban land to permanent green uses that lead neither to new construction nor to population regrowth as the equivalent of relegating the land to nonuse. As a result, greening is often undervalued compared to other forms of public investment.
Large inventories of vacant land, however, will be a long-term reality in all but a handful of America’s legacy cities. Thus, viewing greening as no more than a short-term strategy handicaps the efforts of cities to rebuild their quality of life and ultimately their economy and market strength. At the same time, certain areas in each city have the potential for short- or medium-term regrowth. Planners in legacy cities need to assess which areas have the most potential for regrowth and ensure that vacant land in those areas is available for growth. They should also establish sound ground rules for long-term greening in other areas, recognizing that becoming a greener city can be a powerful impetus for economic and social revitalization.
Cities need to evaluate to what extent — by looking at market conditions, financial realities, demographic data, and economic trends — their vacant lots, both present and projected, can be reused for development within 10 to 15 years. If the answer is, as it often will be, that many lots will not be developed, that city should begin to plan for long-term green reuse, making what DFC calls a “green culture shift.” That demands thinking creatively about how long-term greening can be accomplished — reflecting the unique character of each area — and building the support system and infrastructure to ensure that green uses remain sustainable for the future.
Excerpted from The Empty House Next Door: Understanding and Reducing Vacancy and Hypervacancy in the United States, by Alan Mallach. Copyright © 2018 by Lincoln Institute of Land Policy. Reproduced with permission of Lincoln Institute of Land Policy, Cambridge, Massachusetts. | https://nextcity.org/features/view/why-the-greening-of-vacant-land-is-a-smart-long-term-investment-in-cities |
What is Moment of Inertia?
A moment of inertia describes how hard it is to change the rotational speed of an object about its axis; loosely, it is a measure of how much torque is needed to change how fast the object spins. All rotating objects have an axis, or a point that they rotate around. Some objects require more torque to change the speed of this rotation than others. Those whose rotational speed changes easily have a low moment of inertia, while those that are difficult to change have a high one.
This concept was formally introduced by Swiss mathematician and physicist Leonhard Euler in his 1765 book, Theoria Motus Corporum Solidorum Seu Rigidorum, or Theory of the Motion of Solid or Rigid Bodies. The formula used to calculate it is I = k m r², in which I equals the moment of inertia, k equals the inertial constant that depends on the shape of the object, m equals mass, and r equals the distance between the axis and the rotational mass. For bodies with an awkward shape and no obvious constant, a series of experiments can help to determine the value.
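To make the formula concrete, here is a minimal Python sketch (not part of the original article) that evaluates I = k m r² using the standard inertial constants for a few common shapes; the 3.00 kg mass and 0.500 m radius are arbitrary illustrative values.

```python
# Minimal sketch of I = k * m * r**2 with standard shape constants k.

def moment_of_inertia(k, mass_kg, radius_m):
    """Moment of inertia I = k * m * r^2; k depends on how the mass is distributed."""
    return k * mass_kg * radius_m ** 2

SHAPES = {
    "hoop (axis through center)": 1.0,
    "solid disk (axis through center)": 1.0 / 2.0,
    "solid sphere (axis through center)": 2.0 / 5.0,
}

for name, k in SHAPES.items():
    I = moment_of_inertia(k, mass_kg=3.00, radius_m=0.500)
    print(f"{name}: I = {I:.3f} kg*m^2")
```

Note how, for the same mass and radius, the hoop (all mass far from the axis) has the largest moment of inertia and the solid sphere the smallest.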
A common example is to think of an ice skater who is spinning. As the skater begins a spin, she stretches her arms and back out and away from her body to create a slow spin. As she reaches the end of the spin, she tucks her arms in and tightens her frame, causing her to spin much more rapidly. She has consolidated her mass closer to the axis, reduced her moment of inertia, and reduced the effort needed to spin quickly. With no change in force, she is able to accelerate her rotational speed.
There are many practical applications for this measurement. Car manufacturers study it carefully to determine how quickly a car will spin out of control. The goal is to create a high moment so that the car is less likely to lose control in a spin.
Many sports also use the concept, with golf, baseball, and diving topping the list. Divers are concerned with which angles they can use to create the lowest moment and the fastest spin in order to complete the move and seamlessly enter the water. Baseball players and golfers both work to create a smooth, effective swing, to hit the ball with the correct angles and the force required to make it fly far without excessive muscle use.
Discussion Comments
Do bicycles and gears use a moment of inertia too? I think a one-speed bike going up a hill has a high moment of inertia. It sounds like a lot more fun experimenting with spinning cars - sounds like a MythBuster episode!
It's fascinating that our minds and bodies can intuitively use something that sounds so mathematical to explain. I doubt many of the athletes mentioned pulled out their calculators before they made their moves!
Isn't it more correct to describe the increase in rotation of a skater as they bring their arms in as conservation of angular momentum rather than inertia?
Even in space, where there is no resistance to rotation, if you pulled rotating mass in closer to the axis of rotation, it would also speed up. | https://www.allthescience.org/what-is-moment-of-inertia.htm
Many people don't know too much about angular momentum—and that's fine. But what about figure skaters? Whether they understand the concept of angular momentum doesn't matter but they use it in one of the all time classic skating moves. You've seen it before. The skater starts off in a standing position and spins about the vertical axis. After a few rotations, the skater pulls both arm in closer to the body and spins faster. In physics, we call this conservation of angular momentum.
Just as an example, here is this same maneuver performed on a rotating platform instead of on ice.
Really, you can try something like this on your own. Sit on a nice spinning chair or stool. Start with your arms stretched out as you spin and then bring your arms in. Don't barf.
But what exactly is angular momentum? In short, it is something that we can calculate that can be conserved. That's a tough definition, so let me give an example of a conserved quantity—like mass (which is only mostly conserved). Suppose you add some baking soda to vinegar. If you've ever done this, you will see that the resulting mixture foams and produces some gas. But here's the cool part. If you measure the mass of the stuff you start with (vinegar and baking soda) it's the same as the mass of the stuff you end up with (carbon dioxide and water and sodium acetate). Boom, mass is conserved. It's the same before and after.
OK, I have to point out that mass isn't always conserved. In a nuclear reaction, the mass of the stuff before doesn't have to be equal to the mass of the stuff after. But if you look at energy (and include mass in the energy), then energy is conserved.
Now for angular momentum. Angular momentum is a quantity that we can calculate for a rotating object. It's the product of the angular velocity (how fast it spins—represented with the symbol ω) and the moment of inertia (using the symbol I), so L = Iω. I think most people are OK with the idea of the angular velocity—but the moment of inertia thing is a bit more complicated. Basically, the moment of inertia is a property of an object that depends on the distribution of the mass about the rotation axis. If you have more mass farther away from the axis of rotation, the moment of inertia is larger than if that mass were close to the axis. | https://www.wired.com/story/how-ice-skaters-turn-physics-into-astonishing-spins/
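As a rough illustration of the skater effect, the sketch below applies conservation of angular momentum (I₁ω₁ = I₂ω₂). The moment-of-inertia values are invented purely for illustration; they are not measurements of any real skater.

```python
import math

def new_spin_rate(I_initial, omega_initial, I_final):
    """With no external torque, I1 * w1 = I2 * w2, so w2 = I1 * w1 / I2."""
    return I_initial * omega_initial / I_final

I_arms_out = 3.0                  # kg*m^2 -- illustrative guess, arms extended
I_arms_in = 1.0                   # kg*m^2 -- illustrative guess, arms pulled in tight
omega_start = 2 * math.pi * 1.0   # rad/s (one revolution per second)

omega_end = new_spin_rate(I_arms_out, omega_start, I_arms_in)
print(f"{omega_start / (2 * math.pi):.1f} rev/s -> {omega_end / (2 * math.pi):.1f} rev/s")
```

With these made-up numbers, cutting the moment of inertia to one third triples the spin rate, which is the same qualitative behavior the skater shows.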
It’s a classic device that demonstrates conservation of angular momentum and inertia. Dancers use this effect while doing pirouettes – when drawing their arms closer to the body they can spin faster.
With your arms closer to your body you can spin faster than with arms spread. This happens because of the conservation of angular momentum. You and the platform form a system. Angular momentum is a quantity related to the system’s mass, its size (those quantities make up the moment of inertia) and the rotational speed. Mass of the system remains the same, so when its size increases, rotational speed decreases and vice versa – smaller size means higher rotational speed (called angular velocity).
Depending on the model, stand or sit on the carousel. Ask someone to hand you the weights. Keep them close to your body. Ask someone to spin the carousel. Move the weights away from and closer to your body, keeping them in extreme positions for a moment. Notice how fast you spin. | http://cngsystem.pl/en/realizacje/mala-platforma-obrotowa-piruet-en/ |
I've learned in that in this experiment:
...the skater will start rotating faster when she brings her arms in and there is no net torque acting on her. But what would happen to her angular momentum and rotation if she only brings one of her arms inwards with the other arm sticking outwards?
edit: will a net torque act on the body? What will this cause? | https://physics.stackexchange.com/questions/81791/conservation-of-angular-momentum-experiment |
Two-time World Ice Dance Medalists Madison Hubbell and Zach Donohue teach the very common ice show partner spin which is sometimes referred to as the eagle spin or spread eagle spin. And although this is not a serious figure skating skill, the spin is fun and an excellent tool for improving partnering and building awareness of pulling in slowly to maximize spin speed (even helpful for solo spinning). Pretty much every figure skater has tried this with a friend at some point.
Madison provides a nice description of this spin, and Madison and Zach demonstrate how even from a slow start, significant spin speed can be attained when done correctly. Part of the secret lies in bending close to the ice at the beginning and creating the proper tension in the arms by leaning away from each other with the upper body. Then by standing up slowly and pulling the arms in while maintaining core engagement, the two skaters can dramatically change the moment of inertia and increase the spin speed. Madison notes that pulling together strongly but slowly typically creates more spin speed than just trying to pull together fast. This echoes how skaters pull in on solo spins as well to maintain the energy of the spin and increase the speed.
Notice that some of the skaters in the class struggle, primarily with getting a good starting position (down and leaning away from each other to create arm tension). And some struggle by not matching the pull of their partner, resulting in off-balance spins that lose energy.
State and apply the relations between angular position, angular displacement, angular speed, angular velocity, and angular acceleration to solve related problems.
State and apply the relations between the angular (or rotational) motion of a body or system and the linear (or translational) motion of a point on the body or system.
Determine the torque of an applied force and solve related problems.
Determine the moment of inertia for a system of masses or sold body and solve related problems.
State and apply Newton’s 2nd Law for fixed-axis rotation to solve related problems.
Apply work and energy to solve fixed-axis rotation problems.
Apply work and energy to solve rolling problems.
Determine angular momentum for a particle, system, or rotating body and relate to torque and angular impulse to solve problems.
Apply conservation of angular momentum to solve related problems.
1. An old record turntable is set for 45.0 rpm. Starting from rest it takes 1.50 seconds for the turntable to attain this rate. (a) Determine the angular speed of the turntable when it turns at 45.0 rpm. (b) Find the rate of angular acceleration when it is first turned on. (c) Find the angular displacement of the turntable as it accelerates. How many “turns” is this?
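For readers who want to check their arithmetic, here is a short Python sketch of the constant-angular-acceleration relations used in problem 1. It is one possible way to organize the calculation, not an official solution key.

```python
import math

rpm = 45.0
t = 1.50                                   # s, time to reach full speed from rest

omega = rpm * 2 * math.pi / 60.0           # (a) angular speed in rad/s
alpha = omega / t                          # (b) angular acceleration, rad/s^2
theta = 0.5 * alpha * t ** 2               # (c) angular displacement from rest, rad

print(f"omega = {omega:.2f} rad/s")
print(f"alpha = {alpha:.2f} rad/s^2")
print(f"theta = {theta:.2f} rad = {theta / (2 * math.pi):.2f} turns")
```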
2. The same turntable is now switched to play an LP, and it slows at a rate of 2.00 rad/s² from 45.0 rpm to 33.3 rpm. (a) Find the time for this to occur. (b) What is the angular displacement during this interval?
3. A good-ol’ boy at a stoplight revs the engine of his 426-Hemi ‘Cuda such that the tachometer goes from 1000 rpm to 4000 rpm in 0.50 s. (a) Find the average angular acceleration of the engine’s crankshaft. (b) Determine how many revolutions occur.
4. Starting from rest, a Ferris wheel of diameter 30.0 m undergoes an angular acceleration of 0.0400 rad/s². A certain rider is at the lowest point of the wheel just as it starts to move. (a) Find the velocity of the rider just as he completes a quarter of a turn. (b) Find the radial and tangential components of his acceleration at the same point. (c) How much farther must the wheel turn before the rider attains a speed of 6.00 m/s (the maximum that occurs during the ride)?
5. A helicopter rotor has a total length of 7.50 m and turns at a rate of 47.0 rad/s CW and has an angular acceleration 1.5 rad/s² CCW. For a point at the tip of the rotor, find the (a) speed, (b) radial acceleration, (c) and tangential acceleration.
6. A cyclist is traveling at 7.0 m/s to the right when she applies the brakes and slows to a speed of 5.0 m/s. Each wheel of the bicycle has radius 0.33 m and completes 5.0 revolutions during this braking period. (a) Find the time that elapses. (b) Determine the angular acceleration of each wheel. (c) Determine the linear acceleration of the bicycle.
7. A string is wrapped around the axle of a gyroscope – 30.0 cm of the string is in contact with the axle, which has diameter 2.20 mm. Starting at rest the string is pulled with a constant acceleration, which causes the gyroscope to start spinning. It takes 1.10 seconds to pull the string off of the axle and the gyroscope then spins for an additional 60.0 seconds before stopping. (a) Find the maximum angular speed of the gyroscope in rad/s and rpm. (b) What are the rates of angular acceleration and angular deceleration? (c) Find the total number of revolutions the gyroscope will spin.
8. As shown below a square metal plate with sides of length 20.0 cm is subject to three forces: F1 = 60.0 N, 180.0°, F2 = 90.0 N, 30.0°, F3 = 100.0 N, 180.0°. (a) Find the torque of each force about the lower left corner of the square. (b) Find the torque of each force about the midpoint of the upper side of the square. (c) What additional horizontal force applied at the lower left corner would make the net torque equal zero about this midpoint?
9. The pedal of a bicycle has a radius of 0.17 m relative to the center of the crank. A rider of mass 80.0 kg puts all of his weight on the pedal as it moves from top to bottom. Find the torque on the crank when the pedal is in each of the following positions: (a) r = 0.17 m, 90.0°, (b) r = 0.17 m, 45.0°, (c) r = 0.17 m, 0.0°.
10. The nut on the drive axle of a 1974 VW bus must be torqued to 407 Nm. In order to apply this much torque the mechanic attaches a wrench to the nut and places a “cheater” pipe over the wrench in order to “lengthen” it. (a) If length from the center of the nut to the end of the cheater is 0.500 m, what is the minimum force that must be applied? (b) If the mechanic is only capable of exerting 445 N of force how long must the cheater be? (c) Draw a picture showing the most effective way to arrange the cheater and apply the force.
11. A 1250 kg car is stopped at a red light on a hill of incline 10.0°. (a) Assuming the weight is equally distributed on the four tires of radius 30.5 cm each, what is the torque due to friction about the center of each wheel? (b) This torque must be “countered” by the brakes to prevent the car rolling. If the car has disk brakes and the calipers are located 11.4 cm from the center of the wheel, what is the amount of tangential force on the calipers?
12. A meter stick of mass 80.0 grams is balanced vertically on one end, but then it falls over. Find the torque about the lower end of the stick (a) when it is tilted 45.0°, and (b) right before it hits the ground.
13. In a physics demonstration a student stands upright, centered on a platform that can rotate freely. The student holds in each hand a 0.500 kg mass. Find the moment of inertia of the two masses about the axis of rotation when (a) the two masses are held at the shoulders and are 0.50 m apart, and (b) the two masses are held at arms length and are 1.50 m apart.
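A quick numerical sketch of problem 13, treating the hand-held masses as point masses (I = Σ mᵢrᵢ²) and ignoring the person's own moment of inertia, since the problem asks only about the two masses:

```python
def point_mass_inertia(masses_and_radii):
    """I = sum(m_i * r_i^2) for compact (point-like) masses."""
    return sum(m * r ** 2 for m, r in masses_and_radii)

for separation in (0.50, 1.50):            # distance between the two hands, m
    r = separation / 2.0                   # each mass sits r from the spin axis
    I = point_mass_inertia([(0.500, r), (0.500, r)])
    print(f"separation {separation} m -> I = {I:.4f} kg*m^2")
```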
14. Compact masses of 4.00 kg and 5.00 kg are attached to the ends of a rod of length 90.0 cm and negligible mass. (a) Find the moment of inertia about the center of the rod. (b) Find the moment of inertia about each end of the rod. (c) Find the moment of inertia about the center of mass. Assume the axis is perpendicular to the rod in each case.
15. Use reference Table 8.1. (a) Find the moment of inertia of a solid disk of mass 3.00 kg and radius 0.500 m about an axis passing perpendicularly through its center. (b) Repeat for a hoop of the same mass and radius. (c) Repeat for a solid sphere of the same mass and radius.
16. Repeat part (a) of problem 14 using the diagram below and taking into account the masses are solid spheres of radius 10.0 cm and 11.0 cm and the rod has mass 250 grams. Hint: use the parallel axis theorem for the spheres!
17. Find the moment of inertia of the thin triangular plate of mass M about the y-axis.
18. Use the parallel-axis theorem to find the moment of inertia of the triangular plate from the previous problem about a vertical axis that passes through its center of mass.
19. A uniform cylindrical grinding wheel of mass 5.00 kg and radius 20.0 cm is turned by an electric motor. Once it is turned on it takes 10.0 seconds for the wheel to go from rest to 125 rpm. Ignore friction in the bearings of the wheel. (a) Find the amount of torque exerted on the wheel by the motor. (b) The motor is switched off and a piece of metal is pressed against the edge of the wheel with a force of 45.0 N. If the coefficient of friction is 0.500 how many turns will the wheel complete before coming to a stop?
20. An Atwood’s machine is formed by a string passing over a solid pulley of mass 50.0 grams and diameter 5.00 cm. The masses on each end of the string are: m1 = 200.0 g, and m2 = 300.0 g. Friction is negligible. (a) Find the angular acceleration of the pulley as one mass falls and the other rises. (b) Find the linear acceleration of m1. (c) What would be the linear acceleration of m1 if the pulley were “massless”?
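One standard way to set up problem 20 is to treat the pulley as a solid disk, so I = ½Mr² and a = (m₂ − m₁)g / (m₁ + m₂ + I/r²). The sketch below assumes g = 9.8 m/s² and is meant only as a numerical check, not an official answer key.

```python
g = 9.8                          # m/s^2 (assumed)
m1, m2 = 0.2000, 0.3000          # hanging masses, kg
M, r = 0.0500, 0.0250            # pulley mass (kg) and radius (m)

I = 0.5 * M * r ** 2                            # solid-disk pulley
a = (m2 - m1) * g / (m1 + m2 + I / r ** 2)      # (b) linear acceleration of the masses
alpha = a / r                                   # (a) angular acceleration of the pulley
a_massless = (m2 - m1) * g / (m1 + m2)          # (c) acceleration if the pulley were massless

print(f"alpha = {alpha:.1f} rad/s^2")
print(f"a = {a:.2f} m/s^2 (with pulley mass), {a_massless:.2f} m/s^2 (massless pulley)")
```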
21. A teeter-totter (or “seesaw”) of length 3.00 m and mass 50.0 kg is pivoted at its center. A boy of mass 40.0 kg sits 1.40 m left of center and a girl of mass 30.0 kg sits 1.30 m right of center. Ignore friction in the bearing. For the instant when the teeter-totter is horizontal find: (a) the angular acceleration, and (b) the linear acceleration of each child. (c) As the teeter-totter tilts past horizontal, do the acceleration rates increase, decrease, or stay the same? Explain.
22. A string passes over a pulley of mass m and radius r and connects a cart, mass M1 and a hanging mass M2 as shown in the diagram. The pulley is a solid disk and friction is negligible. (a) Solve for the linear acceleration of the cart once the hanging mass is released. (b) If an experiment is done with this arrangement and it is assumed that the pulley is massless, what will be the effect on the results? What type of error? Explain.
23. A stick of length L is held vertically with one end at rest on the floor. The stick is released and it rotates around its bottom end until the top end hits the floor. The bottom end does not slip on the floor. Find the speed and acceleration of the top end of the stick just before it hits.
24. A cylinder of radius 0.200 m with a moment of inertia of 5.00 kg·m² is mounted on a horizontal axle with negligible friction. A string is attached and wrapped several times around the rim of the cylinder, which is initially at rest. A constant force of 30.0 N is applied to the end of the string and it is pulled 4.00 m. (a) Find the angular speed attained by the cylinder. (b) Suppose the experiment is repeated but the applied force is replaced with a hanging weight of 30.0 N and it falls 4.00 m starting from rest. Again find the angular speed attained.
25. In order to do a certain demonstration, a physics instructor uses his hand to spin a bicycle wheel of diameter 0.500 m. The wheel has a mass of 1.50 kg, most of which is in the rim. He grabs the rim and pulls it through precisely one half of a turn to get it spinning at 4.00 Hz. The wheel continues to spin for 450 revolutions before it stops. (a) Find the maximum kinetic energy of the wheel. (b) Find the net work required to set it into motion. (c) Find the force the hand exerts on the wheel. (d) What is the torque due to friction in its bearings?
26. A solid cylinder rolls down a ramp inclined at angle θ above horizontal. Determine its acceleration.
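Problem 26, and rolling problems generally, can be checked with the general result a = g sin θ / (1 + k) for an object whose moment of inertia is I = kMR² rolling without slipping. The sketch below assumes g = 9.8 m/s² and a 30° incline purely for illustration, since problem 26 leaves θ symbolic.

```python
import math

def rolling_acceleration(theta_deg, k, g=9.8):
    """a = g*sin(theta) / (1 + k) for rolling without slipping, where I = k*M*R^2."""
    return g * math.sin(math.radians(theta_deg)) / (1.0 + k)

for name, k in [("solid cylinder", 0.5), ("hoop", 1.0), ("solid sphere", 0.4)]:
    print(f"{name}, 30 degree incline: a = {rolling_acceleration(30.0, k):.2f} m/s^2")
```

For the solid cylinder, k = ½ gives a = (2/3)g sin θ, which is the symbolic answer the problem is after.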
27. A spool has properties: inner radius = r, outer radius = R, mass = M, and moment of inertia I = MR²/3. A string is wrapped around the inner cylinder and a force F is applied to the end, as shown in the diagram below. The spool rolls without slipping. Find the acceleration, including direction.
28. Repeat the previous problem, but now the string is pulled straight upward with force F.
29. A string is wrapped around the circumference of a solid cylindrical disk of mass M and radius R. The string is held by a person and the disk is released so that it falls as the string unwinds (like a yo-yo). (a) Find the downward acceleration of the disk if the string does not slip. (b) Repeat for the same situation but this time the object is an “empty” cylindrical shell (like a can with the ends cut out).
30. A solid sphere of mass 4.00 kg and radius 10.0 cm is pushed across a horizontal surface. The force is applied by pushing horizontally on it with a wood block. Friction for all surfaces can be modeled by: μs = 0.40 and μk = 0.30. (a) Find the linear acceleration of the sphere if the normal force of the wood on the sphere is 10.0 N, 0.0°. (b) What is the maximum normal force that the wood can exert on the sphere without slippage on the horizontal surface? (c) If this is just barely exceeded, what is the angular acceleration of the sphere?
31. A yo-yo of diameter 6.00 cm is spinning freely on the end of its string and is hovering just above the floor when the string breaks. At that instant it is on the floor spinning at 175 rad/s but not yet moving across the floor. As it spins, friction causes it to accelerate across the floor as it continues to slip; μk = 0.250. Eventually it rolls without slipping on the floor. Use an appropriate model for moment of inertia. (a) Find the linear acceleration. (b) Find the angular acceleration. (c) Determine the eventual speed attained by the yo-yo.
32. A bowling ball is thrown straight down the alley with speed vo. Initially the ball is not spinning, but as it slides across the floor it starts to spin. For a brief period it continues to slide as it spins faster and faster. Eventually it is rolling without slipping. (a) Find the angular acceleration of the ball as it slips (in terms of g and μk). (b) Find the linear acceleration as it slips. (c) Find the distance it moves before it rolls without slipping. (d) Find the final speed of the ball in terms of its initial speed.
33. A unicyclist applies a downward force to the front pedal when it is in a horizontal position (and no force on the rear pedal). Find the forward acceleration in terms of: M = mass of person and unicycle frame, m = mass of wheel, r = radius of wheel, F = downward force applied to pedal, l = length of pedal crank. Hint: analyze external force(s) on the system of masses and also analyze torque(s) on the wheel. Ignore friction in the bearing of the wheel.
34. A marble of mass 20.0 g rolls across the floor at 5.00 m/s and then up an incline of 30.0°. (a) Find the kinetic energy of the marble. (b) How far up the ramp will it roll? (c) Find the speed of the marble when it is halfway back down the ramp.
35. A solid steel ball of radius r is placed on a track that leads to a loop-the-loop of radius R. See the diagram below. (a) Determine the minimum value of h from which the ball can be released and it will roll without slipping all the way around the loop. (b) Find the speed of the ball as it exits the loop on a horizontal path.
36. Inspect the diagram below. The solid cylindrical disk of mass 2.0 kg is pulled across a table by a falling mass of 0.50 kg. The string is wrapped around the disk. Ignore friction and mass of the pulley. (a) Find the acceleration of each object. (b) Find the speed of each object after the falling mass has moved downward 0.30 m (starting from rest).
37. An airplane of mass 3500 kg flies with velocity 75 m/s, north at altitude 750 m above ground. (a) Find the angular momentum of the airplane about a point on the ground directly beneath it. (b) Would this value change as the plane moves away from the same point? Explain.
38. A ball of mass 175 g is attached to a string and it is twirled around in a horizontal circle of radius 75.0 cm at a frequency of 2.00 Hz. It revolves clockwise as seen from above. (a) Find the magnitude of its linear momentum. (b) Find its angular momentum vector about the center of the circle. (c) If the string breaks and the ball flies off through the air, explain the effect on its angular momentum, if any.
39. A mass of 3.00 kg is moving with velocity 2.00 m/s, 30.0° when it is at a position of (0 m, 4.00 m) in the xy plane. (a) Find its angular momentum about the origin. (b) Find its angular momentum about the point (−5.00 m, 4.00 m).
40. A door has mass 10.0 kg and dimensions 0.900 m × 2.15 m. It is hinged with negligible friction along one of the long edges. Suppose a kid shoves the door exactly in its center with an average force of 10.0 N normal to the surface for 0.50 s. (a) Find the magnitude of the angular impulse. (b) Find the resulting angular speed of the door. (c) Now suppose the door’s swinging motion is stopped in 0.10 s by a bumper that is hit by the handle, which is located 0.800 m from the hinge. What is the average normal force on the bumper?
41. An 80.0 g meter stick is at rest on the floor when a kid kicks it at the 90.0 cm mark. He delivers a rapid impulse of 0.300 Ns, north in a direction perpendicular to the stick. Ignore friction with the floor. (a) What is the resulting linear momentum of the stick? (b) Find the speed of the center of mass of the stick. (c) Find the angular impulse about the center of mass. (d) What is the resulting angular momentum of the stick’s rotation? (e) Find the angular speed of the stick. (f) What are the instantaneous velocities of each end the stick? Hint: use velocity relative to CM to find velocity relative to floor.
42. Suppose the kid in the previous problem kicks the stick at the center of percussion or “sweet spot”, which is at a point two thirds of the length from one end. Show that for any mass, length, or amount of force, one end of the stick will have an instantaneous velocity of zero immediately after the impact.
43. A solid disk of mass 200.0 g and radius 10.0 cm is rotating freely about a vertical axis through its center at 20.0 rpm. A ring of mass 100.0 g and radius 5.00 cm that is not rotating is centered above the spinning disk and dropped onto it. The ring “sticks” to the disk such that both objects rotate at the same rate. (a) Find the resulting angular speed. (b) What amount of angular momentum is transferred? (c) What is the change in kinetic energy of the system? What energy transformation occurs to explain this change?
44. An ice skater starts to spin about a point on the ice and begins rotating 2.00 times per second. Initially the moment of inertia of the skater is 5.00 kg·m², but this is reduced to 2.1 kg·m² as the skater draws his arms and legs inward toward the axis of rotation. (a) Find the angular momentum of the skater. (b) Find the final rotation rate of the skater. (c) Find the change in the kinetic energy of the skater. (d) Based on the work-energy principle work must be done to account for a change in kinetic energy – explain how and what work is done in this case.
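A numerical sketch of problem 44 using L = Iω and KE = ½Iω². This is one way to verify the arithmetic, not an official answer key.

```python
import math

I1, I2 = 5.00, 2.1                 # kg*m^2, before and after pulling the limbs in
omega1 = 2 * math.pi * 2.00        # rad/s (2.00 rev/s)

L = I1 * omega1                    # (a) angular momentum, conserved (no external torque)
omega2 = L / I2                    # (b) final angular speed
dKE = 0.5 * I2 * omega2 ** 2 - 0.5 * I1 * omega1 ** 2   # (c) change in kinetic energy

print(f"L = {L:.1f} kg*m^2/s, omega2 = {omega2:.1f} rad/s, dKE = {dKE:.0f} J")
```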
45. The speed of Earth is 30.3 km/s when it reaches its perihelion distance of 1.47 × 10¹¹ m. What is the speed of Earth when it reaches its aphelion distance of 1.52 × 10¹¹ m?
46. Pluto orbits the Sun in an elliptical orbit. The distance from the Sun varies from 4.437 × 10¹² m to 7.376 × 10¹² m. Use conservation of energy and conservation of angular momentum to solve for the speed of Pluto at these two points. Mass of Sun = 1.989 × 10³⁰ kg.
47. A playground merry-go-round, initially at rest, has a diameter of 4.00 m and a mass of 500.0 kg. A 40.0 kg kid comes running at it along a tangent line at speed 8.00 m/s and jumps on to its edge and hangs on. (a) Approximate the rotational inertia of the merry-go-round as a solid disk in order to find its resulting angular speed. (b) What mass kid performing the same maneuver (running at the same speed) would cause it to rotate at 0.75 rad/s?
48. A thin rod of mass 300.0 g and length 90.0 cm is mounted on a horizontal axle (with negligible friction) that passes through its center. The rod is initially at rest in a vertical position. A 50.0 g lump of clay is launched at the rod with velocity 10.0 m/s, 0.0°, hitting and sticking to the rod at a point 35.0 cm above the axle. (a) Find the angular momentum of the clay about the axle. (b) Find the angular velocity of the rod and clay immediately after impact. (c) Find the linear velocity of the clay after the rod has turned 180.0°. (d) If the experiment is repeated, but this time the clay hits below the axle, what is the minimum distance below the axle at which the impact of the clay would cause the rod to spin 180.0° in the opposite direction?
49. An ice cube with sides of length L is sliding at speed v with very little friction across a level countertop. At the edge of the counter is a small ridge that stops the sliding of the cube. If the cube is moving fast enough it will rotate “up and over” the ridge and fall off the table. Given that the moment of inertia of a solid cube around one edge is 2ML²/3, show that the minimum speed that will make it fall off the table is approximately v ≈ 1.05·√(gL). Hint: use conservation laws.
Answer to 43(c): −0.439 mJ – explain! | http://swansonphysics.com/APPhysicsC/AP%20Physics%20Assignment%20-%20Rotational%20Mechanics.htm
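The conservation-of-angular-momentum problems above can be checked numerically. Below is a minimal Python sketch for problem 43, assuming the dropped ring behaves as a thin hoop (I = mr²) and the disk as a solid cylinder (I = ½MR²); with those assumptions it reproduces the −0.439 mJ change in kinetic energy noted above, which is the energy converted to heat by friction as the ring briefly skids on the disk.

```python
import math

# Problem 43: a non-rotating ring dropped onto a freely spinning solid disk.
# Angular momentum about the vertical axis is conserved; kinetic energy is not.
M_disk, R_disk = 0.2000, 0.100    # kg, m
m_ring, r_ring = 0.1000, 0.0500   # kg, m

I_disk = 0.5 * M_disk * R_disk**2     # 1.00e-3 kg m^2
I_ring = m_ring * r_ring**2           # 2.50e-4 kg m^2 (thin-hoop assumption)

w_i = 20.0 * 2.0 * math.pi / 60.0     # 20.0 rpm converted to rad/s

# (a) I_disk * w_i = (I_disk + I_ring) * w_f
w_f = I_disk * w_i / (I_disk + I_ring)

# (b) angular momentum transferred to the ring
dL = I_ring * w_f

# (c) change in kinetic energy of the system (comes out negative)
dKE = 0.5 * (I_disk + I_ring) * w_f**2 - 0.5 * I_disk * w_i**2

print(f"w_f = {w_f:.3f} rad/s ({w_f * 60.0 / (2.0 * math.pi):.1f} rpm)")
print(f"dL  = {dL:.2e} kg m^2/s")
print(f"dKE = {dKE * 1e3:.3f} mJ")   # about -0.439 mJ
```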
As gas spread through the early universe, some of it was denser and some thinner. Something, perhaps a nearby supernova, caused the gases to begin to coalesce, and as the gravity of these particles increased, they fell toward each other — and began to spin. Funnily enough, everything that forms from the same cloud ends up spinning in the same direction — in our case, counter-clockwise. There's no UP in space, of course, but if you think about the angular momentum of the spin as a FORWARD direction, then most things — Earth, Mars, the Sun — all rotate counterclockwise, because they're all conserving their angular momentum. As the gases continued to gravitate toward each other, constantly moving, they
formed a tossed pizza dough shape. A ball in the middle, slowly expanding outward into
a disc. This is the shape we see most often in the universe, because of the laws of physics. As interstellar clouds rotate and collapse
onto themselves they fragment, according to Scientific American, and then those smaller
parts collapse again, and again. And over the next few hundred million years, all that
gas gathers and fuses into suns, planets, asteroids and (eventually, after lots more
time) you and me! All the while, the angular momentum of the original cloud is maintained;
that original gaseous angular momentum set the stage for all the rotation to follow — inertia
keeps it going. Yes, it IS slowing over time. A day 100 years from now will be about 2 milliseconds
longer, but ultimately we’ll all keep spinning unless something big smacks into us. Strangely, Venus rotates clockwise, and we’re
not sure why. Either the axis of the planet was flipped upside down at some point, or
it slowed its counterclockwise rotation, stopped, and began to rotate the opposite way — possibly
due to its dense atmosphere and closeness to the sun. It’s not the only weirdo; Uranus
was knocked on its side, and its rotation is ALL screwed up. Even on a macro level, everything is spinning.
But galaxies, relative to Earth, spin both clockwise and counterclockwise. Spiral galaxies DO tend to spin with their arms trailing behind them, but even that isn't a hard rule. In 2002, the Hubble Space Telescope spotted galaxy NGC 4622, whose arms LEAD its rotation — but astronomers believe that's because it interacted with another galaxy. Sounds hot. In the end, everything in the universe is
spinning. Angular momentum must be conserved; so when a figure skater spins with his arms
in, he’ll spin faster, but with his arms out he’ll move slower. That’s simple physics,
but it operates on a galactic level too! Does a science question have your head spinning?? | https://significantliving.org/why-does-everything-in-the-universe-spin/ |
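The skater line above is worth pinning down with numbers: it is angular momentum L = Iω that stays fixed when the arms come in, while the spin rate and the kinetic energy both rise (the extra energy comes from the work the skater's muscles do pulling mass inward). A minimal Python sketch, with illustrative values only:

```python
import math

# Illustrative skater numbers: moment of inertia drops as the arms are pulled in.
I1, f1 = 5.0, 2.0              # initial moment of inertia (kg m^2) and spin rate (rev/s)
I2 = 2.1                       # moment of inertia with arms and legs drawn in

w1 = 2.0 * math.pi * f1        # rad/s
L = I1 * w1                    # angular momentum, the conserved quantity
w2 = L / I2                    # faster spin after pulling in

KE1 = 0.5 * I1 * w1**2
KE2 = 0.5 * I2 * w2**2         # larger than KE1: kinetic energy is NOT conserved here

print(f"spin rate: {f1:.1f} -> {w2 / (2.0 * math.pi):.1f} rev/s")
print(f"kinetic energy: {KE1:.0f} J -> {KE2:.0f} J")
```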
This is a demonstration of the basic principles underlying the behavior of rotating bodies. A cylinder “race” is used to show that the closer the mass of an object is concentrated to an axis of rotation, the faster it will spin because it has a lower moment of inertia, which is a measure of a body’s resistance to rotation. The video below provides a brief synopsis of the demonstration.
For a solid cylinder and a hollow cylinder, the equations for the mass moment of inertia about the axis of interest in our demonstration reduce to I = ½·M·R² and I = ½·M·(a² + b²), respectively, as shown in figure 1 (http://hyperphysics.phy-astr.gsu.edu). “M” represents the mass of the rigid body, “R” represents the radius of the solid cylinder, and “a” and “b” represent the inner and outer radii of the hollow cylinder.
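As a numerical companion (not part of the original write-up), the sketch below compares the two cylinders rolling without slipping down an incline, where a = g·sinθ / (1 + I/(MR²)); the incline angle, mass, and radii are placeholder values since the demo hardware is not specified here. Because I/(MR²) is smaller for the solid cylinder, it accelerates faster and wins regardless of the actual mass or radius.

```python
import math

def rolling_acceleration(I, M, R, theta_deg=10.0, g=9.81):
    """Acceleration of a cylinder rolling without slipping down an incline."""
    return g * math.sin(math.radians(theta_deg)) / (1.0 + I / (M * R**2))

# Placeholder geometry -- not the actual demo hardware.
M, R = 1.0, 0.05      # kg, m (outer radius, same for both "players")
a = 0.04              # m, inner radius of the hollow cylinder

I_solid = 0.5 * M * R**2
I_hollow = 0.5 * M * (a**2 + R**2)

print("solid cylinder :", rolling_acceleration(I_solid, M, R))   # larger acceleration
print("hollow cylinder:", rolling_acceleration(I_hollow, M, R))
```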
These materials can be easily manufactured. Additionally they (or similar materials) can be obtained from businesses that specialize in building teaching aids, such as Arbor Scientific.
Before Class: Obtain materials. Measure and calculate the basic parameters (mass, diameters). Practice demonstration.
In Class: First establish the scenario by introducing and describing “the players” in your best race announcer’s voice. Then, without any analysis, ask the students to guess which “player” will win the competition. Ensure you record this on the board. Build up to the start of the race and stop just short of letting them go. Ask students to consider whether weight might be a factor affecting the outcome, then provide the class with the weight of each player and see if their guesses change (record on the board). Again, build up to the start and stop short to discuss the concept of mass being a resistance to translation and moment of inertia as resistance to rotation. Have students help you calculate the moment of inertia for each of the players (figure 4) and take a final tally of the bets. Finally, let the race happen!
Observations: The student should observe that the outcome is determined by the moment of inertia and not the weight (given that the overall dimensions and the weight of the players are approximately the same).
Hype up the event by dramatizing the race! Consider using racetrack videos and noises which can easily be found on the internet. Ask students to guess which will win once the scenario is set up and before the principles are discussed. Build suspense by working your way to the “gun shot” starting the race, then backing off the start to analyze another aspect. If you’re using Power Point, incrementally build your slides to build suspense!
Furthermore, cylinders of different materials and diameters might be considered (figure 5).
Al Estes, Charlie Packard, and Tom Messervey (2014), "A Day at the Races: Moment of Inertia," https://www.handsonmechanics.org/dynamics/106. | https://www.handsonmechanics.org/dynamics/106 |
Dynamics of Geared Systems
Acceleration of geared systems, and analysing combined angular and linear motions.
Introduction
When a geared system accelerates or decelerates there is a change in the total kinetic energy of the system. Clearly, the value of this increase or decrease depends upon the speed ratios of the various parts of the system.
In this section we use the moments of inertia of geared shafts to identify the total torque and the Equivalent moment of Inertia for any given shaft in a state of acceleration.
The Acceleration Of Geared Systems
Acceleration is the rate of change of velocity with respect to time, and it is a vector. It is the second derivative of position with respect to time or, alternatively, the first derivative of velocity with respect to time.
Clearly the following argument could be extended to any number of shafts which have fixed speed ratios with the reference shaft. It is worth noting that the same acceleration of the system could also be produced by applying to shaft B a torque equal to the total torque on A divided by the speed ratio n (the notation is defined in the list below):
- Suppose that two shafts A and B are geared together and the speed ratio is n = ω_B/ω_A.
- The total moments of inertia on the shafts are I_A and I_B.
- The torque on shaft A required to accelerate A is I_A·α_A.
- This will produce an acceleration of shaft B of α_B = n·α_A.
- The torque required on shaft B to achieve the above acceleration of B is I_B·n·α_A.
- This would require a torque on shaft A of n·(I_B·n·α_A) = n²·I_B·α_A.
- The total torque on A is thus given by: T_A = (I_A + n²·I_B)·α_A.
- (I_A + n²·I_B) is called the Equivalent Moment of Inertia referred to shaft A (see the sketch below).
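A minimal Python sketch of this result (the numbers are illustrative only and do not refer to any particular gearbox):

```python
def torque_on_A(I_A, I_B, n, alpha_A):
    """Torque on shaft A to give it angular acceleration alpha_A, with B geared at ratio n."""
    I_eq = I_A + n**2 * I_B          # equivalent moment of inertia referred to shaft A
    return I_eq * alpha_A

def total_kinetic_energy(I_A, I_B, n, omega_A):
    """Same as 0.5*I_A*w_A^2 + 0.5*I_B*w_B^2 with w_B = n*w_A."""
    return 0.5 * (I_A + n**2 * I_B) * omega_A**2

print(torque_on_A(I_A=2.0, I_B=0.5, n=3.0, alpha_A=4.0))            # (2 + 9*0.5)*4 = 26 N m
print(total_kinetic_energy(I_A=2.0, I_B=0.5, n=3.0, omega_A=10.0))  # 0.5*6.5*100 = 325 J
```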
The Total Kinetic Energy Of A And B
The kinetic energy of an object is the energy which it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity.
The total kinetic energy of A and B is E_k = ½·I_A·ω_A² + ½·I_B·ω_B². But since ω_B = n·ω_A, this becomes E_k = ½·(I_A + n²·I_B)·ω_A².
Combined Angular And Linear Motion
There are many cases where masses moving with linear motion are connected by a fixed speed ratio to inertias in angular motion. Instead of dealing with the two motions separately, it is possible to reduce the system to one of the following:
An Equivalent Angular Motion
Angular motion is motion in which every part of the moving body rotates in the same angular direction, following a circular path about a pivot point.
To find the torque required to give an angular acceleration α to a winding drum of moment of inertia I and radius r, which is being used to raise a load of weight W on a cable wrapped round the drum: T = I·α + (W/g)·r²·α + W·r. This method is used in worked examples 2, 7, and 8.
An Equivalent Linear Motion
Linear motion is motion along a straight line, and can therefore be described mathematically using only one spatial dimension.
The linear acceleration on the level of a car of weight W and engine torque T, whose wheels have an inertia of I_w and radius r, and an engine with an inertia I_e and a gear ratio n, is found from: Equivalent tractive effort n·T/r = Acceleration × Equivalent mass + Losses, where the equivalent mass is W/g + (I_w + n²·I_e)/r². See Example 9.
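A matching sketch for the equivalent-linear-motion case, with losses ignored and purely illustrative numbers; note that it takes the car's mass directly rather than weight divided by g:

```python
def car_acceleration(T_engine, n, r_wheel, mass, I_wheels, I_engine):
    """Level-road acceleration from engine torque, ignoring losses."""
    F = n * T_engine / r_wheel                                  # equivalent tractive effort
    m_eq = mass + (I_wheels + n**2 * I_engine) / r_wheel**2     # equivalent mass
    return F / m_eq

# Illustrative numbers only.
print(car_acceleration(T_engine=150.0, n=10.0, r_wheel=0.3,
                       mass=1200.0, I_wheels=4.0, I_engine=0.5))  # about 2.8 m/s^2
```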
Example 1 (imperial units)
Problem
Two gear wheels A and B are mounted on parallel shafts so that they may revolve separately or may be meshed together externally. The wheels were originally turning freely in the same direction. Find:
- a) The speed and direction of rotation of the wheels if the gears are suddenly meshed, assuming that there is no back-lash of the teeth.
- b) The loss of energy in ft. lb. due to impact.
Workings
- a) Let N r.p.m. be the final speed of wheel A, so that the final speed of wheel B is N times the gear ratio, in the opposite direction.
Scientists at the Laser Interferometer Gravitational-Wave Observatory announced Thursday the detection of ripples through the fabric of space-time. They're called gravitational waves, and their discovery changed astronomy and physics forever.
The waves came from the collision and merging of two black holes, and scientists were able to transform the signal into an audible chirp.
As the two masses orbit each other, their orbital distance decreases and their speeds increase, much like a spinning figure skater who draws their arms in close to their body. This causes the frequency of the gravitational waves to rise until the moment of coalescence. Converted to sound, these gravitational waves produce a chirp (much like rapidly raising the pitch on a slide whistle), since the binary system's orbital frequency is increasing and any increase in frequency corresponds to an increase in pitch.
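For intuition, here is a small NumPy sketch that synthesizes a rising-pitch "chirp" of the same general character; the duration, frequency sweep, and amplitude growth are invented for illustration and are not the actual GW150914 waveform.

```python
import numpy as np

fs = 44100                                        # audio sample rate (samples/s)
t = np.linspace(0.0, 0.2, int(0.2 * fs), endpoint=False)

f = 35.0 + (300.0 - 35.0) * (t / t[-1])**3        # frequency climbs fastest near the end
phase = 2.0 * np.pi * np.cumsum(f) / fs
signal = (t / t[-1]) * np.sin(phase)              # amplitude also grows toward coalescence

# `signal` can be written to a WAV file (e.g. with scipy.io.wavfile.write) to hear the chirp.
```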
Scientists will continue hunting for more signs of gravitational waves. Soon there could be enough for someone to create a whole gravitational waves playlist. | https://mic.com/articles/135522/song-created-from-einsteins-gravitational-waves |
I have known the moon as a celestial body since childhood. It was simply there, looking one way and then another: full moon, new moon, waxing and waning half moon, and so on. You grow older, and so does your view of the moon… what luck, what coincidence, and much more.
A question that a few people still keep asking is:
Have humans actually been on the moon?
I say only one word: "Yes".
A probe has been orbiting the moon for some time now, and it has imaged all the landing sites of the American moon landings: Apollo 11, 12, 14, 15, 16 and 17 — instruments, mirrors for laser measurements from Earth, vehicle tracks and the footprints of the former visitors. So that is what we have now.
For us humans, the moon has always had a special significance for life on Earth in general, and this is because its gravitational pull causes the tides (i.e. low and high tides) and thus controls and regulates the life of numerous beach animals and marine organisms, for example.
For example, in moonlight numerous algae rise to the surface of the sea and form the food base for a large number of marine creatures.
But one thing is for sure: today I really love this satellite of the earth. Why, some people ask… and I answer:
Why roam far away, when something so good is so close?
It is the moon without which mankind would not have had the slightest chance to live on planet earth. Since time immemorial it has been credited with magical powers. I consider listing that nonsense pointless and prefer to turn to what we really know today: the physical interaction of earth and moon.
There is another way in which our moon matters for all life on planet earth. One of its greatest contributions is that the moon keeps the inclination of the earth's axis largely stable.
Even the smallest variations in this angle (it is currently 23 degrees and 27.54 minutes) can have very profound effects. As everyone should know, the surface of our planet is heated very unevenly, and this is mainly due to the angle of incidence of the sun's rays. This, in turn, is the main reason the large climate zones — the great heat belts — exist, because they are directly related to the inclination of the earth's axis.
The moon is heavy: relative to its planet, the earth has the heaviest moon in the solar system, even though the moon's mass is still small compared to the earth's.
What happens if the moon suddenly disappears?
A very speculative question, I have to admit. It is only meant as a small thought experiment – but it makes clear what an enormous influence the moon's ever-present gravitational pull has on our earth. Should the moon suddenly disappear from our orbit, nothing would be the same.
Simply stated, our moon is unusually large for a planet of earth's size, and it orbits very close to us, averaging only about 380,000 kilometres away. Two such enormous and close bodies can mean only one thing: gravitational force.
We experience the effect of the moon very directly: stand on the North Sea coast and you feel the tide. The moon's pull shows itself in another way as well. The tidal bulges are dragged out of the direct line to the moon by the earth's fast rotation — as part of the earth they follow the law of inertia — while the moon's gravity counteracts this very effectively, pulling the bulges back until a new balance of forces is established. This tug between rotation and gravity means that the rotation of the earth is gradually being slowed down.
It follows that if the moon did not exist, the tidal bulges would be much smaller, because the tidal pull of our sun is only about half as strong as that of the moon. This would mean that the mass of the oceans would sit closer to the earth's axis of rotation, reducing its moment of inertia. Since the earth's total angular momentum is conserved, the speed at which it rotates around its own axis would increase.
This is familiar from figure skating: when a skater stretches out her arms during a pirouette she turns more slowly; when she pulls her arms back in, she turns faster. Because of the moon, a rotation of the earth on its own axis – that is, one day – takes about 24 hours, and, as mentioned before, the moon stabilises the inclination of the earth's axis at about 23.5 degrees.
The consequences would be devastating, first of all for the world's coastlines. With the moon's pull on the oceans gone, the water bulge on the side that faced the moon would collapse – flattening out and redistributing itself, ultimately following the much weaker pull of the sun. In the meantime, a gigantic global wave would flood the mainland.
And there would be other consequences as well. Without significant tides there would be no more braking effect of the tidal bulges on the earth's rotation; had the moon never been there to slow us down, a day would pass three times as fast – over after about 8 hours instead of 24.
Finally, the stabilising influence of the moon on the earth's axis prevents our planet from wobbling like a spinning top about to topple over. Without the moon, the earth's axis would sway back and forth, and over hundreds of thousands of years its inclination would change dramatically again and again. Serious climate changes would be unavoidable: different regions of the earth would tilt towards the pole or the equator, facing more towards or away from the sun. The results would be strange indeed – the poles could turn into tropics and the equator into an ice desert – and not just once, and not gradually. | http://wolfgang.korsus.net/good-dayi-am-the-moon
In Running Reports in SoGoSurvey: Part 1, we started with a basic Bar Graph Report to show you how to generate reports in SoGoSurvey. Now we’re ready to go a little deeper and look at some of the other popular reports for analyzing survey results.
The Conditional Report is a great next step for digging deeper into the data and understanding what different groups think. For example, you could set the condition to only display results from male participants.
Select the question from the drop down that you want to use to create your condition (e.g., what is your gender?).
Select the condition (Is or Is Not) and select the answer that will serve as your condition (e.g., Male).
If you like, you can keep adding conditions to further segment your data.
Another popular report is the Individual Report, which shows how each person responded to all or selected questions in your survey. From this report, you can edit responses in case someone accidentally gave the wrong response, for example, their office location. You can also delete responses if someone took your survey and should not have.
Another report our users find helpful is the Verbatim Report. The report allows you to see all the open-ended (text) responses in a readable format. This makes it easy to identify common themes among open-ended responses.
For example, you could export the report to Excel and sort responses in random order. Then take a percentage of those responses to look for themes (e.g., for 1,000 responses, analyze 10% or 100 responses). Then set criteria for considering something a theme, such as having at least 10 similar responses.
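If you prefer to script that workflow, a rough pandas sketch is below; the file name, column name, and keywords are assumptions, so adjust them to match your actual export.

```python
import pandas as pd

df = pd.read_excel("verbatim_report.xlsx")        # the exported Verbatim Report
sample = df.sample(frac=0.10, random_state=1)     # e.g. 100 out of 1,000 responses

# Crude theme check: count sampled responses mentioning a keyword;
# call it a theme only if at least 10 similar responses turn up.
for keyword in ["price", "support", "shipping"]:
    hits = sample["response"].str.contains(keyword, case=False, na=False).sum()
    if hits >= 10:
        print(f"possible theme: {keyword} ({hits} responses)")
```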
To view an individual’s responses to the entire survey, click on the survey number next to their response. For example, if someone responded negatively in an open-ended question, you may want to see the rest of their responses.
The Response Table allows you to view how each participant answered each question in a neat table format. You can also select the survey number in the first column to edit or delete responses.
Come back for Part 3, where we’ll talk about using the Filter Manager, as well as some of the more advanced reports. | https://www.sogosurvey.com/blog/survey-results/ |
To find the Dashboard click on Analyze in the navigation bar and select Dashboard next. Now you'll be taken to the overview dashboard.
The overall dashboard shows all responses to the selected survey. By using one of the options in the left navigation you can filter the responses:
- Data range: filter feedback on a specific timeframe
- Data Access Policy: filter feedback for specific Data Access Policies
- Filter: Filter feedback on specific custom fields
Dashboard insights
The overview dashboard has several cards and graphs that give you quick insight into your responses.
- Response Rate: The percentage of respondents that respond to your survey invite, and the number of responses
- Net Promoter Score: The NPS and the percentages of promoters, passives and detractors
- NPS Development Line Chart: The development of the NPS over time
- NPS Distribution Histogram: The NPS distribution per score
- NPS Follow-up Question: An overview of comments on the NPS follow-up question
- NPS Open Question: An overview of comments on the NPS open question
- Average Rating: The overall average of Star rating and/or Grade questions
- Block summary chart: An average score per question block of Star rating and/or Grade questions
- Question Ranking Chart: An average score of the individual Star rating and/or Grade questions
- Multiple Choice Question: Distribution of answers to the MC question
- Multiple Choice Question feedback: An overview of comments on the MC question
- Open-Ended Feedback: An overview of responses to the Open-Ended feedback questions
What's Next
How to drill down in Dashboard insights? | https://support.starred.com/docs/using-the-new-beta-dashboard |
On December 2nd, 2020, faculty were invited to complete the UFF-FIU Faculty Survey via Qualtrics. The survey was designed to help the union have a better understanding of faculty experiences at FIU and faculty’s priorities as we negotiated a new contract. All responses were kept confidential and results were be reported in the aggregate.
An executive summary and results of the survey were published on February 3rd. We received 579 responses (response rate = 42.3%), which is extremely high for an online survey and one conducted at the end of an extremely challenging semester. Demographic responses indicate that the respondents are representative of the bargaining unit in terms of title, rank, gender identity, racial/ethnic identity, and contract length. The report contains the overall item-by-item results of the survey, and can be found here.
Past Reports on “How FIU Spends its Money”
Every few years, UFF-FIU commissions labor and economic policy experts to analyze trends in Florida International University (FIU)
expenditures on faculty and administration personnel. Particular attention is given to the changes in the number, composition, and salaries of faculty and administrators at FIU in the context of increasing university expenditures and rising student enrollment, and changes in tuition revenue.
Over the years, our reports show that the FIU administration has been shifting personnel and resources away from the faculty who do the teaching, research, and service that constitute the University’s mission. No matter how conservatively we analyze the data, the University has been increasing both the total numbers and total salaries of administrators relative to faculty. Furthermore, faculty’s share of
personnel expenditures has declined as the share of administration personnel expenditures has increased. Even when tuition and student fees increased, funds have been steered away from students and those who are directly responsible for the academic mission of this University.
We urge you to read these reports carefully. As we approach the difficult process of negotiating the CBA, including salary increases for faculty, keep in mind that the FIU administration can make different choices in the face of budget constraints to improve the academic infrastructure and mission of this University.
If you are not yet a member of UFF JOIN UFF-FIU NOW! | https://www.uff-fiu.net/wordpress/reports/ |
View the latest result for a GP practice by entering practice name or postcode here.
The results of previous GP Patient Surveys can be found here.
The data in the GPPS outputs is presented in three different ways:
• Results - exclude non-specific response options (e.g. haven’t tried, can’t remember, don’t know) in order to provide a more accurate reflection of how those using a service evaluate it. Non-specific responses in general do not provide information about the patient’s experience of GP services.
• Summary results - provide a quick way of viewing the result of a question; usually these are an aggregation of two individual responses (e.g. ‘Easy’ is a combination of ‘Very easy’ and ‘Fairly easy’) or a single response. In many cases these also exclude the non-specific response options, as indicated alongside each result.
• All responses - full breakdown of all response options, including non-specific options to allow a complete picture of responses and manipulation of the full data.
The majority of outputs, designed to be used by patients, GP practices and CCGs, lead with the summary results because these provide a robust assessment of patient experience across organisations. These outputs are:
• National summary report
• National results
• Excel / csv practice and CCG level data
• Analysis tool
For more information see the Technical annex or the Presentation of statistics document, both on the GPPS website.
These are calculated by comparing a practice’s results to the average results for the practice’s Clinical Commissioning Group (CCG).
Where patient experience is best: These are the three results for this practice that are the highest compared to the CCG average. These results may be above the CCG average, or if none of the results for a practice are above CCG average, then the results that are closest to the CCG average have been chosen.
Where patient experience could improve: These are the results for this practice that are the lowest compared to the CCG average. If no results are below the CCG average, this is indicated.
Clinical Commissioning Groups (CCGs) are NHS organisations set up to organise the local delivery of NHS services in England. They are GP-led organisations which have a duty to support NHS England in improving the quality of primary medical care and have taken on many of the functions of PCTs and in addition some functions previously undertaken by the Department of Health. Every GP practice must belong to a CCG, and there are currently 209 CCGs across England, all of which are overseen by NHS England. You can find your local CCG here.
You can also find more information about CCGs: http://www.nhs.uk/NHSEngland/thenhs/about/Pages/nhsstructure.aspx
Filters can only be selected when there is enough data to display results for that filter.
This may be because there are simply no survey responses from a particular group of patients at a practice. Or it may be because data has been suppressed.
If a filter is greyed out, it means that data for this filter is limited due to another filter you have added. Try removing any previous filter or filters.
In cases where fewer than 10 people have answered the question, the data has been suppressed. This is to prevent individuals and their responses being identifiable in the data.
In the weighted reports, there are some cases where this suppression is also applied to questions where the total number of responses is 10. This is again due to rounding. If the total number of responses when weighted is less than 10 (e.g. 9.856), but has been rounded to 10 in the report, then the data will be suppressed. If the weighted total number of responses is, for example, 10.245, then the total number of responses will also show as 10 but the responses will be shown.
This is because in the unweighted data there may be a single person who has given a response. However, when weighted, this response has been given a value of greater than zero but less than 0.5 and, therefore, rounded to 0. Because the actual value is still greater than zero, in percentage terms it still shows as a non-zero percentage.
Unfortunately, for a few practices data is limited or unavailable due to a small number of returned surveys.
This can happen when weighted data is rounded to a whole number.
When weights are applied, decimals are added to the number of responses in each category and the total number of responses. This means that, occasionally, there can be cases where the number of responses differs from the base size. For example, if a report says that 59 people say ‘yes’ and 14 say ‘no’, but the number of responses is 74 (not 73), that means that the weighted values could actually be 59.345 and 14.456, which add up to 73.801 (which is then rounded up to 74).
There are examples in the reports where, for example, it looks like one person has selected 'Other' and one person selected 'I would prefer not to say', but their corresponding percentages are 1% and 2%. Again, this happens when the results and number of responses are rounded but the percentages are calculated on un-rounded data.
NHS England has made changes to its internal structure as part of its Organisational Change Programme 2014/15. These changes came into effect in April 2015. As part of this process NHS England’s Area Teams were integrated into the four existing regional divisions: London, Midlands and East, North and South. More detail on the changes and the role of the regional divisions is available here.
You may notice references to confidence intervals (lower and upper limits) for selected questions within the practice and CCG reports. These are reported to give users an indication of the accuracy of the findings for individual questions.
Because we hear from a sample of people at each practice, rather than all registered patients, we cannot be certain that the results to a question are exactly the same as if everyone had taken part. The confidence interval is a statistical measure that shows the range of values within which the true value is estimated to lie (i.e. what we would have found if everyone had been given and completed the survey).
In this survey, as in most other surveys, we use a “95% confidence interval” – this means that we are 95% sure that the true value lies within the upper and lower limits of the confidence interval. Another way of looking at this is that, if we were to re-run the survey over and over again, we believe we would get a result that falls within the confidence interval 95 times out of 100 (or 19 times out of 20).
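As an illustration only, a simple normal-approximation interval for a survey percentage looks like the sketch below; the published GPPS intervals additionally account for weighting and design effects (see the Technical Annex), so they will not match this naive calculation exactly.

```python
import math

def ci_95(p_hat, n):
    """Approximate 95% confidence interval for a proportion p_hat from n responses."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - 1.96 * se, p_hat + 1.96 * se

low, high = ci_95(p_hat=0.85, n=120)        # e.g. 85% positive from 120 responses
print(f"95% CI: {low:.1%} to {high:.1%}")   # roughly 78.6% to 91.4%
```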
For more information on statistical reliability and confidence intervals, please see the latest Technical Annex.
This website is administered by Ipsos MORI, the survey provider for the GP Patient Survey.
There are many changes currently being made to GP services. These changes are set out in the
GP Forward View, which outlines
NHS England’s vision for primary care. These changes include, but are not limited to, offering appointments in extended hours,
and providing new ways for people to access GP services. These changes will affect how patients and the
public experience their local services, and as such we need to ensure that the GP Patient Survey (GPPS) remains relevant in this context,
and continues to ask about the right things and in the right way. As a consequence, questions across the survey may be modified,
removed or replaced. While we intend to retain questions that are widely used and which remain relevant where possible, it should be
noted that wider changes to the questionnaire mean that we cannot be certain that data trends will be maintained.
As this is a good opportunity to take stock, as well as anticipating changes to the ‘GP Access’ questions (Q1-Q20), we also anticipate considering other improvements to the survey.
This will include the sections of the questionnaire that deal with ‘Managing your health,’ ‘Your state of health today,’ and ‘Planning your care’ (Q30-Q39). This is so that we can
get a better understanding of how local care services are supporting patients to live well, particularly patients with long-term care needs.
However, the questionnaire as a whole is under review, so please take this into account when providing feedback.
In this context, GPPS data users are invited to give any comments they may have about proposed changes to the questionnaire.
To view a copy the current GP Patient Survey questionnaire, please click here. | https://www.gp-patient.co.uk/reading-results |
This week, we are moving forward on the survey portions that we can complete without being able to actually survey the community. During our last trip in Salinas, our meeting with the stakeholders (Building Healthy Communities and Code Enforcement) led us to realize that our survey previously was insufficient to capture the nuances of the housing state. Therefore, we spent the rest of the trip reassessing the neighborhoods with this new information in mind, but we were not able to collect any data. However, this was still useful because now we know how to better cater to the needs of the community. Although the process to build the best survey possible is very long, we want this project result to be as valuable as possible to the city and to be more than just a thought experiment. Therefore, we will take the time needed to adjust our survey.
Jonathan from the City of Salinas has also sent us some GIS data from the city’s databases on physical property characteristics for each parcel, which has structural elements such as the number of bedrooms, age of the house, and more that could be useful for an initial analysis of whether the Alisal could qualify for some of the grants they are interested in. This file also has the parcel boundaries for the Alisal area, so our data from the survey will need to fit within these boundaries for it to be integrated back into the database.
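One possible way to join collected survey records back to those parcel boundaries is a spatial join, sketched below with GeoPandas; the file names, column names, and layers are placeholders rather than the actual City of Salinas data.

```python
import geopandas as gpd

parcels = gpd.read_file("alisal_parcels.shp")        # parcel boundaries + attributes from the City
surveys = gpd.read_file("survey_points.geojson")     # one point per surveyed property

surveys = surveys.to_crs(parcels.crs)                # put both layers in the same coordinate system
joined = gpd.sjoin(surveys, parcels, how="left", predicate="within")

# Quick summary statistics on the City's structural attributes (column names are guesses).
print(parcels[["bedrooms", "year_built"]].describe())
joined.to_file("surveys_with_parcels.geojson", driver="GeoJSON")
```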
Additionally, Jonathan mentioned that he has been thinking about best practices around sustainability and would like us to consider more deeply issues like parking, transportation, affordability and their intersection with housing. If we were to build more high density, which is very sustainable, would this still be comfortable? With the lack of public transportation and parking in Alisal, how could the city respond with regard to these public services if they were to build more housing? How would this affect sustainability?
Because we did not have a site visit this week, there isn’t as much new information for us to absorb. However, we have learned that we need to consider all the factors at play when it comes to housing such as the issues of transportation mentioned above and also the overcrowding that can occur in public spaces if more housing is built. The trip to the Stanford educational farm was also informative as it helped us learn more about ways to make urban areas more sustainable through city gardens and the self-sufficiency that comes with planting your own food.
We observed that because of the historical inequities of the Alisal area, as Carol discussed with us during our trip to Carmel last week, we need to be sensitive to what the community really wants and understand the difficulties surrounding a very comprehensive survey of all the neighborhoods. Since volunteers will be surveying their own communities, and possibly their own neighbors, and given the current political climate on issues of immigration and deportation, there may be some nervousness on the residents' part to let other people assess their housing. If building codes aren't met, we want to make sure that residents who may be forced out still have resources to support them in finding another home. This will be a tremendously long process, since the City is currently limited in its capacity to help these residents, so we do not want to shake things up too much.
In the next week, we need to fully process the new GIS data we’ve been given by Jonathan and decide if there are additional variables that we would like from the GIS team at Salinas. Looking over the data will also tell us what information they are missing and should be added to the housing survey. We might also make some summary statistics on what is in the GIS data since they will give us and the City more context on the overall state of housing. Because we did not collect survey data last time, we may need to schedule another field visit if possible. However, the requirements noted by the community groups (having Spanish speakers and community members accompanying us during surveying, and really having the community do the survey instead of an outside group) might make this more difficult. We would like to respect the community’s wishes, so we will do our best to provide some analysis to the City even if we cannot collect data now.
We will also be meeting to begin working on the final presentation and completing the portions that we can do now. Jonathan will also be giving us feedback on our first survey, so we will have to further make some changes on the survey before we hand it over to the community groups. Lastly, we will start incorporating the issues of sustainability and transportation into our final analysis and report, using the GIS data that we were given and also through our own research of the literature that already exists.
Though this week started a little bit slow, after we conducted our first field day last Friday, Adina and Chris with one group member discussed how the first field day went and how to continue moving forward. First, we went over the field day itself and the survey so far. We presented the digital version of the survey to Adina and Chris, and they offered feedback throughout the week so we could finalize the survey. Adina shared the online survey with a semi-public organization, Commute.org, that may help with distribution. Unfortunately, they will not be able to do so until March 9th, which means the responses gathered would be too late to influence our pilot study.
We then went on to discuss how to gather more responses and fine tune our administration process. Chris and Adina suggested first standardizing the way in which we choose businesses to survey, to make sure it is consistent and we don’t bias our results in any way. They also suggested that we survey next at Redwood City, which we intended to do on Thursday. Though we were not able to do it this past week, two group members surveyed at the University Avenue area, and were able to gather two physical responses from retail businesses in about two hours, which is slower than our rate last Friday. However, they found that people were generally very receptive to the project and handed out many information cards to workers in about half of the businesses on University Ave. Tomorrow, more group members are planning to make the trip to Redwood City to continue expanding the survey catchment area.
Surprisingly enough, we generally noticed that people were easy to approach and talk about their commute. Over the two survey days, we noticed a variety of different responses though we approached folks at work. Some people were quick to express their opinions and were able to take about 10 minutes to answer the paper version of the survey. A greater number of people expressed a lot of interest in answering the survey, but because they were at work, they said that they would be happy to take the online survey on their own time. After we realized that the majority of people had this response, especially in the food and beverage sector, we printed out smaller, less ink-intensive cards with the link and our contact information so we could disseminate the link to more people. However we did realize that we had more luck surveying people in person in the retail sector, simply because they were able to step away from their work. Though we tried asking workers in busy businesses if there were folks on their breaks that were able to take the in-person survey, people were more open to receiving the link information. We crossed our fingers hoping that they would eventually take the survey after we left the business. Other options for “closing the loop” is re-scheduling a time to come back and survey workers in that particular business or incentivizing folks with random Amazon gift card drawings. We will continue to test our survey methods these next 2 weeks, but we have had some successes in getting both physical and online responses in these past two survey outings.
As a whole, folks seemed pretty despondent about the likelihood of the 101 ever having better traffic conditions, and some focused on the deteriorating condition of the road in their commute such as the presence of potholes on the road. In terms of the survey format, we made many shifts according to Chris and Adina’s suggestions and to folks responses during the survey times. For example, we changed a lot of the questions to not being required on the online form since there were a bunch of places that folks preferred not to fill out. When we gave our 101 Express Lanes pitch, we realized that the trifold handout that we made was invaluable with its visuals in explaining the concept of express lanes to workers.
During field days, we have not been able to get as many in-person responses as we would have liked in the past two outings. Considering we only have 8 responses so far and are aiming for a minimum of 30 to analyze in our final report, our number of survey days and our survey methodology could use improvement. However, with some contacts that Adina and Chris can provide with extra volunteers to help us collect more surveys and potential help from business organizations find a way to speed up surveying, we will be able to adequately present a concise presentation on survey results and methodology. Regardless, what we have tested thus far and our literature review will remain a large bulk of our final report to our community partners as we pass our this project to them. In the coming weeks as we continue to survey, we will simultaneously work on the final report and will analyze survey responses nearer the end of these ten weeks.
Another major factor to consider is the diversity of respondents. In these past two weeks, we only had the opportunity to survey folks in the Palo Alto area, which is not representative of the population we want to hear from. Therefore, looking forward to these next two weeks, we will focus our efforts on expanding our survey area. While our survey will be scaled up, we still want our results to be somewhat representative of the San Mateo worker population.
In this back half of the quarter, communication within the group has lapsed as the quarter has become busier, but this has been brought to our attention as we have begun to operationalize our surveying. We're excited that our community partners are optimistic about our efforts in this project; however, our capacity in these final weeks will be solely focused on creating a scalable survey model to continue getting worker responses in San Mateo County.
This week we primarily focused on finalizing and prototyping our project documents. We created our handout that we handed out to folks on El Camino Real on our first survey session, and we finalized and printed out our survey. We also digitized our survey so we could reach more folks during the survey period. Our site is under construction, but we linked and posted our contact information on the site. This was a pivotal week in beginning our survey period, and we will continue to iterate on our documents as needed.
Our survey field day was mostly in the rain from 3pm-5:30pm on Friday, which proved beneficial because of decreased store traffic. Sales and customer interactions took precedence over survey interactions, but any interruptions in survey completion were simply breaks, not endpoints. Employees mostly responded positively to our requests, despite being on the clock. This might have to do with the mom & pop style management near California Ave in Palo Alto.
Stores broke down into two categories: food and retail services. Interactions with the former were heavily influenced by the presence of customers, since food preparation demands full attention and hands-on meal assembly. One individual mentioned that he couldn't complete the survey because the store was shorthanded and he needed to cover multiple roles. Retail, however, proved more flexible because cashiers or receptionists typically had less demanding tasks to complete. Perhaps an added benefit was the customer service focus intertwined within these roles. Several individuals even asked follow-up questions about survey purpose. Overall nobody rejected brochures after declining to complete the paperwork.
Moving forwards, we plan to expand our survey areas to other stretches of El Camino. This will require significantly more planning than our first field day, which was readily accessible. Seeing as none of us have cars, we plan to use Caltrain to commute to our next few survey locations. Furthermore, after testing the survey on each other and on actual workers and receiving no negative feedback, it would be helpful to meet with Chris and Adina again to discuss next steps. We also need to establish continued contact with business organizations, to allow us to reach more people. Though we sent an initial email this week, we have yet to receive a response and should follow up soon.
Overall, the structure we had of having a field day of about 3 hours seem like a manageable and effective time commitment. However, there are certain things we must do to make our survey more streamlined so we can collect an adequate number of responses. None of the people we talked to today had any problems with the survey when we asked— however, it was certainly slow going. Over the span of 2.5 hours, we collected 5 confirmed survey responses, and handed out 6 additional brochures. It is very unlikely that all 6 of the people we gave brochures to will fill out the survey on their own time— with luck perhaps 1 or 2 will. However, this puts as at 6-7 responses per trip, which means a significant number of field days in the future.
1. Avoid verbal surveys if possible— the person we spoke to at Pieology agreed to take the survey verbally, as she needed to have her hands clean to prepare food. However, administering the survey this way took over half an hour, using a significant amount of her time. Moving forward, it may be helpful to leave the survey for food workers to take when their shift ends and picking up the survey the next day. We could also ask if any folks are on a break and are able to complete the survey then.
2. Pick slow moving times— though we were unhappy about the rain at first, it proved to be a huge blessing. Almost all of the businesses we visited were nearly empty, allowing workers time to complete the survey. Moving forward, we should keep this in mind, visiting stores during slow hours, on rainy days, and entering stores that appear more empty or slow moving. One thing we noticed is certain stores— such as repair shops or food places— have more constant work, as even when there are not customers, workers must still complete tasks. It would be best to offer these locations brochures, while focusing on in person surveys at empty retail locations.
3. Use incentives— something we are considering looking into going forward, mentioned by the economic planner who visited class, is offering Yelp reviews to locations we visit. This way, we can make sure that the survey is a reciprocal relationship, and make sure people have a good experience filling it out it. | http://sustainablecities.weebly.com/blog/archives/02-2017 |
One of the important duties of a nurse leader is to manage personnel and personnel budgets. In this assignment, you will assume the role of a nurse manager. You will use given data to make important decisions regarding budgets and staffing.
Some nurse managers have computer spreadsheets or software applications to help them make decisions regarding budgets and staffing. You will only need simple mathematical operations* to perform the needed calculations in this assignment because the scenario has been simplified. Furthermore, some data have been provided for you that a nurse leader might need to gather or compute in a real setting. Still, you will get a glimpse of the complexity of responsibilities nurse leaders shoulder regarding financial management.
• To calculate the percent of the whole a given number represents, follow these steps:
Change the percentage to a decimal number by moving the decimal twice to the left (or dividing by 100).
Multiply the new decimal number by the whole.
Example: What is 30% of 70?
30%= .30; (.30) × 70 = 21
• To find out what percentage a number represents in relation to the whole, follow these steps:
Divide the number by the whole (usually the small number by the large number).
Change the decimal answer to percent by moving the decimal twice to the right (or multiplying by 100).
Example: What percent of 45 is 10?
10 ÷ 45 = .222; so, 10 is 22% of 45.
* You will only need addition, subtraction, multiplication, and division.
Case Study
You are the manager for 3 West, a medical/surgical unit. You have been given the following data to assist you in preparing your budget for the upcoming fiscal year.
Patient Data
Average Daily Census (ADC): 55
Budget based on 6.2 average hours per patient day (HPPD)
(The 6.2 HPPD excludes the Nurse Manager and unit secretaries)
Staff Data
40.0 Variable FTEs
1.0 Nurse Manager
2.5 Unit Secretaries
43.5 Total FTEs
Staffing Mix
RN 75%
LVN 10%
NA 15%
Average Salary Scale per Employee
Benefits are 35% of annual salaries
Nurse Manager $76,000.00 per year
Registered Nurses (RN) $34.00 per hour
Licensed Vocational Nurses (LVN) $20.00 per hour
Nurse Aides (NA) $12.50 per hour
Unit Secretary (US) $11.00 per hour
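A small Python sketch of the arithmetic this case calls for is shown below. It assumes 1.0 FTE equals 2,080 paid hours per year (40 hours × 52 weeks) and that the 40.0 variable FTEs are split by the 75/10/15 staffing mix; confirm both conventions against your course materials before relying on the numbers.

```python
HOURS_PER_FTE = 2080          # assumed: 40 hr/week x 52 weeks
BENEFIT_RATE = 0.35

variable_ftes = 40.0
mix = {"RN": 0.75, "LVN": 0.10, "NA": 0.15}
hourly = {"RN": 34.00, "LVN": 20.00, "NA": 12.50, "US": 11.00}

salaries = {
    "Nurse Manager": 76_000.0,
    "Unit Secretaries": 2.5 * HOURS_PER_FTE * hourly["US"],
}
for role, share in mix.items():
    ftes = variable_ftes * share                        # e.g. RN: 30.0 FTEs
    salaries[role] = ftes * HOURS_PER_FTE * hourly[role]

total_salaries = sum(salaries.values())
total_with_benefits = total_salaries * (1 + BENEFIT_RATE)

for role, cost in salaries.items():
    print(f"{role:>16}: ${cost:,.0f}")
print(f"Total salaries: ${total_salaries:,.0f}")
print(f"With 35% benefits: ${total_with_benefits:,.0f}")
```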
ADDITIONAL INSTRUCTIONS FOR THE CLASS
Discussion Questions (DQ)
Initial responses to the DQ should address all components of the questions asked, include a minimum of one scholarly source, and be at least 250 words.
Successful responses are substantive (i.e., add something new to the discussion, engage others in the discussion, well-developed idea) and include at least one scholarly source.
One or two sentence responses, simple statements of agreement or “good post,” and responses that are off-topic will not count as substantive. Substantive responses should be at least 150 words.
I encourage you to incorporate the readings from the week (as applicable) into your responses.
Weekly Participation
Your initial responses to the mandatory DQ do not count toward participation and are graded separately.
In addition to the DQ responses, you must post at least one reply to peers (or me) on three separate days, for a total of three replies.
Participation posts do not require a scholarly source/citation (unless you cite someone else’s work).
Part of your weekly participation includes viewing the weekly announcement and attesting to watching it in the comments. These announcements are made to ensure you understand everything that is due during the week.
APA Format and Writing Quality
Familiarize yourself with APA format and practice using it correctly. It is used for most writing assignments for your degree. Visit the Writing Center in the Student Success Center, under the Resources tab in LoudCloud for APA paper templates, citation examples, tips, etc. Points will be deducted for poor use of APA format or absence of APA format (if required).
Cite all sources of information! When in doubt, cite the source. Paraphrasing also requires a citation.
I highly recommend using the APA Publication Manual, 6th edition.
Use of Direct Quotes
I discourage overutilization of direct quotes in DQs and assignments at the Masters’ level and deduct points accordingly.
As Masters’ level students, it is important that you be able to critically analyze and interpret information from journal articles and other resources. Simply restating someone else’s words does not demonstrate an understanding of the content or critical analysis of the content.
It is best to paraphrase content and cite your source.
LopesWrite Policy
For assignments that need to be submitted to LopesWrite, please be sure you have received your report and Similarity Index (SI) percentage BEFORE you do a “final submit” to me.
Once you have received your report, please review it. This report will show you grammatical, punctuation, and spelling errors that can easily be fixed. Take the extra few minutes to review instead of getting counted off for these mistakes.
Review your similarities. Did you forget to cite something? Did you not paraphrase well enough? Is your paper made up of someone else’s thoughts more than your own?
Visit the Writing Center in the Student Success Center, under the Resources tab in LoudCloud for tips on improving your paper and SI score.
Late Policy
The university’s policy on late assignments is 10% penalty PER DAY LATE. This also applies to late DQ replies.
Please communicate with me if you anticipate having to submit an assignment late. I am happy to be flexible, with advance notice. We may be able to work out an extension based on extenuating circumstances.
If you do not communicate with me before submitting an assignment late, the GCU late policy will be in effect.
I do not accept assignments that are two or more weeks late unless we have worked out an extension.
As per policy, no assignments are accepted after the last day of class. Any assignment submitted after midnight on the last day of class will not be accepted for grading.
Communication
Communication is so very important. There are multiple ways to communicate with me:
Questions to Instructor Forum: This is a great place to ask course content or assignment questions. If you have a question, there is a good chance one of your peers does as well. This is a public forum for the class.
Individual Forum: This is a private forum to ask me questions or send me messages. This will be checked at least once every 24 hours. | https://nursingpaperslayers.com/nurs-4455-module-3-assignment-financial-management-case-study-2/ |
When the COVID-19 pandemic started in 2020, it transformed all of our lives. People were forced to work from home, students had to go to class online, and internet access became more important than ever. The City, recognizing that Philadelphia was looking at months of remote learning, launched PHLConnectED—a program that provides free internet to eligible pre-K–12 households in need of reliable access.
While PHLConnectED has enabled over 17,500 internet connections, we know there’s more work to be done. To address digital equity strategically, we had one key question: how many people still don’t have internet access or a reliable connection in Philadelphia?
Enter: the Philadelphia Household Internet Assessment survey to analyze internet needs.
An important survey
The City is working with Wilco Electronic Systems and its team of partners to conduct a month-long survey to learn more about the digital access needs in Philadelphia. Wilco brought together a team that includes Centri Tech, SSRS, and national broadband expert John Horrigan to create, release, and analyze the survey data.
This survey is about more than just gathering numbers. We want to understand how people view available internet options and barriers to internet access. We also want to make an easy, useful process for residents to get connected. Responses to the survey will help measure the progress of current programs and will help the City create policies and programs to support digital equity.
The survey is conducted over the phone and offered in six languages. If you receive a phone call to take this survey, take it—you'll be helping Philadelphia overcome the digital divide by answering just a few simple questions.
What’s the point?
The goals of this survey are to:
- Learn how many households do not have reliable broadband internet or devices at home;
- Examine whether households have taken advantage of pandemic programs to get internet services and computers;
- Explore whether people are satisfied with their online connection to carry out key tasks;
- Identify barriers to getting in-home internet access; and
- Understand affordability of service for households.
The current public data about digital and internet access needs is outdated. To truly understand the digital divide in 2021, we need new data. The more people who take this survey, the better our future digital equity plans will be.
Philadelphians can look out for a caller ID that either appears as “SSRS” or a local “267” number when receiving a call.
So, are you getting a call to take a survey about digital access needs in Philly? Take the time to participate—you’ll be helping residents across the city get the support they need.
About the partners
Wilco is a Philadelphia-based telecommunications and systems integrator that has serviced multifamily housing developments and underserved communities for the past 40 years.
Centri Tech, one of Wilco’s key partners, is a national social impact venture that aims to optimize technology to improve lives with a focus on serving people at home. Wilco and Centri Tech have joined forces to meet the needs of today’s housing communities and deliver both infrastructure—as well as impact—for underserved communities.
John Horrigan is the former Research Director for the National Broadband Plan at the Federal Communications Commission. He is a nationally recognized expert in research into home broadband (in-home internet access) adoption. His past work focused on libraries, technology, and communities.
SSRS is a non-partisan public opinion research firm based in Pennsylvania that conducts studies for clients such as news organizations, universities, foundations, and local government. | https://www.phila.gov/2021-06-15-support-digital-equity-and-help-philadelphia-understand-its-digital-divide/
Student Begins Research On Migraine Triggers
My name is Stella and I am a senior in high school in New York State. I have spent four years in an original science research class studying the complexities of migraines in an attempt to help myself and others. It has been established that about 39 million Americans suffer from migraines and one billion people suffer worldwide, myself being one of them.1 I have had a constant migraine for the past five and a half years and have recently found a medication that provides some relief for me.
In an attempt to understand migraines better and their triggers, I have created a non-biased, seven-question survey and your responses would be greatly appreciated. Please note: I do not ask about specific triggers in this survey as I'll be reviewing patterns after the results are collected. I hope to present my work and findings at my school's symposium and hopefully continue my research on this topic in college.
Known migraine triggers
As many of you probably know, there are many established migraine triggers such as wine, certain cheeses, exercise, sleep cycles, and more, but what I intend to do through this survey is to establish a new migraine trigger that hasn’t been thoroughly studied yet.
How you can help
By taking my survey you can help contribute your data to my research project! The more responses, the better. I will be able to compile all of the responses and analyze the data to hopefully provide the migraine community with more information on migraine.
This survey is anonymous. Responses are being collected through a Google Form by my teacher. Email addresses will not be collected. My teacher will provide me with data to analyze in an anonymous form. Results will be reported in summary form (comparing average results from different groups of participants). Thank you! | https://migraine.com/living-migraine/student-research-survey
5 Approaching Survey Research
What Is Survey Research?
Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.
Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research; as long as there is manipulation of an independent variable (e.g. anger vs. fear) to assess an effect on a dependent variable (e.g. risk judgments).
Chapter 5: Learning Objectives
If your research question(s) center on the experience or perception of a particular phenomenon, process, or practice, utilizing a survey method may help glean useful data. After reading this chapter, you will
- Identify the purpose of survey research
- Describe the cognitive processes involved in responding to questions
- Discuss the importance of context in drafting survey items
- Contrast the utility of open and closed ended questions
- Describe the BRUSO method of drafting survey questions
- Describe the format for survey questionnaires
The heart of any survey research project is the survey itself. Although it is easy to think of interesting questions to ask people, constructing a good survey is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing surveys to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.
Cognitive Processes of Responses
To best understand how to write a ‘good’ survey question, it is important to frame the act of responding to a survey question as a cognitive process. That is, there are involuntary mechanisms that take place when someone is asked a question. Sudman, Bradburn, & Schwarz (1996, as cited in Jhangiani et al., 2012) illustrate this cognitive process.
Framing the formulation of survey questions in this way is extremely helpful to ensure that the questions posed on your survey glean accurate information.
Example of a Poorly Worded Survey Question
How many alcoholic drinks do you consume in a typical day?
- A lot more than average
- Somewhat more than average
- Average number
- Somewhat fewer than average
- A lot fewer than average
Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “typical day” is a typical weekday, typical weekend day, or both. Even though Chang and Krosnick (2003, as cited in Jhangiani et al. 2012) found that asking about “typical” behavior has been shown to be more valid than asking about “past” behavior, their study compared “typical week” to “past week” and may be different when considering typical weekdays or weekend days). Once respondents have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does “average” mean, and what would count as “somewhat more” than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink a lot more than average, they might not want to report that for fear of looking bad in the eyes of the researcher, so instead, they may opt to select the “somewhat more than average” response option.
From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.
Context Effects on Survey Responses
Again, this complexity can lead to unintended influences on respondents’ answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990, as cited in Jhangiani et al. 2012). For example, there is an item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988, as cited in Jhangiani et al. 2012) . When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.
The response options provided can also have unintended effects on people’s responses (Schwarz, 1999, as cited in Jhangiani et al. 2012). For example, when people are asked how often they are “really irritated” and given response options ranging from “less than once a year” to “more than once a month,” they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from “less than once a day” to “several times a month,” they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours. To mitigate order effects, rotate questions and response options when there is no natural order. Counterbalancing or randomizing the order in which questions are presented in online surveys is good practice and can reduce response-order effects: among undecided voters, for example, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first!
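Where a survey is administered programmatically, this kind of rotation can be automated. The short Python sketch below is only an illustration of per-respondent randomization (it is not tied to any particular survey platform, and the questions and options are invented); items with a natural order, such as rating scales, are left unshuffled.

import random

# Hypothetical questionnaire items; in practice these would come from your own instrument.
questions = [
    {"id": "q1", "text": "How satisfied are you with your commute?",
     "options": ["Very satisfied", "Somewhat satisfied", "Neither satisfied nor dissatisfied",
                 "Somewhat dissatisfied", "Very dissatisfied"],
     "has_natural_order": True},   # rating scale: keep its order
    {"id": "q2", "text": "Which of these news sources do you use?",
     "options": ["Television", "Radio", "Newspapers", "Social media"],
     "has_natural_order": False},  # no natural order: rotate per respondent
]

def build_form(questions, respondent_seed):
    """Return a per-respondent ordering of questions and response options."""
    rng = random.Random(respondent_seed)                # seeded so each form is reproducible
    shuffled = rng.sample(questions, k=len(questions))  # randomize question order
    form = []
    for q in shuffled:
        opts = q["options"] if q["has_natural_order"] else rng.sample(q["options"], k=len(q["options"]))
        form.append({"id": q["id"], "text": q["text"], "options": opts})
    return form

# Each respondent sees an independently counterbalanced form.
print(build_form(questions, respondent_seed=12345))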
Writing Survey Items
Types of Items
Questionnaire items can be either open-ended or closed-ended. items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.
- “What is the most important thing to teach children to prepare them for life?”
- “Please describe a time when you were discriminated against because of your age.”
- “Is there anything else you would like to tell us about?”
Open-ended items are useful when researchers do not know how participants might respond or when they want to avoid influencing their responses. Open-ended items are more qualitative in nature, so they tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. Another disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the answer is unsure or for quantities which can easily be converted to categories later in the analysis.
Closed-ended items ask a question and provide a set of response options for participants to choose from.
Examples of Closed-Ended Questions
How old are you?
- Under 18
- 19-34
- 35-49
- 50-70
On a scale of 0 (no pain at all) to 10 (the worst pain ever experienced), how much pain are you in right now?
Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are more quantitative in nature, so they are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.
All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) to which they belong. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from.
The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches them into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993, as cited in Jhangiani et al. 2012). Although you often see scales with numerical labels, it is best to only present verbal labels to the respondents but convert them to numerical values in the analyses. Avoid partial, lengthy, or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics.
Writing Effective Items
We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants’ responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000, as cited in Jhangiani et al. 2012). An acronym, BRUSO stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also specific so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are “double barreled.” They ask about two conceptually separate issues but allow only one response.
Example of a “Double Barreled” question
Please rate the extent to which you have been feeling anxious and depressed
- Very
- Somewhat
- Not at all
Note: The issue with this question is that anxiety and depression are two separate constructs and should be asked about in separate items.
Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way. The best way to know how people interpret the wording of the question is to conduct a pilot test and ask a few people to explain how they interpreted the question.
For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are mutually exclusive. Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an ‘Other’ category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.
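When a “choose all that apply” item is exported as a single delimited field, it is usually easiest to spread it into one indicator column per category before analysis. The pandas sketch below is a minimal illustration with invented category values; the semicolon separator and column names are assumptions, so adjust them to match your own export.

import pandas as pd

# Hypothetical export: multi-select answers stored as one semicolon-delimited string per respondent.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "news_sources": ["Television;Radio", "Social media", "Radio;Newspapers"],
})

# One 0/1 indicator column per category; respondents can belong to several categories.
indicators = df["news_sources"].str.get_dummies(sep=";").add_prefix("source_")
df = pd.concat([df, indicators], axis=1)

print(df)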
For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint.
Example of an unbalanced versus balanced rating scale
Unbalanced rating scale measuring perceived likelihood
Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely
Balanced rating scale measuring perceived likelihood
Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely
Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. However, including middle alternatives on bipolar dimensions can be used to allow people to choose an option that is neither.
Formatting the Survey
Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000, as cited by Jhangiani et al. 2012). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. This means that the researcher has only a moment to capture the attention of the respondent and must make it as easy as possible for the respondent to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.
The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not always used in survey research (when the research is of minimal risk and completion of the survey instrument is often accepted by the IRB as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.
The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.
Coding your survey responses
Once you’ve closed your survey, you’ll need to identify how to quantify the data you’ve collected. Much of this can be done in ways similar to methods described in the previous two chapters. Although there are several ways by which to do this, here are some general tips:
1. Transfer data: Transfer your data to a program which will allow you to organize and ‘clean’ the data. If you’ve used an online tool to gather data, you should be able to download the survey results into a format appropriate for working with the data. If you’ve collected responses by hand, you’ll need to input the data manually.
2. Save: ALWAYS save a copy of your original data. Save changes you make to the data under a different name or version in case you need to refer back to the original data.
3. De-identify: This step will depend on the overall approach that you’ve taken to answer your research question and may not be appropriate for your project.
4. Name the variables: Again, there is no ‘right’ way to do this; however, as you move forward, you will want to be sure you can easily identify what data you are extracting. Many times, when you transfer your data the program will automatically associate data collected with the question asked. It is a good idea to name the variable something associated with the data, rather than the question.
5. Code the attributes: Each variable will likely have several different attributes, or layers. You’ll need to come up with a coding method to distinguish the different responses. As discussed in previous chapters, each attribute should have a numeric code associated so that you can quantify the data and use descriptive and/or inferential statistical methods to either describe or explore relationships within the dataset.
This is relatively simple to accomplish with closed-ended questions. Because you’ve ‘forced’ the respondent to pick a concrete answer, you can create a code that is associated with each answer. For example, suppose respondents were asked to identify their region, given a list of geographical regions, and instructed to pick one. The researcher then created a code for the regions. In this case, 1= West; 2= Midwest; 3= Northeast; 4= Southeast; and 5= Southwest. If you’re working to quantify data that is somewhat qualitative in nature (i.e., open-ended questions), the process is a little more complicated. You’ll need to create themes or categories, classify similar responses under them, and then assign codes to those themes or categories.
6. Create a codebook: This. Is. Essential. Once you begin to code the data, you will have somewhat disconnected yourself from the data by translating the data from a language that we understand to a language which a computer understands. After you run your statistical methods, you’ll translate it back to the native language and share findings. To stay organized and accurate, it is important that you keep a record of how the data has been translated.
7. Analyze: Once you have the data inputted, cleaned, and coded, you should be ready to analyze your data using either descriptive or inferential methods, depending on your approach and overarching goal. A minimal coding-and-analysis sketch follows this list.
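As a concrete illustration of steps 5 through 7, the Python/pandas sketch below codes two invented variables with a small codebook (the region codes mirror the example above) and runs a couple of descriptive statistics. The column names, labels, and codes are all hypothetical.

import pandas as pd

# Hypothetical cleaned survey export (steps 1-4 already done).
raw = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "region":        ["West", "Midwest", "Southeast", "West"],
    "satisfaction":  ["Often", "Sometimes", "Always", "Rarely"],
})

# The codebook records how verbal labels translate to numeric codes (step 6);
# keep it in a documented file so the translation can always be traced back.
codebook = {
    "region":       {"West": 1, "Midwest": 2, "Northeast": 3, "Southeast": 4, "Southwest": 5},
    "satisfaction": {"Never": 1, "Rarely": 2, "Sometimes": 3, "Often": 4, "Always": 5},
}

coded = raw.copy()
for variable, codes in codebook.items():
    coded[variable] = coded[variable].map(codes)   # step 5: apply the numeric codes

# Step 7: simple descriptive analysis of the coded data.
print(coded["satisfaction"].describe())
print(coded.groupby("region")["satisfaction"].mean())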
Key Takeaways
- Surveys are a great method to identify information about perceptions and experiences
- Question items must be carefully crafted to elicit an appropriate response
- Surveys are often a mixed-methods approach to research
- Both descriptive and inferential statistical approaches can be applied to the data gleaned through survey responses
- Surveys utilize both open and closed ended questions; identifying which types of questions will yield specific data will be helpful as you plan your approach to analysis
- Most surveys will need to include a method of informed consent, and an introduction. The introduction should clearly delineate the purpose of the survey and how the results will be utilized
- Pilot tests of your survey can save you a lot of time and heartache. Pilot testing helps to catch issues in the development of item, accessibility, and type of information derived prior to initiating the survey on a larger scale
- Survey data can be analyzed much like other types of data; following a systematic approach to coding will help ensure you get the answers you’re looking for
- This section is attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, which is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
- The majority of content in these sections can be attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, which is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
Survey research: a mixed methods approach using self-reports of respondents who are sampled using stringent methods.
Open-ended question: a type of survey question that allows the respondent to insert their own response; typically qualitative in nature.
Closed-ended question: a type of survey question which forces a respondent to select a response; no subjectivity. | https://boisestate.pressbooks.pub/basicguidetopracticalresearch/chapter/survey-research/
To delete the individual response, go to Analyze > Dashboard > Responses. Here, you will find all the individual responses. Look for the response you want to delete.
By clicking on the respondent's name the individual rating will open. In the left navigation bar you can click on 'Delete the response'.
How to restore deleted responses?
Do you want to recover a response? Go to Settings > Survey settings > Deleted responses. Here you will find a list of deleted responses to your surveys. You can restore these responses by clicking on Restore behind the response. | https://support.starred.com/docs/deleting-a-individual-response |
Quantitative market research aims to identify standardized facts and statistics that can reliably guide business decisions. But no matter how well data is analyzed, if it’s not accurate or complete, findings based on it are flimsy at best.
Think of it like baking a cake – if you use poor ingredients or measure incorrectly, the final product won’t be good. The same concept applies to data. That’s why ensuring you’re using quality data in quantitative research is so critical.
What is Quality Data?
Data quality gauges the value of data based on accuracy, completeness, consistency, reliability, and whether it’s current. Defining data quality can be complex, but it basically means that the data fit their intended purpose and are closely related to the construct they are intended to measure.
Quantitative, Qualitative Data Focus on Different Points
Quantitative research is sometimes preferred over qualitative research because it is more scientific, objective, and focused. However, qualitative research also yields valuable results because it helps define an issue or develop a solution to a problem. In a nutshell, quantitative data is countable or measurable and numbers based. Qualitative data are descriptive and more related to words.
Examples of quantitative data are a company’s financial data, marketing results, statistical data on demographics, or revenue in dollars. Quantitative research takes a deep dive into specific markets and customer trends and provides insights that help companies better understand industry patterns.
Qualitative data is less concrete than its quantitative counterpart because it looks at certain qualities or characteristics. It is collected using questionnaires, interviews, and observations, and is generally summarized in narrative form. The data may include descriptive words that can be reviewed for patterns or meaning, sometimes through coding. Coding allows researchers to categorize qualitative data to identify themes related to the research questions and analyze them.
Methods to Measure Data Quality in Quantitative Research
Once the research phase of a project is underway and data collection begins, it’s important to take a close look at how reliable and accurate that data is before doing an in-depth analysis or coming to any conclusions. Here are some key areas used to gauge the quality of data to make sure findings are adjusted and weighed appropriately.
Correctness of answers:
Correct data accurately measures a construct of interest such as customer satisfaction or people’s intention to purchase a particular product or service.
Assessing the correctness of data involves evaluating whether people provided consistent responses to similar items. Researchers may ask survey questions that are either synonymous (“I would buy this product” or “I plan to buy this product in the future.”) or antonymous (“I’d consider paying for this service” or “I would probably never need this service.”). The distance between each person’s responses to each question would then be reviewed.
Consistency in survey responses:
Evaluating the consistency of individual responses at the group level comes down to measures of reliability. This tells researchers how well a test measures what it should measure. Researchers look at response consistency by people surveyed by identifying logical contradictions in their responses or inconsistent answers to specific questions designed to elicit the same information (e.g., “What is your age?” “What year were you born?”). People who give too many inconsistent responses can be dropped from the dataset to avoid skewing overall results.
Completeness of responses:
In a complete dataset, most people answer all the survey questions, and those who start the survey finish it. Researchers may specify before collecting data that people must have viewed or responded to specific questions that align with key outcome measures.
Respondent credibility:
Having credible datasets means respondents answer the survey questions honestly. Researchers use several tools to uncover individual participant responses that lack credibility. These tools range from measures designed to detect overly positive or negative self-presentation to methods that assess people’s attention, effort, unusual response patterns, speed of completing the survey, and intentional false responses to questions.
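Once responses are collected, the checks described above can be scripted. The pandas sketch below is a simplified illustration only: the column names (age, birth_year, duration_seconds, q10) and the thresholds for flagging inconsistent or speeding respondents are assumptions to adapt to your own instrument.

import pandas as pd

df = pd.DataFrame({
    "respondent_id":    [1, 2, 3, 4],
    "age":              [34, 51, 29, 42],
    "birth_year":       [1990, 1973, 2001, 1982],   # 2001 contradicts a reported age of 29
    "duration_seconds": [412, 610, 45, 380],        # 45 seconds is implausibly fast
    "q10":              ["Agree", None, "Agree", "Disagree"],
})
survey_year = 2024

# Consistency: reported age should roughly match the age implied by birth year.
implied_age = survey_year - df["birth_year"]
df["inconsistent_age"] = (implied_age - df["age"]).abs() > 1

# Completeness: flag respondents who skipped a key outcome question.
df["incomplete"] = df["q10"].isna()

# Credibility: flag "speeders" who finished far faster than the median respondent.
df["speeder"] = df["duration_seconds"] < 0.3 * df["duration_seconds"].median()

flags = ["inconsistent_age", "incomplete", "speeder"]
print(df.loc[df[flags].any(axis=1), ["respondent_id"] + flags])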
Expert Quantitative Marketing Research
ResponsivMR’s team of experts has decades of experience providing advanced analytics consulting support to market research agencies of all sizes. Our goal is to make the data work for you by leveraging standard statistical tools to analyze data and provide a one-stop solution for all facets of fieldwork. To find out how ResponsivMR can help your firm get the most value from survey data contact us at 561-983-4949. | https://www.responsivmr.com/news/how-to-effectively-measure-data-quality-in-quantitative-market-research/ |
Survey Options (EX)
What's on This Page:
About Survey Options
Survey options is where you go to change many general survey settings, such as the timeframe the survey’s available, the addition of a back button or visible question numbers, and more. To access these settings, click Survey options in the Survey tab.
Survey options is divided into several sections, as described below.
General
The General section contains survey options related to the survey’s default language, the title and description displayed in search results, question numbering, and Expert Review.
Responses
The survey options in the Responses section affect how responses are recorded and how respondents experience the survey-taking process. Here, you’ll find settings related to how respondents can change their answers during a survey session, the error messages they should receive, what to do with incomplete responses, and more.
Survey Retakes
When you initially invite participants to take the engagement survey, the invite should include a link to the engagement survey that is unique to them. When selected, Survey retakes allows participants to retake the survey using this unique link. All they have to do to retake the survey is revisit the link they received in the invitation and select the link to retake the survey.
Multiple responses
If you see the option Multiple responses, you’re probably using an Employee Lifecycle project. Check out the Allowing Lifecycle Participants to Submit Multiple Responses page to learn more about this option.
Incomplete survey responses
Attention: If a response in progress meets both of the following conditions, then it will be deleted regardless of your incomplete survey response settings:
- The response in progress is for a participant in your project who has already submitted a response.
- Your project does not have the multiple responses settings enabled.
Otherwise, incomplete survey response options are the same in EmployeeXM projects as they are in the survey platform. See Incomplete Survey Responses for more details.
Security
The survey options in the Security section can increase your survey’s security and protect it from unwanted responses.
Use Anonymous Link
When selected, Use Anonymous Link generates a general link to your survey. This allows you to copy a link to the engagement survey and paste it on a website instead of emailing all your participants. This option also creates a means for you to let participants opt into the survey instead of requiring it for everyone. Note that you will either need to add an org hierarchy question type to your survey or an authenticator to your survey flow for each participant’s response to be placed somewhere in your hierarchy.
For help determining if you should use an anonymous link for your survey, see Anonymous vs. Confidential Employee Surveys.
If you are only distributing via the anonymous link, make sure to manually activate your project so your participants can take the survey!
Add a Referral Website URL
See Add a Referral Website URL.
Prevent multiple sessions
Sometimes when you use an anonymous link, you may want to prevent participants from taking a survey more than once. When selected, Prevent multiple sessions helps keep participants from taking a survey multiple times by placing a cookie on their browser when they submit a response. The next time the participant clicks on the survey link, Qualtrics will see this cookie and not permit them to take the survey.
“Prevent multiple sessions” is a great deterrent, but can be circumvented by savvy participants clearing their browser cookies, switching to a different web browser, or using a different device. With surveys that have a higher incentive to cheat, consider distributing your survey using an authenticator or by messaging the participants with the Survey Retakes option deselected.
Bot detection
Look for respondents that could be bots and flag their responses. For more details, see the Bot Detection section of the Fraud Detection support page.
Security Scan Monitor
Prevent security scanners from accidentally starting a new session on your survey. See the Security Scan Monitor section of the Fraud Detection support page.
RelevantID
Analyze a respondent’s browser, operating system, and location to prevent fraudulent responses. For more details, see the RelevantID section of the Fraud Detection support page.
Prevent Indexing
See Prevent Indexing.
Post-Survey
The survey options under Post-Survey concern what happens after a survey ends. For Engagement and Lifecycle projects, there is only one option listed here: Message for revisiting a completed survey. Thank you messages and triggers are not available in EmployeeXM projects.
Advanced
In the Advanced section of your survey options, you can find some additional settings that can enhance your survey.
- Translations: Translate your survey into multiple languages, allowing respondents to choose which language to view the survey in. | https://www.qualtrics.com/support/employee-experience/creating-ee-project/survey-tab/survey-options-ee/?parent=p001287 |
Note: This post is very dated. Please see the YouTube video from the Tableau 2014 conference and the Visualing Survey Data page for more up-to-date information.
Overview
In my previous blog post I discussed how to reshape your survey data so that it plays nicely with Tableau. In this post we’ll explore how to visualize three types of survey questions:
- Yes / No / Maybe
- Check-All-That-Apply
- Likert Scale
Note: A fully working collection of interactive visualization may be found at the end of this blog post.
Connecting to the Data
We need to connect to the reshaped data source we created in Part 1. For those of you that are following along and downloaded the sample data, there are several tabs in the source Excel file. Make sure you connect to the one marked “Use This One” as shown below.
When asked how you want to connect to the data, select “Import all data”.
There are two reasons for doing this:
- With larger data sets your performance will be much faster; and,
- You will need Tableau’s COUNTD() function and that is not available when you connect directly to Excel (or Access, for that matter.)
Your dimensions and measures shelves should look like the one shown here.
Note that the ID “Measure” should in fact be a Dimension, so when you get a chance, drag the field up to the Dimensions shelf.
So, just who are these people?
Before visualizing any questions I like to get a sense of who participated in the survey. I’ll start by looking at Gender.
Notice that I place COUNTD(ID) on the columns shelf, indicating that I want to count the number of unique respondents. Had I used the regular COUNT() function I would count all of the rows that contain a respondent’s ID, and that is a large number given how we’ve reshaped the data.
The “Null” value indicates that some people chose not to respond to this demographic question. I suggest creating an alias called “Did not respond” to handle these non-responses.
From here I will create views for any other demographic groups (e.g., Generation, Location, etc.).
Visualizing a Yes / No / Maybe question
Let’s start by seeing what questions are available by placing Question on the Rows shelf.
As you may recall from the previous blog post, our sample survey contained the question “Do you plan to vote in the upcoming election?” for which there were three possible responses (“Yes”, “No”, and “I don’t know.”)
As I’m not interested in all the questions in the survey I need to add a filter that just keeps the first question in the list.
To see preliminary results for this question I will place “Response” on the rows shelf and COUNTD(ID) on the columns shelf. While not particularly pretty, it does show us something useful.
I will next alias the Question (and Responses), sort the Responses, and place Response on the color shelf. I will also add a table calculation to the Columns shelf so that instead of showing a count we show the percentage of the total.
Now we’re talking!
The nice thing about the way we have this set up is that it’s easy to break down the results by our demographic segments simply by dragging a dimension to either the rows or column shelf, as shown below.
Check-All-That-Apply Questions
To get a sense of what we have going on for this question I duplicated the sheet from the previous visualization and then changed the Question filter so that I’m looking at the check-all-that-apply questions. Here are the filter settings.
Do you now see the major advantage of having reshaped the data? Had I kept the one column / one question approach I would have to use Tableau’s Measure Names / Measure Values feature and I would likely have to build a separate calculated field for every question I want to analyze. With reshaped data I just put Question on the Rows shelf and filter by the questions I want to analyze.
Tip: I will usually create a set for each question grouping I want to review, sort of a “named filter”, as it were. In the accompanying workbook you will find two sets, one called “Check All That Apply” and the other called “LikertQuestions”. You can then drag either of these sets to the filter shelf.
Here’s the resulting visualization.
Well, it’s a start, but it’s not a very good visualization. For one thing, I only care about the percentage of people that selected “yes”; I don’t even need to consider the “no” folks.
I need to fashion a formula that counts how many “yes” responses there were for each question, then divides by the number of respondents. This would be simple if the responses for these questions were 1 and 0, but unfortunately the responses are labels that look like this:
1: Yes 0: No
So, I need to perform a little string arithmetic. The calculated field that does what we want is shown here.
SUM( IIF (Left([Response],1)="1" ,1,0) ) /COUNTD([ID])
And a visualization that employs this formula is shown here.
That’s much better. And of course, I can add a demographic dimension to glean additional insights. In the screen below I show a comparison between male and female respondents, using an overlapping bar chart.
Likert Scale Questions
Note: Before continuing, you must promise that upon reading the rest of this blog post you read the following two blog posts:
http://www.datarevelations.com/the-likert-question-question.html
http://www.datarevelations.com/likert-scales-the-final-word.html
You promise?
Okay, since you promised…
As with the previous two question types, the first thing I need to do is place “Question” on the filters shelf and select the group of questions we want to analyze in this visualization:
Now I’ll create a stacked bar chart, by placing “Question” on the Rows shelf, CNTD(ID) on the Columns shelf, and response on the Color shelf.
Next I need to replace the CNTD(ID) measure with a table calculation that shows the percent of the total based on Response.
Since the questions are sorted in alphabetical order it’s hard to gauge which questions garner the greatest amount of agreement. This is where using the Likert scores (4 for “Very high degree”, 3 for “High degree”, etc.) can help add clarity.
While I go into much greater detail in the two blog posts YOU PROMISED TO READ, I’ll walk you through how to calculate the average Likert score and superimpose it atop the stacked bars.
In our example, the value I need is the first character in the Response string, so we just need to grab the left-most character and convert it into a number. I can do that using the following calculated field called LikertValue.
Float(Left([Response],1))
Next, I’ll place the Average of this calculated field on the Columns shelf next to the existing table calculation (don’t worry that the result will not look good.) I now need to indicate that we have multiple marks and make the first chart a Bar chart and the second a Circle chart, as shown here.
This still looks pretty dumb, but if I remove “Response” from the Color shelf, then right-click AVG(LikertValue) on the columns shelf and select “Dual Axis” we get something that starts to be useful.
Finally, if I show labels for the circle chart and sort the questions by Average LikertValue we get something that tells a clear story.
Conclusion
In this and the accompanying blog post we’ve explored how to reshape survey data and visualize three different types of survey question. Granted, there are other question types (e.g., “add up to 100”, “rank these in order”, etc.) as well as intra-question analysis (e.g., create a scatterplot showing respondent’s age vs. annual income) that are a bit trickier to implement, but what we’ve delved into here should cover a wide variety of needs.
Question – any interest in a one or two day course devoted to visualizing survey data in Tableau? | http://www.datarevelations.com/2012/01 |
Open-ended survey responses — in which respondents answer poll questions in their own words — are never easy for researchers to analyze. But they can present even bigger challenges when they are written in multiple languages. Few research organizations are likely to have staff members who are fluent and well-versed enough in all of the relevant languages and cultures to analyze such responses with confidence.
Pew Research Center recently tackled this challenge for a cross-national survey in which people around the world described, in their own words, where they find meaning in life. The analysis plan for this project hinged on developing a closed-ended codebook and applying it to nearly 19,000 open-ended responses, drawn from 17 societies and spanning 12 languages.
In this post, we provide more background about this project and look more closely at a key question we faced along the way: whether to use professional translators or Google Translate to help make sense of the many thousands of open-ended responses we received.
Background
At the beginning of this project, we wrestled with three options for translating and coding the more than 11,000 non-English responses we collected about where people find meaning in life.
The first option was to hire a translation firm to translate the responses into English so our researchers could read them and apply a Center-developed codebook to them. On the plus side, this approach would allow researchers to read each verbatim response themselves and have total control over how the codebook would be applied. On the negative side was the huge expense this method would entail. It would be costly not just financially, but also in terms of time spent, since it would require both a translator to read every response as they translated it and a researcher to read each response to code it, potentially extending the length of the project.
A second option was to translate all of the non-English responses using Google Translate. This approach would be free and extremely fast, while again allowing researchers to have total control over how to apply the codebook. However, this approach might have introduced the possibility of error if the Google-translated English text was inaccurate or difficult for researchers to parse — a key concern since the project involved nuanced, highly personal responses about where people find meaning in their lives.
A third option — and the one that we ended up using — was to employ a professional translation firm, but rather than having the translators translate the responses into English, they would simply code the answers in their original language, using the Center’s codebook. This method had a few advantages. First, it would only cost about a quarter as much as having full translations for each verbatim response. Second, it would allow local experts to offer their cultural expertise to ensure that all nuances in the verbatim responses were captured. Third, it would be relatively fast since the translators themselves would do the coding. However, this approach had one large drawback: Researchers at the Center would not be in control of how the codebook was applied and did not necessarily have the language skills needed to verify that it was being applied correctly.
In the end, we mitigated this problem by having the professional translators send us full verbatim translations and their applied codes for a random 10% of open-ended non-English responses from each country and language pairing in our study (e.g., Mandarin in Singapore, Mandarin in Taiwan, Spanish in Spain, Spanish in the United States, etc.). We were then able to clarify any confusion the translators had with the Center’s codebook and verify that the translators and our own researchers had appropriately high levels of intercoder reliability before they began independently coding the responses.
Exploratory analysis
While we were satisfied with the option that we ultimately chose, we still wanted to understand whether Google Translate would be a viable option for potential future research projects of this kind. To do this, we went back and recoded the 10% of non-English responses that we had professionally translated — but this time, we coded versions from Google Translate instead. In the remainder of this post, we’ll explore how our results would have been similar (or different) had we applied our codebook to English translations from Google Translate instead of using professional translators who applied the codebook themselves.
As a benchmark, we’ll compare these two approaches against the decisions our own coders made using the professionally translated versions of the same responses. The two measures presented in this analysis are (1) the intercoder reliability of in-house researchers coding professionally translated responses versus in-house researchers coding responses translated by Google Translate (“In-house researcher reliability using Google Translate” in the table below) and (2) the intercoder reliability of in-house researchers coding professionally translated responses versus professional local researchers coding original-language responses themselves (“Professional translator reliability using native language” in the table below).
To evaluate our intercoder reliability, we used two different versions of Krippendorff’s alpha, a common measure of coding agreement. When looking at each of the 20 codes in our codebook by themselves, we computed alphas traditionally using simple binary flags: whether a coder labeled the response with the code or not. When looking at the codebook overall across all 20 codes, we used a version of Krippendorff’s alpha weighted by Measuring Agreement on Set-Valued Items (MASI) distance. Whereas a traditional Krippendorff’s alpha treats any disagreement on any code as a complete disagreement, weighting by MASI distance provides a small amount of “partial credit” if coders agree on most of the labels in the codebook.
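For readers who want to reproduce this kind of reliability check, one option is NLTK's agreement module, which supports Krippendorff's alpha with a pluggable distance function (including MASI distance for set-valued codes). The sketch below is only an illustration with invented coders, responses, and codes; it is not the Center's actual pipeline.

from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import masi_distance

# Each record is (coder, item, set of codes applied to that open-ended response).
data = [
    ("coder_A", "resp_1", frozenset({"family", "friends"})),
    ("coder_B", "resp_1", frozenset({"family"})),
    ("coder_A", "resp_2", frozenset({"health"})),
    ("coder_B", "resp_2", frozenset({"health"})),
    ("coder_A", "resp_3", frozenset({"career", "finances"})),
    ("coder_B", "resp_3", frozenset({"finances"})),
]

# MASI distance gives partial credit when coders agree on some, but not all, of the codes.
task = AnnotationTask(data=data, distance=masi_distance)
print("MASI-weighted Krippendorff's alpha:", task.alpha())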
Overall, our in-house coders seemed to code Google-translated responses with the same consistency as professional translators using original-language responses. Across the full non-English sample, the MASI-distance Krippendorff’s alphas for the professional translators and our in-house coding of Google-translated responses were 0.70 and 0.67, respectively. However, while the two methods appeared to be comparable on the whole, our results suggest that Google Translate may perform notably worse in certain languages and in certain contexts.
Results by language
We received roughly 1,000 non-English responses in each language in our survey, with the exception of Malay (under 100) in Singapore. Due to these relatively small sample sizes, we could only get a general sense of intercoder reliability for each of the languages in our sample. Still, the performance of our in-house coders using Google Translate was similar to that of the native-language coders at the professional translation company for most of the languages included in our study.
There were two notable exceptions to this pattern: For Greek and Korean responses, our Google Translate coding was markedly worse than the translators’. With Greek responses, coding by professional translators resulted in a Krippendorff’s alpha of 0.76. By contrast, in-house coders using Google-translated responses only reached a score of 0.58. Similarly, the Krippendorff’s alpha for Korean responses coded by professional translators was 0.83, while the same responses coded by in-house coders using Google Translate had a Krippendorff’s alpha of 0.66.
Consider one Greek-language example that produced a notable difference between Center coders using Google Translate and professional translators. The Google-translated response — “No supply matter matter data of the statements we are” — made it difficult for our researchers to assign any of the codes in the codebook. On the other hand, the professional translation — “There is nothing that satisfies me now given the situation we live in” — clearly fell into the “difficulties and challenges” category, according to the rules of our codebook.
Results by code
When considering results by each code individually, the findings were generally similar: Both the in-house coding of Google-translated responses and the coding of nontranslated responses by an outside firm resulted in sufficient agreement with our baseline on most topics. But there were two exceptions: “Don’t know/refused” responses and mentions of service and civic engagement.
In-house coders using Google-translated responses were considerably less likely to agree with our baseline about when a participant did not answer the question (Krippendorff’s alpha of 0.45). By contrast, agreement by the outside firm reached 0.77. Part of this confusion likely stemmed from the fact that many Google-translated responses said “none” — making it difficult, without sufficient knowledge of the language and context, to clearly parse as people saying they had nothing which gave them meaning in life (which would be coded as “difficulties and challenges”) versus those who simply gave no response (which should have been coded as “don’t know/refused”).
The same was true for mentions of things like service and civic engagement. In-house coders achieved agreement with the baseline of just 0.56 on Google-translated responses, while the outside firm reached an agreement score of 0.73.
Across other codes, there were few differences. Both in-house coders and outside professionals achieved sufficient agreement with the baseline (at or above 0.70) for the remaining categories included in the analysis.
Conclusion
Our findings suggest that Google Translate may provide an adequate substitute for professional translation, but only for some languages and in some contexts.
While we were able to achieve acceptable intercoder reliability in most cases, our coding of Google-translated responses was noticeably worse than the native-language professional translators for Greek and Korean responses. And for a handful of particular topics — response refusals, as well as mentions of service and civic engagement — the Google-translated responses seemed to lack crucial context. In these cases, our own coding of the Google-translated versions of the responses was very different from how we coded the professionally translated versions, while the professional translators who coded the responses in their native language tended to agree with how we coded their translations ourselves.
Of course, there are many considerations in any research project, and the trade-offs related to coding complexity and language must be stacked up against other concerns, including cost and time. Overall, there is no one-size-fits-all answer for all research projects, and Pew Research Center will remain open to various options depending on the particular project at hand. | https://www.pewresearch.org/decoded/2021/12/17/the-pros-and-cons-of-using-professional-translators-vs-google-translate-to-analyze-open-ended-survey-responses/ |
Published Date: November 2006
Series: DHHS publication ; no. (NIOSH) 2007-102
Alternative Title: Air transportation safety in Alaska
Personal Author: Conway, George A.; Mode, Nicolle A.; Manwaring, Jan C.; Berman, M.; Hill, A.; ...
Description:"Because aviation crashes are one of the leading causes of occupational fatalities in Alaska, investigators at the Alaska Field Station of the National Institute for Occupational Safety and Health contracted with the Institute of Social and Economic Research at the University of Alaska Anchorage to administer two statewide aviation safety surveys, one of air carrier operators and one of active commercial pilots. Both surveys addressed pilot and company demographics; number of pilot flight hours (total, aircraft type, and instrument hours); flying experience in Alaska; and attitudes about safety, flying practices, and other salient risk factors. Surveys from 153 commuter, air taxi, and public-use operators were received at a 79% response rate. Survey results were used to create an industry profile, compare operators’ responses to their pilots’ responses, and analyze and compare responses of operators with high fatal accident rates (designated "cases") to operators without high fatal accident rates (designated "controls"). Results indicated that the average case pilot had less career flight experience than the average control pilot and worked 10 hours a week more. Case operators were less likely to consider pilot fatigue a problem when scheduling flights and more likely to depend financially on timely delivery of bypass mail. Case pilots were three times as likely as controls to fly daily into unknown weather conditions. Nearly 90% of the case pilots reported that they never flew when so fatigued that they wanted to decline the flight, compared to 64% of control pilots. The findings suggest that the combination of pilot inexperience and longer work hours and work weeks may contribute to Alaska’s high pilot fatality rate. Results of the operator-pilot comparisons suggest that financial pressures on operators may influence their views on what measures would be effective in preventing crashes. Many of the responses received in these surveys were consistent with the goals of three major, recently-implemented aviation safety programs in Alaska: the Medallion Foundation, the Federal Aviation Administration’s Circle of Safety, and Capstone." - NIOSHTIC-2
Content Notes:G.A.Conway, N.A. Mode, J.C. Manwaring, M. Berman, A. Hill, S. Martin, D.M. Bensyl, and K.A. Moran. Running title: Air transportation safety in Alaska. Includes bibliographical references (p. 29-30).
The average salary for a Process Technician with Chemical Process Engineering skills in South Africa is R259,858.
25th percentile: R74k
Median: R260k
75th percentile: R295k
Salary: R74k - R295k
Total Pay: R74k - R313k
Job Description for Process Technician
A process technician's main duties center around maintaining manufacturing process equipment and ensuring such machinery operates in a clean, organized environment. To do this, process technicians are required to schedule and perform regular maintenance, cleaning, and repairs of all equipment; they also order necessary machine parts and upgrade equipment when necessary to ensure optimal performance. In addition, process technicians perform some administrative tasks, such as documenting all machin…
Process Technician Tasks
- Work to plan, analyze and control the quality of manufacturing processes.
- Provide training to team members on relevant industry processes and equipment operations safety.
- Continuously monitor all operations and processes to prevent and correct equipment malfunctions.
- Monitor equipment and take action in the event of problems.
- Responsible for process technology equipment operation and safety.
Popular Skills for Process Technician

| Skill | Avg. Salary | Popularity |
| --- | --- | --- |
| Process Improvement | R240,376 | 40 responses |
| Troubleshooting | R278,890 | 27 responses |
| (skill name not captured) | | 22 responses |
| (skill name not captured) | | 15 responses |
| Machine Operation | R279,377 | 9 responses |
Skills in Troubleshooting and Machine Operation are correlated with pay that is above average. Skills that pay less than market rate include Quality Assurance / Quality Control, Manufacturing Process Engineering and Process Improvement.
Years of Experience
This data is based on 5 survey responses. | https://www.payscale.com/research/ZA/Job=Process_Technician/Salary/829ce802/Chemical-Process-Engineering |
Once you activate a Survey, you can access user responses and analyze its performance by accessing Stats & Reports from the Survey Card, as shown below.
Here, you will find a detailed engagement report against key performance indicators and user details like Response, Name, Email Address, Location, User ID, Date of submission, and so on.
Let's walk you through each section:
Overview
As highlighted above, this section provides a top-level view of Survey responses against Aggregated Stats like Displayed (Views), Attempted, Submitted and Response Rate. (Detailed read)
- You will also find a brief report on the Latest Responses received on the questionnaire along with the user's name and date-time of submission.
- You can click the Download button placed on the top right to obtain a copy.
Individual Responses
This section provides a detailed report of all the Survey responses for its entire lifetime. As shown above, click on a row to view specific answers and user details. Let's quickly walk you through it:
- User Details: Includes the user's Name (Who), When the response was submitted (Date-time), where it was submitted from (Region), and tech details like Browser, Page URL, and IP Address.
- Client Data: Includes user details related to your WebEngage accounts like their User ID (LUID), Source, and Email Address.
- Responses to Questions by This User
You can choose to delete irrelevant submissions by selecting a row and clicking the Delete icon placed on the top left of List of Responses.
Full Report
This section provides a detailed report of each response over the entire lifetime of your account.
Using Advanced Search, you can filter down the report by:
- A specific time frame
- Response Status: Helps you filter out survey responses that have been Read by an account admin, vs. those that are Unread.
- Response ID: Each response is given a unique ID. Click on it to access details.
- User ID: All users that interact with your platforms are assigned a unique ID in your dashboard.
- Email (Address): Search for a user's messages by adding their email address (found under their User Profile).
- Message: Type in a keyword to filter out all user queries that contain it.
- Country, City
- Browser
- Tags: Tags can be created under Survey Card to manage them better.
- Page URL: Add a link to filter out all messages that have been submitted on the page.
- IP (Address): Specify the IP address to filter out all messages submitted through it.
- Source
Once you have the details you need, click the Download button placed on the table's top right to get a copy.
Schedule Reports
You can easily track a survey's performance by having a report delivered to your inbox! Depending on your needs, you can choose to schedule the following types of reports:
- Excel Report of Individual Responses
- PDF Report of Aggregated Stats
- Direct Email of Individual Responses
Here's how you can go about it:
Step 1: Click the Add a New Report button to get started. (You can also schedule multiple reports of the same type for a survey.)
Step 2: Name your report.
Step 3: Specify recipient email addresses.
Step 4: Add a subject line for the report.
Step 5: Add a message to provide some context for the report.
Step 6: Specify the report frequency by selecting On (Response) Submission / Once / Hourly / Daily / Weekly / Monthly / Quarterly.
Step 7: Specify the time at which the report must be sent.
Step 8 (Optional): Enable the field, Schedule an email even when no response is recorded to receive a notification about the same.
Step 9: Click Save to implement!
As highlighted below, you can choose to Edit, Pause or Delete a Scheduled Report anytime you like.
We hope this has equipped you with a robust understanding of how you can access individual responses and analyze the survey's overall performance. Please feel free to drop in a few lines at [email protected] if you have any queries or feedback. We're always just an email away! | https://knowledgebase.webengage.com/docs/analyzing-survey-responses |
September 2015 Paranal Service Mode User Satisfaction Survey
Typically once per year the User Support Department of ESO launches a Paranal Service Mode User Satisfaction Survey campaign. Until now this was done in the first quarter of the year. Starting in 2015 we decided to switch these campaigns to the third quarter of the year. Hence in 2015 there were two such campaigns. This report details the findings of the September 2015 survey campaign, while previous such reports are found here.
We certainly view these reports as an important way to close the loop with the ESO Community, to thank all respondents, and to demonstrate that such feedback is important to us. To this end, here we provide a summary of the responses received, predominantly in the form of graphs. It should also be stressed that for those cases where respondents did identify themselves and did make specific free-text comments we have contacted them by e-mail to address their particular comments.
Methodology and General Results
The ESO Service Mode Questionnaire is always available on-line for users to fill in but the usual rate of return is less than 2 per month. However, experience shows that a targeted campaign of asking users to fill in the survey results in many more survey completions.
In September 2015, we again took this approach, and asked Principal Investigators (PIs) of Service Mode runs scheduled for Paranal in Periods 95 and/or 96 [1] (plus their then-active Phase 2 delegates) to complete the survey by a fixed deadline. We thus solicited a response from 460 PIs and their then-active Phase 2 delegates (173 individuals). Because of overlap this amounts to a total of 528 individuals, who were contacted via e-mail. A deadline was set for two weeks from the date of contact.
A total of 100 responses were received by the deadline (some 20 of which were not fully complete), representing an 18.9% response rate. This rate is somewhat lower than in previous targeted surveys, perhaps owing to the fact that Period 95 PIs/delegates were targeted both in February 2015 and September 2015. Nevertheless, this survey again yielded healthy feedback that can serve us well.
As a start in detailing the results from the survey, in the figure below we show the number of responses we received per instrument. In spite of the overall good response rate, the large number of instruments offered in Service Mode means that on average we received about 9 responses per instrument.
In the following three pie charts we present a general overview of user satisfaction (in percentage of responses) with three general items:
- the help/advice provided by the User Support Department during the Phase 2 process
- the Phase 2 web documentation, and
- the overall support provided by the User Support Department.
Note that the sum of the responses to the question about one’s satisfaction with the help/advice provided is less than the total number of survey responses. However, there is no reason to expect, a priori, that these two numbers should be equal. This is because the responses to this question come from a subset of respondents (those that actually received help/advice at Phase 2), each of which may have received help/advice from multiple instruments.
Seeking Help, Run Information, and Run Problem Resolution
Amongst the respondents 21% indicated that they had contacted ESO for non-Phase 2 related reasons within the previous 6 months (in February 2015 this number was 20%). Of these 21 respondents, 19 contacted ESO via an e-mail to [email protected] (about the same as in February 2015), with the remainder distributed between other methods (e.g. clicking on ‘Ask for help’ within the ESO User Portal). Below we show the degree to which these respondents were satisfied with various aspects of the resulting exchange with ESO.
Some 58% of the respondents checked on-line for information regarding the progress of their observational programmes. The survey asked those that did check for that information how much they agreed with four statements about that information. The outcome of those questions is shown below.
Finally, we asked if the survey participant was alerted to any problems with their Service Mode observations during the period. For those that said that they had been contacted (31 out of the 100) we then asked to what extent they agreed with two statements describing the contact and the problem resolution. The answers are shown below.
P2PP and Other Observation Preparation Tools
Below, we show details of the feedback received on different aspects of the Phase 2 Proposal Preparation tool (P2PP) and other, instrument-specific, observation preparation tools.
Since the number of responses per observing preparation tool other than P2PP is rather limited (see the table below), any presentation of individual-tool responses on documentation, ease of use, or functionality would suffer from small number statistics. Thus, in the figures below answers for all tools are combined.
| Observing Preparation Tool | Number of Responses |
| --- | --- |
| CalVin | 3 |
| FIMS | 4 |
| FPOSS | 6 |
| GUCT (unified GUideCam Tool) | 5 |
| KARMA | 6 |
| NAOS-PS | 7 |
| PILMOS | 1 |
| SADT | 4 |
| VisCalc | 5 |
| VMMPS | 4 |
Finally, we asked survey participants the question, “How satisfied are you with the ETCs you have used?” The responses are shown below.
Notes:
[1] The total time allocated in Service Mode for Periods 95 and 96 was 11322.9 hours, while the corresponding number for Visitor Mode was 4691.7 hours. Thus, the September 2015 survey targeted PIs (and their then-active delegates) representing 70.7% of the total VLT/VLTI time allocation, including all public surveys.
Concern is growing about the impact of microplastics on the marine and freshwater ecosystem, and in particular the effect on the food chain. In addition to leaching harmful additives, microplastics generate a surface biofilm that concentrates organic pollutants such as pesticides and attracts potentially harmful bacteria. This biota is consumed by marine organisms with potentially harmful consequences for the organisms themselves, the food chain, and human health. Issues such as how microplastics affect reproduction, growth, and productivity of bio-organisms, and how changes in immune response can lead to increased susceptibility to infection, are critically important, as they can lead to disruption in the food chain. In terms of physical harm, there is evidence of tiny particles penetrating into tissue, or into the brain, causing behavioral changes. Such research demands cellular imaging and assay technologies, gene sequencing, and proteomic tools.
Typical applications include:
For research use only. Not for use in diagnostic procedures.
Air pollution has always been a big concern for environmental managers. They closely study the factors causing it and the effects it has on cities. The biggest victims of air pollution are densely populated cities. Cities like Delhi in India are heavily populated and have the highest rates of air pollution. Students of environmental management are expected to write dissertations on such topics. These students find it hard to piece together information to write a quality dissertation, which is why we have gathered the best dissertation writers UK to help you.
There are around 28 megacities in the world. These cities are the biggest contributors to air pollution. After urbanization, people in large numbers migrated to these big cities in search of jobs and better lifestyles, making them densely populated. In these cities, you can find industrial areas where factories are set up and release pollutants into the air. In addition, since these cities have a high population, they also have a great number of vehicles. People move from one place to another in personal vehicles or on public transport; either way, to accommodate a large population there are lots of vehicles on the road, adding more to the air pollution.
These cities are not only polluted themselves but also pollute areas within a 200-kilometre radius. Here are 10 effects of air pollution that affect not only the people but the environment as a whole.
1. Global Warming
For those who don't know, global warming happens when harmful pollutants like carbon dioxide and greenhouse gasses collect in the atmosphere and absorb sunlight. This traps heat on Earth, making temperatures more extreme than usual. Air pollution is the culprit behind global warming. Global warming in turn affects people in different ways. If you live near the sea or river beds, you can experience flooding. Extreme temperatures can have detrimental effects on crops, causing food shortages. Many animals are on the brink of extinction because of the high temperatures, all caused by global warming.
2. Climate Change
Even though climate change is an effect of global warming, its graveness means it deserves a separate discussion. When temperatures peak, a change of climate becomes inevitable. Climate change can be very damaging to people as it comes with a plethora of problems such as natural calamities, diseases and other technology-related issues. Air pollution increases global warming, which in turn changes the climate. It is very difficult for people to cope with fast-changing climates.
3. Smog
Smog is a hazardous combination of smoke and fog. It forms when there is a lot of industrial air pollution in a particular area. Smog is very dangerous for the health of people living in that area, and it can harm the environment not only of that area but of the areas around it. In many instances, smog has also caused great damage to infrastructure. During smog, you can experience problems such as asthma attacks, irritation in the eyes and severe coughing fits. In the longer run, if you are exposed to smog for a prolonged time, you can severely damage your lungs.
4. Acid Rain
Acid rain is a very strange phenomenon: it happens when pollutants rise and combine with water droplets and oxygen to form harmful substances such as sulfuric acid and nitric acid. The worst effects of acid rain are faced by animals and forests, which suffer great losses in terms of lost nutrients and other problems. Marine life and wild animals also suffer greatly. Fish die due to the pH imbalances they face after acid rain.
5. Respiratory Problems
Air pollution is directly responsible for respiratory issues in many people. You can face minor issues such as shortness of breath or allergies, and these can also lead to severe problems such as asthma and lung cancer. Research has also shown that if children are exposed to air pollutants for a prolonged time, it may hinder the development of their lungs. If there are harmful chemicals in the air, you can also get severely ill, which can turn fatal. Moreover, there are instances where the pollutants cause oxygen levels to drop, causing great discomfort to asthmatic patients.
6. Erosion Of Buildings
With the passage of time and urbanization, our air quality has dropped. In addition to the harmful effects on human life, this has also impacted buildings and other infrastructure. Ancient buildings made with limestone are easily corroded by acid rain. We lose precious monuments because of poor air quality. Acid rain also washes away the paint from homes, making repairs difficult and expensive.
7. Allergies
Since the air we breathe and live in is full of pollutants, many people develop allergies in response to these harmful chemicals. These allergies can be really bothersome, as they can escalate and cause more and more problems. Moreover, it is very difficult to pinpoint what exactly is causing the allergic reaction. The condition can get worse before you can find out what's causing it.
8. Skin Diseases
Our skin is the organ most directly exposed to the air. In polluted air, your skin comes into contact with several chemicals which can damage it badly. The extent of the damage can be extreme, which means you can even get cancer. Less intense impacts can include discolouration of the skin, premature signs of ageing and much more. You must at all costs take care of your skin so that you can avoid the harmful effects of air pollution; do a deep cleanse every now and then.
9. Birth Defect
A lot of new research has come up with the discovery that bad air quality can result in birth defects and fatalities. Researchers have studied the correlation between the two and have concluded that many pregnancies that ended in fatalities did so mostly because of poor air quality. Furthermore, it was also discovered that children who are exposed to air pollution may lose around 20 months of life expectancy. There are still many areas of research in progress. The initial discovery has left the world shocked and worried.
10. Vegetation
Air pollution is found to be very damaging to vegetation. It can hinder the production of new crops and can even damage crops that are ready to harvest. Acid rain and climate change can interfere with the growth of crops and ruin fields. When water from acid rain seeps into the soil, plants die as they are not able to reach the required nutrients and instead take in harmful substances.
Hopefully our dissertation help UK has done a good job of helping you with your dissertation, and the above-mentioned information is sufficient to kick-start your research in the right direction.
According to the Joint Group of Experts on the Scientific Aspects of Marine Pollution (GESAMP) of 1972, marine pollution is "the introduction by man, directly or indirectly, of substances or energy to the marine environment resulting in deleterious effects such as harm to living resources, hazards to human health, hindrance of marine activities including fishing, impairing the quality for use of sea water, and reduction of amenities" (Clark 3). Since the beginning of modern civilization, man has continuously polluted the oceans. As more and more pollution entered the Earth's oceans and problems became evident, man has been given the obligation to prevent further damage. Sewage, marine debris, toxic chemicals, heavy metals, oil, and radioactive materials constitute six major categories of marine pollutants that mankind needs to prevent from entering the world's oceans.
Sewage has a short lifespan; it decomposes due to microscopic organisms like bacteria and fungi. However, in the respiration process microorganisms consume oxygen, creating "biological oxygen demand" (Johnston 56). If there is too little oxygen in the water to support the biological oxygen demand of these biodegraders, they die and so do all the plants and animals that depend on them for food. When raw sewage is dumped into an area without strong currents to disperse it, the oxygen is likely to be used up. When this happens, the only form of decomposition that can take place is anaerobic, which takes a very long time. This process is called eutrophication (Clark 5). When nutrient-rich sewage enters the ocean, an extreme overgrowth of toxic phytoplankton can occur. This process causes what are known as red tides, which kill many forms of marine life.
Marine debris is another form of pollution that is a major threat to the earth's oceans. Marine debris consists of discarded plastic, glass, and metal that does not easily decompose. Some debris, such as abandoned ships and old cars that sink, attracts fish because it forms artificial reefs. Some artificial reefs have been purposely made by humans out of sunken marine debris for the sole purpose of providing sea life with an ecosystem. The pollution problem is more centered around floating marine debris such as plastic. Because plastic floats, it constitutes a threat to sea birds and mammals, who either eat it or become entangled in it. Each year 30,000 northern fur seals as well as hundreds of thousands of other marine animals die due to being entangled in discarded plastic (Johnston 63). These plastics, when eaten, can lodge in the intestines and stomach, blocking the digestive tract and causing malnutrition and death. Not only does marine debris affect sea life, but humans as well. Marine debris interferes with ship navigation and litters beaches along coastal water.
Toxic chemicals are extremely hazardous to the oceans. Three of the most deadly chemicals are constantly running off land into water. These chemicals (DDT, PCBs, and dioxins) belong to a family of industrial and agricultural chemicals that do not break down easily in our environment. When these chemicals enter the ocean they are first absorbed by phytoplankton and zooplankton. These plankton are an important part of a fish's diet; therefore, fish eat them and the toxic chemicals are stored in the fish's fatty tissue. Because fish are low on the food chain, each time the toxic chemicals are passed on from predator to predator the concentration gets higher. High concentrations of these chemicals cause premature birth, birth defects, nerve damage, and learning disabilities, and remain in the fatty tissue of marine animals.
Another major pollutant found in our oceans is heavy metals. Heavy metals are dense elements such as lead, mercury, and cadmium. Each of these elements has a different and harmful effect on sea life. Lead can make fish toxic for consumption and create more lead in the food chain. In animals, lead causes delayed development of offspring, nervous system disorders, and learning disabilities. Mercury is very toxic, even in low concentrations. It disrupts the central nervous system functions in animals. Mercury is extremely dangerous because mixed with other pollutants, the toxicity can be very deadly. Cadmium becomes toxic to sea animals by eventually replacing the calcium, thus making bones fragile and easily broken.
Oil as ocean pollution has been a major concern for decades. When an oil tanker explodes, the oil that pours out into the water creates highly concentrated "toxic pockets" (Whitham 100). Since oil is lighter than water, it generally floats on the surface. The oil soaks up oxygen, blocks sunlight, and smothers plankton. Oil destroys the natural water resistance of sea bird feathers, causing them to be unable to fly and to drown. Marine animals swallow the oil, causing intestinal problems and liver failure. Oil contamination in the ocean creates a long cycle of destruction: it clogs the gills of fish, taints shellfish, destroys natural coral reefs, and kills marine larvae (Whitham 100).
The ocean is also polluted by radioactive materials. Once radioactive wastes enter an ocean they cannot be removed or cleaned up, but are diluted into the large body of water. The plants and animals in the ocean develop a concentration of the radioactive material within them. This concentration can be harmless to some animals, but lethal to the plant's or animal's predators. Animals contaminated by radiation that survive pass radionuclides in greater concentration along the food chain. Humans are the most affected by radiation; therefore, when nuclear waste enters the ocean humans are greatly affected through rainfall and the food chain (Seymour 22).
Pollution of the earth's oceans has become a real problem. There are six major pollutants, including sewage, marine debris, toxic chemicals, heavy metals, oil, and radioactive materials. These pollutants need to be regulated and prevented from entering the oceans. As population and consumer societies grow, pollution will become an even greater problem. The effects of each of the six major pollutants are evident; therefore, mankind needs to take preventative steps to save our "Mother Ocean".
Clark, R.B. Marine Pollution. 2nd ed. Oxford/New York: Clarendon Press, 1989.
Johnston, R. Marine Pollution. New York, NY: Academic Press, 1976.
Seymour, Allyn H. Radioactivity in the Marine Environment. Washington, D.C.: National Academy of Sciences, 1971.
Whitham, B.T., A.B. Harvey, D.F. Duckworth, et al. Marine Pollution by Oil. Institute of Petroleum Analysis Committee, Great Britain, 1974.
A deep dive into the eutrophication of Lake Merced
Preparing for dragon boat practice, senior Priti Lama begins paddling into the heart of Lake Merced, ignoring the usual stench emitting from the putrid lake. She’s become accustomed to it after practicing here every day for the last few years. As her eyes adjust to the glare of the water, they rest on a dead fish, floating an alarmingly short distance away from her. She cringes in disgust and paddles away in the opposite direction hoping to avoid physical contact.
Lake Merced didn’t always elicit these negative reactions.
Environmental problems took a local turn when the once beautiful and clear lake began the process of cultural eutrophication. Eutrophication happens when the excessive presence of nutrients in a body of water result in algae growth and depleted oxygen levels. Cultural eutrophication is eutrophication caused by human activity. Due to certain anthropogenic barriers and the addition of nutrients, Lake Merced is currently experiencing the consequences of cultural eutrophication, which has caused everything from the green appearance to the reduced biodiversity of the lake. Although not yet fully eutrophied, the process has started, affecting hundreds of Lowell students who, in a normal year, use the lake on a daily basis. If action is not taken soon, the damage caused could be nearly impossible to reverse.
According to AP Environmental Science teacher Katherine Melvin, the main factor contributing to the start of the eutrophication process is the fact that Lake Merced is surrounded by concrete streets and hard surfaces, causing its watershed to be reduced. A watershed is an area of land that drains water from all different sources into a main body of water. The process of constant adding and draining of water allows the water to be recycled. With a shrunken watershed, however, not enough “new” water is getting to the lake. This creates an abundance of issues, including the body of water becoming shallow and the lake’s nutrients and pollutants can’t get cycled out. The additional nutrients and pollutants contribute to the odor and the increasing number of dead animals in the lake.
The problems caused by the reduced watershed are only compounded by the fact that excess nutrients are getting into the lake from outside sources. In 2018, a water quality report was done for Lake Merced, in which the levels of phosphorus, nitrogen, lead, and ammonium were all reported to have noticeably increased since 2009. This nutrient pollution is not cycled out of the lake because of its shrunken watershed and is causing the rapid development of algae in Lake Merced. This is because the nutrients act as fertilizer for the algae.
Algae growth not only destroys the clear beauty of the lake, causing the water to appear green in color, but also creates more serious issues. This algae growth causes a decrease in levels of dissolved oxygen, harming organisms that use Lake Merced as their habitat. Although algae produce oxygen, excess algal growth and eutrophication reduce the light that reaches algae below the surface and therefore reduce photosynthesis. As a result, the algae can't produce enough oxygen to survive and are eventually decomposed by bacteria that further deplete oxygen levels, creating a positive feedback loop. The lack of oxygen makes it difficult for the animals living in the lake to survive. As more vulnerable species die, the food chain is thrown out of balance and the lake's biodiversity decreases.
The source of the excess nutrients that are causing this harmful algae growth is debated. According to Melvin, the nutrients are likely getting into the lake in the form of fertilizers that the nearby golf courses use for the upkeep of their grass. The golf courses, however, have claimed that they are very conscious of avoiding putting fertilizer near the lake. If their claim is true, fertilizer must be getting into the lake from a distinct source.
Regardless of the exact source of the trouble, the effects of the eutrophication of Lake Merced are already affecting Lowell students. As a Lowell student who has spent four seasons practicing on Lake Merced for dragon boat, Lama has had more than a few experiences finding dead animals floating aimlessly on the surface. “One time I was paddling and there was a dead rat and we had to paddle off really quickly because it was terrifying,” said Lama. According to her, the amount of algae in Lake Merced is visibly unhealthy, and she recalls seeing a sign warning individuals that because of the increasing amount of algae, it was unhealthy to fish in Lake Merced and that certain parts of the lake should be avoided. She believes that Lake Merced is in very poor conditions and that, considering the large number of people who use it, it should be cared for more.
Senior Isabel Do, another Lowell student and former dragon boat athlete, used Lake Merced at least three times a week while on the dragon boat team and clearly remembers her experiences practicing on the polluted lake. “Because [dragon boat] is a water sport, you get the water all over your body,” said Do. “Your hands get wet, and when they dry they smell really gross, like sewage in a way. And if you don’t wash your clothes right afterwards, then they keep the smell after they dry and the smell sinks [in] for days.” She dislikes the odor that the lake emits, how green the water is, and overall how much pollution there is in the surroundings and in the lake.
Solutions are already being worked on to reverse the effects of eutrophication. Bubbling, a technique that has aerators pump oxygen into the lake, is being used in the southern part of Lake Merced to increase the amount of dissolved oxygen. Aerators were installed there and have caused that area of the lake to be clearer and have better biodiversity. According to Melvin, if these aerators were placed in more areas of the lake and more frequently, the amount of dissolved oxygen reaching organisms would increase dramatically, solving the biodiversity problems caused by eutrophication.
Melvin also has some ideas for taking out the root cause. She thinks barriers or walls around the golf courses could ensure that fertilizers don’t get into the lake and thus prevent excess nutrients causing uncontrolled algae growth. Additionally, diverting filtered sewer water back into the lake to give Lake Merced another water source besides the rain that collects into it, could solve the issue of the lake’s small watershed, according to Melvin. However, she believes this project will not happen because it would cost a huge amount of money.
Although Lake Merced continues to become eutrophied and drastic action has not yet been taken, experts in the eutrophication process like Melvin believe it is not too late to reverse the process. Executing projects to reverse the start of eutrophication would not be easy or cheap, but Melvin is optimistic that, if such methods or other solutions are implemented, we can at least begin to reverse the damage caused by human activity before Lake Merced is completely or irreversibly eutrophied. “Almost every school uses [the lake] and it needs to be fixed,” Lama said. | http://thelowell.org/9138/features/a-deep-dive-into-the-eutrophication-of-lake-merced/ |
Acid rain is any form of precipitation that consists of high levels of acidic compounds, with a pH of 5.5 or lower. Acid rain forms when nitrogen oxides and sulfur dioxide are released into the atmosphere, where they react with water to form nitric and sulfuric acids (1). These oxides can rise to very high levels and disperse over long distances due to their ability to dissolve easily in water, which also makes them easy for the wind to carry. They then combine with other chemicals, including oxygen and water, resulting in the formation of acidic substances, most of which are environmental pollutants (2).
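To make the chemistry explicit (as a textbook-level simplification rather than a full atmospheric-chemistry mechanism), the reactions usually cited are: sulfur dioxide is oxidized and then hydrated, 2SO2 + O2 → 2SO3 followed by SO3 + H2O → H2SO4 (sulfuric acid), while nitrogen dioxide reacts with water as 3NO2 + H2O → 2HNO3 + NO (nitric acid).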
Over the past years, human activities have been a major cause of acid rain. Such activities have released various chemicals into the atmosphere that have caused diverse changes to the composition of acid rain (2). The majority of these chemicals result from the burning of fossil fuels, especially in the generation of energy (1). Charcoal burning and volcanic eruptions have also contributed a great deal to the formation of acid rain due to the release of various oxides during these processes.
Other sources of nitrogen and sulfur oxides, which are the chief causes of acid rain, are exhausts from automobiles. In other cases, normal precipitation may fall on surfaces such as water sources (including lakes and streams), soil or bedrock that are already so acidic that they cannot neutralize the precipitation. Such conditions give normal rain the character of acid rain (3).
Acid rain is better described as acid deposition, which may be either wet or dry. Wet deposition describes all forms of precipitation that are acidic, including rain, fog or even snow. They all have adverse effects on both plants and animals, especially those that depend on water. The extent of these effects depends largely on the amount of acid present in the water (2). It is also determined by the chemical composition of the soil and that of the other living things involved.
Dry deposition describes the return of acidity to the surface through the release of acidic gases and particles. When these particles and gases are blown by the wind, they find their way onto buildings and other surfaces. During rainy seasons, these particles may be washed off by normal rain water, changing it into acid rain (1). Alternatively, the particles may be washed off by rain that is already acidic, making the situation even worse.
Effects of acid rain
Acidic rain contaminates the environment and makes it difficult for organisms to survive. In most cases, organisms experience retarded growth; in other cases, they may die due to the harsh living conditions.
The most felt effects of acid rain are on aquatic environments such as lakes and streams. In these ecosystems, acidic water from acidic rain pollutes the water that aquatic organisms depend on for survival. This may lead to death of these organisms and thus biological diversity in these aquatic environments is affected. Though most water environments have a neutral pH, some are normally acidic without the effect of acid rain. However, living organisms that live in the latter ecosystems are adapted to the particular level of acidity that is characteristic of their ecosystem. Any significant changes to the pH of these ecosystems may be disastrous to the living organisms living in them. This is particularly the case in the event the ecosystem is contaminated by acidic rain which lowers its pH making the environment unfavorable for the organisms (2).
Acidic rain has devastating effects on most ecosystems on the earth’s surface. As it has been stated in the discussion above, water-based ecosystems are the most vulnerable to acidic rain due to their ability to dissolve the rain quickly in them. Additionally, among the water-based ecosystems, the most affected are those that lack buffering capacity which is the ability to neutralize acidic substances. As a result of this inability, the pH value of the water decreases and thus the water becomes acidic.
This is normally the case in aquatic environments whose pH is already low before the acidic rain contaminates them. Since their pH is already low, they are unable to neutralize the acidic rain and thus the water gets more acidic endangering the lives of organisms in the ecosystem. Contrarily, an aquatic environment may be having a substantially high pH before the occurrence of the acidic rain. This gives it buffering ability and thus the effects of the acidic rain on the organisms in the ecosystem will be mitigated.
The acidity that acidic rain gives to aquatic environments causes the discharge of aluminum from soil into the water. Being highly toxic, aluminum causes harmful effects on aquatic animals and may even kill them. In other cases, the aquatic animals may not be killed instantly; instead, the high level of aluminum combines with the low pH, causing a lot of stress to the animals (1). This leads to a decrease in the body mass and size of fish as well as in their ability to compete for food, oxygen and habitat.
However, some plant and animal types have the ability to withstand acidic water. Those that are sensitive to acidic conditions are affected as the pH level decreases. The younger individuals are the most affected, lowering the chances of fish eggs hatching. The change in pH levels also leads to a lack of adequate calcium in fish, which may cause deformation of their bones and weak spines (2).
Researchers have confirmed that acid rain has, over the years, had negative effects on forests. The most common effect is that forests exposed to acid rain show characteristically slow growth. In other cases, forests may be so badly affected by acid rain that vegetation dries out or rots. Forests are also affected indirectly by acid rain through the degradation of soil when acid rain water dissolves into it. Acid rain percolating through the soil dissolves and carries away nutrients that the trees require for growth, so these nutrients become unavailable to them. Forest trees are affected directly by acid rain through damage to the leaves, which in most cases turn brown. This results in an inadequate nutrient supply to the trees, since the synthesis of organic materials takes place in the leaves through photosynthesis (2).
As a result, the plant becomes weak and more susceptible to bad weather and insects. The trees ultimately get infected with diseases and die. Toxic substances may also be released into the soil, which combine with the acidic conditions to harm the trees. However, limestone may occur naturally in the forest floor and, due to its alkaline nature, it neutralizes the acidity caused by acid rain. In this case, trees will be normal, as if the rain that has fallen were not acidic.
A lot of damage in the automotive industry has been reported to occur as a result of air pollution, which has increased over the years due to various human activities. Acid rain has been implicated as a major pollutant that causes permanent damage to different types of automobiles. It is actually the evaporation of the acid water that causes damage to automotive coatings; acid deposits also accelerate the rusting of automobiles. Damaged coatings have dark colored patches which are irregular in shape.
Metals such as bronze, a tough material, are corroded by acid rain water, especially by dry deposition. Different types of stone, including limestone, have been deteriorated by the effect of acid water. The overall result is a reduction in the value of buildings and other objects that are constructed using these kinds of materials. When dry depositions fall on structures and other surfaces, they dirty them, and this calls for higher maintenance expenses (3).
Acid rain affects human health in different ways. Emissions of sulfur and nitrogen oxides to the atmosphere have contributed largely to problems related to vision. In most cases they result in short-sightedness and unclear vision. Dry airborne depositions may damage the respiratory system, and this may result in dysfunction of the lungs, especially for those with breathing difficulties. When these pollutants, which occur in the form of gases, are inhaled, they may cause the development of asthma or bronchitis and other respiratory diseases. In other cases, emissions of sulfur and nitrogen oxides may irritate the eyes and nostrils.
Death may even occur in cases of severe asthma. Leakage of toxic compounds ends up contaminating drinking water sources and can cause vomiting or poisoning from the metals present in the water. Edible fish usually get contaminated, especially by aluminum and mercury. Consumption of such fish leads to poisoning and to nerve or brain damage due to mercury accumulation. This effect is irreversible and may result in death (2).
Acidic rain has had damaging effects on infrastructure. As stated it damages automobiles, buildings and paints. In addition to the stated materials, acidic rain also damages cultural heritage like sculptures and statues that are made of materials that can be affected by acidic conditions. To illustrate the effect that acidic rain has on non-aquatic organisms, consider a study that was carried out on birds that nest on trees.
This study showed that birds in the Netherlands were seriously affected by acidic rain during reproduction. Close to half of the eggs laid by the birds were observed to be defective. The eggs were characterized by poor quality, with weak shells and below-average weight (2). It is thus evident that acidic rain affects both plants and animals badly, and it is in the interest of every one of us to try his/her best to find a solution to the problem.
Ways of reducing acid rain
Cleaning up exhaust pipes
The burning of fossil fuels such as coal is a major contributor to acid rain. Other sources of energy associated with the same problem are natural gas and oil. The burning of these fuels emits nitrogen oxides and sulfur dioxide, which react with air and other compounds to form acid rain (1). The effects of acid rain can be reduced by burning coal that contains small amounts of sulfur. Coal may also be washed before burning, and scrubbers can remove sulfur dioxide from the flue gases by chemical processes. Other devices, similar to catalytic converters, may be used to reduce the emissions of nitrogen oxides from automobiles (2).
Use of alternative sources of energy
Nuclear and hydropower energy can be used instead of energy from coal. Alternatively, wind, geothermal and solar energy can also be used because they are cheaper and eco-friendly. This ensures that less acidic gases are emitted to the atmosphere hence reducing levels of acidic rain (2).
Conservation of energy
The levels of acidic rain can also be reduced by proper use of energy at home or at the work place. People should, thus, make an effort to share automobiles when they are visiting the same place in order to reduce acidic-gas emissions and save energy (3). We can also conserve household energy and thus less energy will be generated which, in turn, will reduce the levels of acidic gases in the atmosphere. This can be done by employing basic energy conservation techniques such as ensuring that the water heater in the bathroom is properly insulated and turning off electronic equipment after using them to consume less electricity (2).
Conclusion
As discussed, acid rain has adverse effects on our lives. In order to curb these effects, we need to be informed about the causes and effects of acid deposition, especially the human activities that contribute to it. There is also a need to understand the environmental changes brought about by these acidic pollutants in order to come up with reliable solutions. Beyond a lack of knowledge, some people are aware of the facts about acid rain but simply ignore them.
Acid depositions, both wet and dry, sink deep into objects and soil. It is, therefore, essential for people to shun ignorance and engage in activities that can restore the damaged environment. The best method of restoring such an environment is liming. Additionally, affected objects should be washed to remove sulfur compounds in order to prevent the buildup of pollutants. Liming involves adding limestone to acidic lakes to neutralize the acidity. The advantage of lime is that it does not kill fish and thus allows the natural inhabitants to survive in the water. It is also wise to add the lime repeatedly to prevent the water from returning to its initial acidic condition.
As efforts are made to reduce the effects of acid deposition, scientists should assess the progress of the process to ensure that a sustainable solution is achieved. Individuals should actively take part by using energy properly and making use of alternative sources of energy that do not harm the environment. Those who burn coal should make sure that they use coal that contains low levels of sulfur. It is also advisable for people to carpool. Maintenance of all vehicles is necessary, as well as buying vehicles that emit low levels of nitrogen and sulfur oxides.
Bibliography
Environmental Protection Agency. 2010. Web.
Park, Chris. Acid Rain: Rhetoric and Reality. Barnes & Noble, California, 1997.
National Geographic Society. Web.
As an important means to prevent and control sewage, water quality monitoring has been paid more and more attention by the national environmental protection department.
Chemical Oxygen Demand (COD) is one of the most important indicators. COD directly reflects the extent to which water resources are being degraded by reducing (oxidizable) pollutants.
If the COD value of the water is high, the pollutants will consume more of the dissolved oxygen in the water, leaving the water oxygen-starved and causing large numbers of organisms to die from lack of oxygen.
At the same time, anaerobic bacteria multiply in the water, accelerating the deterioration of water quality.
This is an extract from dissertation on Biomagnification:
Every year millions of people get cancer and it’s a major problem in the United States that many people do not like to think about because of how it could impact their lives! Some of these cancers can be linked to a process known as biomagnification, or biological magnification. Biomagnification is a result of bioaccumulation and they are related, yet very different (McGrath 78). These two terms, biomagnification and bioaccumulation, are processes that are causing more damage to the environment than most people think. There are many pollutants that affect these processes and they are commonly dumped into waterways or released into the air. Some of these common pollutants are DDT, PCBs, Polynuclear Aromatic Hydrocarbons (PAHs), lead, and mercury. These pollutants are catastrophic for Mother Nature, as well as humans. Biomagnification is a serious dilemma for life on Earth because it only takes a few pollutants to eventually concentrate to very high levels of toxicity in animals, including humans, after they have consumed lower-level organisms that have been previously contaminated with harmful pollutants.
There are many ways that contaminants can enter ecosystems on land, in water, and in air. Contaminants come from many sources like rain, pesticides, atmospheric deposition with rainfall, and discharges to water, usually by larger corporations trying to save money (McGrath 78). Every year, millions of gallons of pollutants are dumped into waterways all over the world, both legally and illegally (Weiss 35). Once the pollutant enters the waterway it will persist in the environment because of the water cycle. Many people do not know just how dire certain pollution is and they should be educated on how to prevent it for the sake of the Earth. Toxic materials, like cyanide and selenium, do nothing but harm our ecosystems and our way of life. They bio-accumulate, or build up, in smaller organisms and can concentrate in a more harmful process, known as biomagnification. The reason why these contaminants can build up is because they can’t be excreted or metabolized easily by most organisms and therefore, they are accumulated in high levels (McGrath 78). Moreover, almost all synthetic chemicals and radioactive materials can’t be diluted or broken down by natural processes (McGrath 85). Bioaccumulation refers to how pollutants enter a food chain, whereas biomagnification refers to how pollutants tend to concentrate as they move from one level of the food chain to the next (Bioaccumulation). Even though there are definite differences in these processes, they will always be linked by the destructive nature of menacing pollutants.
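To make the arithmetic of biomagnification concrete, consider a purely hypothetical illustration (the figures below are invented for clarity and are not taken from the studies cited in this extract). Suppose a persistent, fat-soluble pollutant is retained almost completely by each organism that ingests it, and suppose each predator must eat roughly ten times its own body mass in prey over its lifetime. If plankton carry about 0.01 ppm of the pollutant, small fish feeding on that plankton would accumulate on the order of 0.1 ppm, larger predatory fish about 1 ppm, and fish-eating birds about 10 ppm. That is a thousand-fold magnification across four trophic levels, even though the concentration at the bottom of the chain looked harmless.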
One such pollutant is the iniquitous DDT, standing for dichlorodiphenyltrichloroethane, which is a chlorinated hydrocarbon that has widely been used as an insecticide. Chlorinated hydrocarbons, like DDT, are a diverse group of synthetic compounds of carbon, hydrogen, and chlorine, which are used mainly in pesticides (McGrath 204). In World War II, DDT was a valuable insecticide used by the United States Armed Forces; it was sprayed on clothing and dusted on the bodies of soldiers, refugees, and prisoners to kill body lice and other irritating insects (Enger 240). It was important in this respect because it helped maintain the health of millions of people by preventing typhus fever, and it was a valuable tool also because it aided tremendously in achieving victories (Leary 1). Around the Second World War, DDT production in America increased from zero to three million pounds per month in less than two years (Leary 2). The greatest use in the United States was seventy-nine million pounds in 1959, and its maximum annual production was close to two hundred million pounds in 1964. Its peak production was in 1970, when almost three hundred and eighty million pounds were manufactured globally (McGrath 205). DDT has a major effect on nature and it needs to be completely eradicated.
DDT’s effect on the food chain is immense because it’s soluble in fat and it’s an exceptionally stable compound. In addition, once it is present, it remains effective and stays around for a long time (Enger 241). Because DDT is soluble in fat, it can easily be stored in the fat deposits of insects and small organisms where it can bio-accumulate. Cox, an expert on DDT states, “Ten milligrams of DDT is about the size of the sharpened tip of a pencil and is the minimum concentration to be considered effective enough to bio-accumulate sufficiently to be lethal,” (Leary 2). Its use was originally very effective; however, as time passed, many species of insects formed an immunity to it and it began to kill numerous beneficial species of insects as well. Instead of just harming the pest species, it bio-accumulated in smaller organisms, bio-magnified throughout the food chain, and began to kill larger, non-target species as well, such as predatory birds (Enger 241). A possible reason why many species of insects were able to form an immunity to DDT is because the more complex an ecosystem is, the more likely it is to recover after being harmed or disturbed (Enger 240). The insect community is definitely complex and it’s true that communities that have a wide variety of organisms and a high level of interaction are more resistant to disturbances, such as pollution, than those with fewer organisms and little interaction (McGrath 85). DDT should have been researched more and never should have been manufactured, at least not on a global scale.
DDT was first synthesized in 1874 (McGrath 204). Paul Muller, a Swiss scientist, discovered its insecticidal qualities sixty-five years later in 1939. He later won the Nobel Prize for his research on the uses of DDT in medicine in 1948 (McGrath 205). The use of DDT has been especially important in reducing the number of incidents of malaria, which has always been a detrimental disease in warmer climates. On the other hand, it still retains more damaging effects than good ones, such as accumulating in water and air ecosystems (McGrath 204). Bioaccumulation occurs easily with DDT and other chemicals because it is long-lived, mobile, fat soluble, and biologically active (Bioaccumulation). Biomagnification of any chemical wouldn't be possible without the bioaccumulation of that chemical, first.
An example of biomagnification that resulted from bioaccumulation occurred with DDT. Studies showed that eagles, falcons, pelicans, and other predatory birds were affected with such high levels that their eggshells became so thin that they broke almost instantly when the mother sat on top of them (McGrath 79). Since these birds are at the higher end of the food chain they were affected with high concentrations of DDT. This is because they would eat a lot of contaminated fish that started out with hardly any DDT in them at all, and over time, it bio-accumulated. These birds also had a large population decrease due to a decrease in clutch size, the number of eggs laid. There was also pathological parental behavior and the male birds became feminine-like because the excess DDT acted as an ‘estrogen’ chemical in their bodies. Once scientists knew what was causing the population decrease they banned DDT.
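To make the arithmetic of biomagnification concrete, here is a minimal sketch, in Python, of how a pollutant can multiply across trophic levels. The starting water concentration and the per-level magnification factors are illustrative assumptions chosen to echo the classic DDT example, not measured values from the studies cited above.

```python
# Minimal sketch of biomagnification up a food chain.
# The starting concentration and magnification factors are assumed,
# illustrative values -- not data from the cited studies.

water_concentration_ppm = 0.000003   # assumed DDT level in water (ppm)

# (organism, assumed magnification factor over the level below it)
food_chain = [
    ("plankton", 800),
    ("small fish", 30),
    ("large fish", 15),
    ("predatory bird", 20),
]

concentration = water_concentration_ppm
print(f"water: {concentration:.6f} ppm")
for organism, factor in food_chain:
    concentration *= factor          # fat-soluble pollutant is retained, not excreted
    print(f"{organism}: {concentration:.4f} ppm")

# A vanishingly small concentration in water ends up thousands of times
# higher in top predators such as eagles and pelicans.
```

Run as written, the bird in this toy food chain ends up at roughly 20 ppm, several million times the assumed water concentration, which is the pattern the eggshell-thinning studies describe.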
Credit for sounding the warning about DDT and biomagnification usually goes to Rachel Carson, who wrote Silent Spring, published in 1962 (Bioaccumulation). According to Leary, research efforts on DDT approached the scale of the spending on the atomic bomb, and there were even plans to put it in paints, soap powders, floor waxes, and other articles (Leary 3). Even though it is still manufactured and used in many parts of the world, including Mexico, DDT was banned in the United States in 1972. Since then, there has been a dramatic increase in the population of several predatory bird species (Enger 240). DDT is not the only harmful pollutant because, in actuality, there are hundreds.
In the past few years polychlorinated biphenyls have been creating a large threat to some animal communities, especially those near the North Pole. Polychlorinated biphenyls (PCBs) contain over two hundred and nine individual compounds with varying toxicities (McGrath 129). Also, there are around two hundred known compounds that qualify as PCBs (Amdur 49). Ironically, PCBs weren’t supposed to be released into the environment because scientists knew what damage DDT did and they did not wish to repeat that scenario with even more threatening synthetic compounds (Polychlorinated 363). Yet today, PCBs are still widespread throughout the environment. They exist worldwide in numerous places such as water, ocean sediments, and especially fish (McGrath 130). There are some major problems now because of the hasty and careless polluting of PCBs into the environment and it has been going on for almost seventy years.
Since its production began in the 1930s, one and a half billion pounds of PCBs have been manufactured in the United States alone. Large quantities of these chemicals have found their way into the air, waterways, river sediment, and even certain foods, like fish and poultry (Polychlorinated 364). Like DDT, they are very stable in the environment and persist for many years. Throughout many food webs they are continuously recycled again and again. A number of PCBs were used in many common products: coolants, lubricants, hydraulic fluids, inks, plastics, tapes, paints, glues, waxes, and polishes. However, the major use for these compounds was in insulators for electrical transformers (Bioaccumulation). The reason is that PCB mixtures are colorless, viscous fluids that are relatively inert chemically and nearly insoluble in water. These properties make them resistant to heat, light, fire, and particularly electricity (Amdur 49).
Scrapped electrical transformers create major hazardous waste problems because PCBs leak into nearby waterways (Weiss 31). To this day, more than one billion pounds of PCB fluids remain in transformers and other electrical equipment (McGrath 206). In 1970, its production was limited to cases where satisfactory substitutes weren’t available and in October of 1977, its manufacture was halted (McGrath 206). However, since April of 1979, when the EPA banned its manufacture, there have been substitutes found for almost all of its former uses (Enger 241). With all this excess of PCBs, scientists needed to find a way to not only substitute, but to entirely purge them from the face of the Earth.
Substituting PCBs does not remove the threat of biomagnification, and scientists know this; they are trying to come up with ways to eliminate PCBs completely. Three years after PCBs were banned, an incineration ship named “Vulcanus” was allowed by the EPA to incinerate over three and a half million gallons of PCBs. Waste Management Incorporated, of Oak Brook, Illinois, built this ship and equipped it with two incinerators that were capable of burning four thousand gallons of waste per hour. Afterwards, they calculated that it had destroyed only ninety-nine percent of the PCBs; it was theoretically intended to destroy one hundred percent, and this shortfall concerned the EPA because of bioaccumulation (Weiss 35). Fortunately, a new method has been adopted recently that eliminates PCBs even better than the massive incinerators. Scientists have developed microbes that devour PCBs and neutralize Agent Orange and then die when their work is complete (Amdur 78). As one may presume, PCBs play a major role in biomagnification and need to be eradicated as soon as possible; and it is possible.
Dangerous pollutants, such as DDT, PCBs, PAHs, lead, and mercury, enter the environment easily. Big corporations looking to save money dump their wastes into unprotected waterways, and ultimately, everything suffers. Once a pollutant is in the water cycle, it can travel very long distances, which makes bioaccumulation occur more readily. For example, PCBs have been known to travel thousands of miles from their sources, and over time, they accumulate to fairly high concentrations in a wide range of organisms (McGrath 129). Global contamination of bio-accumulated pollutants in organisms is made possible by their characteristics. Ironically, human influence is usually indirect, and habitat destruction is the main cause of extinction and endangering of species (Enger 240). Perhaps one of these days biomagnification will be the main cause of extinction if it isn't stopped soon. If it isn't stopped, then human health will be affected just as much as that of any other animal.
Throughout the last decade there have been many cases of diseases, cancers, and other long-term sicknesses, as well as other less severe cases. Behind the scenes, bioaccumulation and biomagnification of certain pollutants lie behind some of these illnesses. For example, children exposed to PCBs while in the womb were smaller and had much slower brain development than children that weren't exposed (Health). It would not be surprising if the parents of those children had also been exposed to PCBs, because PCBs are known to impair reproduction (Bioaccumulation). Being exposed to PCBs isn't very difficult; basically, all you have to do is eat fish. Almost everyone has PCBs in their bodies, and people who eat a lot of fish and seafood have higher concentrations than people who don't. PCBs have been measured in human blood, milk, and fat tissue from people all over North America (Health). Whenever people eat organisms higher in the food chain they are increasing their risk of getting chemical pollutants as part of their meal also. Lab mammals fed PCBs have suffered enlarged livers, gastrointestinal lesions, loss of hair, and much more (Polychlorinated 363). Various people all over the world have reported cancer, liver ailments, birth defects, skin lesions, damage to the eyes, and damage to the lungs, when they were exposed to PCBs (Weiss 31). Factory workers have experienced nausea, dermatitis, dizziness, bronchitis, and severe eye irritation, after being around PCBs (Polychlorinated 363). All the animals and people affected by these pollutants in the past can attest to why we, as a society, need to take charge and significantly reduce pollution.
In closing, biomagnification has the potential to be truly deadly. Millions of people get cancer every year and it’s a major problem that many people have to face sooner or later. Some of these cancers are caused by biological magnification, which is a result of the bioaccumulation of pollutants in an organism. There are many pollutants that affect these processes such as DDT and PCBs. These pollutants cause unprecedented amounts of damage to the world by bioaccumulation and biomagnification. It’s a very serious dilemma because it only takes a handful of pollutants to eventually concentrate up to deadly levels of toxicity in all sorts of life. When the future gets clearer we will know if we have survived, but as long as we have the knowledge and the will, we can strive to overcome anything that Mother Nature hurls our way. | https://www.mastersthesiswriting.com/blog/sample-dissertations/biomagnification-dissertation.html |
- Pollution is any undesirable change in physical, chemical or biological characteristics of air, land, water or soil. Such changes create environmental issues.
- The Agents that bring about such an undesirable change are called pollutants.
- In order to control environmental pollution, the Government of India has passed the Environment (Protection) Act, 1986 to protect and improve the quality of our environment (air, water and soil).
AIR POLLUTION AND ITS CONTROL
- We are dependent on air for our respiratory needs. Air pollutants cause injury to all living organisms.
- They reduce the growth and yield of crops and cause the premature death of plants.
- Air pollutants also deleteriously affect the respiratory system of humans and animals. There are several ways of removing particulate matter; the most widely used is the electrostatic precipitator, which can remove over 99 per cent of the particulate matter present in the exhaust from a thermal power plant (a rough sketch of how its collection efficiency scales appears at the end of this section).
- It has electrode wires that are maintained at several thousand volts, which produce a corona that releases electrons.
- These electrons attach to dust particles giving them a net negative charge.
- The collecting plates are grounded and attract the charged dust particles.
- The velocity of air between the plates must be low enough to allow the dust to fall.
- A scrubber can remove gases like Sulphur Dioxide. In a scrubber, the exhaust is passed through a spray of water or lime.
- Recently we have realised the dangers of particulate matter that are very small and are not removed by these precipitators.
- According to the Central Pollution Control Board (CPCB), particulate matter 2.5 micrometres or less in diameter (PM 2.5) is responsible for causing the greatest harm to human health.
- The fine particulates can be inhaled deep into the lungs and can cause breathing and respiratory symptoms, irritation, inflammation and damage to the lungs and premature deaths.
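As promised above, here is a rough sketch of how an electrostatic precipitator's collection efficiency scales. It uses the Deutsch-Anderson approximation, efficiency = 1 - exp(-wA/Q), where w is the drift velocity of the charged particles, A the collecting-plate area and Q the gas flow rate; the numerical values are assumptions for illustration only, not specifications of any real plant.

```python
import math

def esp_efficiency(drift_velocity, plate_area, gas_flow):
    """Deutsch-Anderson estimate of particulate collection efficiency.

    drift_velocity : particle drift velocity towards the plates (m/s)
    plate_area     : total collecting-plate area (m^2)
    gas_flow       : flue-gas volumetric flow rate (m^3/s)
    """
    return 1.0 - math.exp(-drift_velocity * plate_area / gas_flow)

# Assumed, illustrative operating values
w, A, Q = 0.1, 5000.0, 100.0
print(f"estimated collection efficiency: {esp_efficiency(w, A, Q):.2%}")

# Slowing the gas (halving the flow) gives the dust more time to reach the
# plates, which is why the velocity between the plates must be kept low.
print(f"at half the gas flow:            {esp_efficiency(w, A, Q / 2):.2%}")
```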
WATER POLLUTION AND ITS CONTROL
- Human beings have been abusing the water-bodies around the world by disposing into them all kinds of waste.
- We tend to believe that water can wash away everything not taking cognizance of the fact that the water bodies are our lifeline as well as that of all other living organisms.
- Can you list what all we tend to try and wash away through our rivers and drains? Due to such activities of humankind, the ponds, lakes, streams, rivers, estuaries and oceans are becoming polluted in several parts of the world.
- Realising the importance of maintaining the cleanliness of the water bodies, the Government of India has passed the Water (Prevention and Control of Pollution) Act, 1974 to safeguard our water resources.
DOMESTIC SEWAGE AND INDUSTRIAL EFFLUENTS.
- We wash everything into drains. Have you ever wondered where the sewage that comes out of our houses goes? What happens in villages?
- Is the sewage treated before being transported to the nearest river and mixed with it? A mere 0.1 per cent of impurities makes domestic sewage unfit for human use.
- Solids are relatively easy to remove, what is most difficult to remove are dissolved salts such as nitrates, phosphates, and other nutrients, and toxic metal ions and organic compounds.
- Domestic sewage primarily contains biodegradable organic matter, which readily decomposes thanks to bacteria and other micro-organisms that can multiply using these organic substances as substrates and hence utilise some of the components of sewage.
- Eutrophication is the natural ageing of a lake by nutrient enrichment of its water.
- In a young lake, the water is cold and clear, supporting little life. With time, streams draining into the lake introduce nutrients such as nitrogen and phosphorus, which encourage the growth of aquatic organisms.
- As the lake’s fertility increases, plant and animal life burgeons, and organic remains begin to be deposited on the lake bottom.
- Over the centuries, as silt and organic debris pile up, the lake grows shallower and warmer, with warm-water organisms supplanting those that thrive in a cold environment.
- Marsh plants take root in the shallows and begin to fill in the original lake basin.
- Eventually, the lake gives way to large masses of floating plants (bog), finally converting into the land.
- It is possible to estimate the amount of biodegradable organic matter in sewage water by measuring its Biochemical Oxygen Demand (BOD), which reflects both the micro-organisms present and the amount of biodegradable matter (a worked example of a BOD estimate appears at the end of this section).
- Characteristic changes can be noticed in a river following the discharge of sewage into it.
- Micro-organisms involved in biodegradation of organic matter in the receiving water body consume a lot of oxygen.
- As a result, there is a sharp decline in dissolved oxygen downstream from the point of sewage discharge.
- This causes mortality of fish and other aquatic creatures. The natural ageing of a lake may span thousands of years.
- However, pollutants from man’s activities like effluents from the industries and homes can radically accelerate the ageing process.
- This phenomenon has been called Cultural or Accelerated Eutrophication.
- During the past century, lakes in many parts of the earth have been severely eutrophied by sewage and agricultural and industrial wastes.
- The prime contaminants are nitrates and phosphates, which act as plant nutrients.
- They overstimulate the growth of algae, causing unsightly scum and unpleasant odours, and robbing the water of dissolved oxygen vital to other aquatic life.
- At the same time, other pollutants flowing into a lake may poison whole populations of fish, whose decomposing remains further deplete the water’s dissolved oxygen content.
- In such fashion, a lake can literally choke to death.
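Here is the worked BOD example promised earlier in this section: a minimal sketch of the standard five-day dilution method, BOD5 = (initial DO - final DO) / P, where P is the fraction of sample in the diluted test bottle. The dissolved-oxygen readings and the dilution are made-up numbers for illustration.

```python
# Minimal sketch of a five-day BOD estimate using the dilution method.
# BOD5 = (DO_initial - DO_final) / P, with P the fraction of sample
# in the diluted bottle. All readings below are assumed values.

def bod5(do_initial, do_final, sample_fraction):
    """Five-day Biochemical Oxygen Demand in mg/L."""
    return (do_initial - do_final) / sample_fraction

P = 10 / 300        # assumed: 10 mL of river water diluted to 300 mL
do_day0 = 8.8       # dissolved oxygen at the start (mg/L)
do_day5 = 4.6       # dissolved oxygen after 5 days in the dark (mg/L)

print(f"BOD5 of the sample is roughly {bod5(do_day0, do_day5, P):.0f} mg/L")

# A clean river is typically only a few mg/L, while sewage-laden water runs
# far higher -- which is why dissolved oxygen crashes downstream of a discharge.
```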
SOLID WASTES
- Solid wastes refer to everything that goes out in the trash. Municipal solid wastes are wastes from homes, offices, stores, schools, hospitals, etc.
- that are collected and disposed of by the municipality. The municipal solid wastes generally comprise paper, food wastes, plastics, glass, metals, rubber, leather, textile, etc.
- Burning reduces the volume of the wastes, although it is generally not burnt to completion and open dumps often serve as the breeding ground for rats and flies.
- Sanitary landfills were adopted as the substitute for open-burning dumps. In a sanitary landfill, wastes are dumped in a depression or trench after compaction and covered with dirt every day.
- If you live in a town or city, do you know where the nearest landfill site is? Landfills are also not really much of a solution since the amount of garbage generation especially in the metros has increased so much that these sites are getting filled too.
- There is a danger of seepage of chemicals, etc., from these landfills polluting the underground water resources.
AGRO-CHEMICALS AND THEIR EFFECTS
- In the wake of the green revolution, the use of inorganic fertilisers and pesticides has increased manifold for enhancing crop production.
- Pesticides, herbicides, fungicides, etc., are being increasingly used.
- These are incidentally also toxic to non-target organisms that are important components of the soil ecosystem.
- Do you think these can be biomagnified in the terrestrial ecosystems? We know what the addition of increasing amounts of chemical fertilisers can do to aquatic ecosystems: eutrophication.
- The current problems in agriculture are, therefore, extremely grave.
RADIOACTIVE WASTES
- Initially, nuclear energy was hailed as a non-polluting way of generating electricity.
- Later on, it was realised that the use of nuclear energy has two very serious inherent problems.
- The first is accidental leakage, as occurred in the Three Mile Island and Chernobyl incidents and the second is safe disposal of radioactive wastes.
- Radiation that is given off by nuclear waste is extremely damaging to organisms because it causes mutations at a very high rate.
- At high doses, nuclear radiation is lethal, but at lower doses it creates various disorders, the most frequent of all being cancer.
- Therefore, nuclear waste is an extremely potent pollutant and has to be dealt with utmost caution.
GREENHOUSE EFFECT AND GLOBAL WARMING
- The term “Greenhouse effect’ has been derived from a phenomenon that occurs in a greenhouse.
- Have you ever seen a greenhouse? It looks like a small glasshouse and is used for growing plants especially during winter.
- In a greenhouse, the glass panel lets the light in but does not allow heat to escape.
- Therefore, the greenhouse warms up, very much like inside a car that has been parked in the sun for a few hours.
- The greenhouse effect is a naturally occurring phenomenon that is responsible for heating of Earth's surface and atmosphere.
- You would be surprised to know that without greenhouse effect the average temperature at the surface of Earth would have been a chilly -18°C rather than the present average of 15°C.
- In order to understand the greenhouse effect, it is necessary to know the fate of the energy of sunlight that reaches the outermost atmosphere.
- Clouds and gases reflect about one-fourth of the incoming solar radiation and absorb some of it, but almost half of the incoming solar radiation falls on Earth's surface heating it, while a small proportion is reflected back.
- Earth's surface re-emits heat in the form of infrared radiation, but part of this does not escape into space because atmospheric gases (e.g., carbon dioxide, methane, etc.) absorb a major fraction of it.
- The molecules of these gases radiate heat energy, and a major part of it again comes back to Earth's surface, heating it up once again. This cycle is repeated many times. (A figure here showed the relative contribution of various greenhouse gases to total global warming.)
- The above-mentioned gases – carbon dioxide and methane – are commonly known as greenhouse gases because they are responsible for the greenhouse effect. A back-of-the-envelope radiative-balance check follows this list.
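The minus 18 degree figure quoted above can be checked with a simple radiative-balance estimate: setting absorbed sunlight equal to emitted infrared gives T = [S(1 - albedo) / 4σ]^(1/4). The sketch below uses standard round values for the solar constant and planetary albedo; it is a back-of-the-envelope check, not a climate model.

```python
# Back-of-the-envelope check of the "no greenhouse effect" temperature.
# Energy balance: S * (1 - albedo) / 4 = sigma * T^4

S = 1361.0           # solar constant (W/m^2), standard round value
albedo = 0.30        # fraction of sunlight reflected by clouds, ice, etc.
sigma = 5.670e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)

T_kelvin = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"effective temperature without greenhouse warming: {T_kelvin - 273.15:.0f} °C")

# Roughly -19 °C, close to the -18 °C quoted above. The observed average of
# about 15 °C is reached only because greenhouse gases re-radiate part of the
# outgoing infrared back towards the surface.
```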
OZONE DEPLETION
- Ozone is found in the upper part of the atmosphere called the stratosphere, where it acts as a shield absorbing ultraviolet radiation from the sun.
- UV rays are highly injurious to living organisms since DNA and proteins of living organisms preferentially absorb UV rays.
- Its high energy breaks the chemical bonds within these molecules.
- The thickness of the ozone in a column of air from the ground to the top of the atmosphere is measured in terms of Dobson units (DU).
- Ozone gas is continuously formed by the action of UV rays on molecular oxygen and also degraded into molecular oxygen in the stratosphere.
- There should be a balance between production and degradation of ozone in the stratosphere. Of late, the balance has been disrupted due to the enhancement of ozone degradation by chlorofluorocarbons (CFCs), widely used as refrigerants.
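As a quick illustration of the Dobson unit defined above: one DU corresponds to a 0.01 mm thick layer of pure ozone at standard temperature and pressure, so the entire overhead ozone column would compress to only a few millimetres. The column values used below are rounded, illustrative figures.

```python
# One Dobson unit (DU) corresponds to a 0.01 mm thick layer of pure ozone
# at standard temperature and pressure (STP).

MM_PER_DU = 0.01

columns = [
    ("typical mid-latitude ozone column", 300),   # rounded, illustrative value
    ("severely depleted polar column",    120),   # rounded, illustrative value
]

for label, column_du in columns:
    print(f"{label}: {column_du} DU  ~  {column_du * MM_PER_DU:.1f} mm of pure ozone at STP")
```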
Bacteria are ubiquitous, present in the soil, air and water around you. Some bacteria can be beneficial, while other bacteria cause illness and even death. Pathogenic bacteria, those organisms that cause disease, relentlessly bombard your body daily. Your body's immune system fights off most of these invaders. Some types of bacteria cause more harm to humans than others, overwhelming your body's defenses and frequently causing death in terrifying ways and at alarming speeds.
The bacteria Bacillus anthracis, or anthrax, can enter the body via three avenues: through the skin, into the lungs or down the digestive tract. Of these three methods, inhalation anthrax is the most deadly, with symptoms resembling those of the flu or a cold.
Botulinum toxin acts on the nervous system, causing trouble with respiration, swallowing, speaking, vision and overall physical weakness. Death results from the inability to breathe when the toxin paralyzes breathing muscles.
The infamous E. coli normally inhabits the intestines of healthy individuals. Sometimes, certain strains, or forms, of the bacteria arise that become detrimental to humans, especially the very young, immune-compromised or elderly. One rare, dangerous strain causes bleeding in the intestines.
Also called necrotizing fasciitis, flesh-eating disease occurs when bacteria consume human tissues lying under the skin, causing death of tissues and fatalities in a short amount of time. There are many pathogens that can cause this extreme disorder, one of which is the same kind of bacteria that causes strep throat.
MRSA transmits through person-to-person contact or by use of contaminated items or surfaces. About 25 to 30 percent of the population carries the normal form of the Staphylococcus aureus on the skin or in the nose. Only one or two percent carries the resistant type or strain, which does not respond to conventional antibiotics previously used to treat staph infections, such as methicillin.
Clostridium tetani, the bacterium causing tetanus, can exist almost anywhere while their spores can remain inactive in soil. When a wound introduces the spores into the deep recesses of the body, the bacteria become active, producing toxins which affect the nerves. The muscles of the body contract sporadically, injuring muscles and bones and causing locking of the jaw.
This pathogenic bacterium, like many of the other deadly bacteria has become resistant to most antibiotics and causes thousands of deaths every year. It has been nicknamed the “Iraqibacter” due to the number of soldiers in Iraq who have contracted it after being wounded in war. The bacteria are able to enter the body through open sores, wounds, catheters and other openings into the body.
Aspergillus is a common mold (a fungus rather than a bacterium) that can cause pulmonary and blood infections which can lead to death. It also occurs commonly in cancer patients and patients with other diseases. This mold is most often found in air conditioning systems, where it is then spread through the air ducts.
What is Criminal Negligence Causing Death?
Some simple but simply unknown ideas of laws often make people a bit confused, like- what is criminal negligence causing death? To know, click here now.
By nature, human beings are expected to act sensibly, so it is jarring to find a person being grossly irresponsible. When that carelessness goes beyond a certain limit, the consequence is irreversible and unacceptable. One such irreversible event is a death caused by negligence, or by criminal negligence.
The comparison of manslaughter vs. murder helps to distinguish criminal negligence causing death from other, similar crimes that lead to death. Today we'll discuss what criminal negligence is and what happens if it causes death. Let's go through some basic ideas on the way to the central concern of today.
Negligence and Criminal Negligence
The difference between them lies in the intention of the defector.
Negligence
When a person fails by intention or avoids taking measures against a possible or upcoming adverse event, that is called negligence or civil negligence.
For example, consider not stopping at the stop sign and continuing to drive. This act of the driver can harm himself, the passenger, or any person outside the vehicle. There can also be damage to property or financial loss.
Criminal Negligence
Criminal negligence is, by definition, a kind of negligence. But, the difference is- here, the person being irresponsible is involved in reckless activities to harm another person mentally or physically. In other words, criminal negligence is a continuous act of negligence to fulfill an evil impulse.
To illustrate criminal negligence, we can again bring up the example of speeding and breaking the traffic laws. The difference is that the driver intends to cause psychological or physical distress or injury to the other person.
Murder vs. Manslaughter
The difference between this will echo the earlier discussion.
Murder
When a person deliberately kills a person, that is called murder.
Manslaughter (3rd Degree Murder)
If a person is killed by another one unintentionally whose action was only to frighten or cause harm to the victim, that is called manslaughter.
Criminal negligence causes death
Let’s dive into the conversation from here.
Varieties of criminal negligence causing death
There are many; the three below are very common.
Negligence from parents
This negligence occurs when a child is being abused by one parent and gets no protection from the other parent.
Negligence by drivers
Talking on the phone or being otherwise heedless while driving, and failing to deal properly with people outside or on the vehicle, are forms of negligence by drivers which can cause serious injuries and even death.
Negligence from the employers
When an employer sends subordinates into a risky place without proper safety measures, that is also considered criminal negligence.
Playing with fire or firearms, or showing indifference in treating a patient, is also considered criminal negligence.
Offense 220
Criminal negligence causing death is listed as an indictable offense. Section 220 says that a person convicted of the crime can be sentenced to a maximum of life imprisonment. Carrying a firearm will not spare the person responsible for the death; in that case there is at least a minimum sentence.
Related spikes of criminal negligence and death
- Some instances of criminal negligence are not considered homicide
- The minimum jail sentence for this type of negligence is 3 years
- A negligence claim requires proof of four elements: duty, breach, causation, and damage
- Two things are required to win a case of negligence causing death: the involvement of the accused and proof of their negligence
- Life imprisonment in Canada is 25 years
- Ordinary negligence cases are civil cases, while negligence causing death is a criminal case
- Normally, relatives bring claims in these types of cases
- Behind a criminal activity, there are several criminal intents- reckless behavior, intentional behavior, criminally negligent behavior, etc.
Last Words
Carelessness is never encouraged. A person who ends up causing harm to others does not necessarily plan to do something horrible; often it is an accident that no one can anticipate or fully guard against, because the accident itself is an uncertain event.

Also, a person's intent is not always possible to determine. The available evidence and motive may point towards guilt, yet sometimes a person who is innocent, or not guilty enough to deserve it, ends up being sentenced gravely.

Thus, people should be as careful as possible. At the same time, playing with someone's serenity should never be one's intention, or it will be detrimental for both ends. That is the summary of what criminal negligence causing death means. Hopefully, now you've got the basic idea about it.
Diarrheal diseases, such as rotavirus, cholera, and typhoid, are the dominant health concern related to wastewater pollution, causing 1.6 million deaths in 2017. ref Symptoms include severe dehydration and malnutrition and impairing children’s growth and mental development. ref The result can be lifelong health complications and damaging consequences for entire communities. See the case study from Bavu and Namaqumaqua villages in Fiji detailing the implementation of sanitation systems to address typhoid outbreaks and other impacts of wastewater pollution.
Pathogens in oysters and other shellfish cause 4 million cases of Hepatitis A and E every year, with roughly 40,000 deaths and another 40,000 cases of long-term disability from chronic liver damage. ref In a recent study along the coast of Myanmar, Littman and colleagues (2020) ref identified 5,459 bacterial pathogens in oyster tissue, marine sediments, and seawater. Researchers reported that 51% of the pathogens found in the oyster samples were known to be detrimental and of emerging concern to human health. Contact with human waste represents an urgent challenge particularly in developing areas, and has led to the development of the Water, Sanitation, and Hygiene (WASH) sector.
Antimicrobial Resistance
The increase in antibiotic-resistant pathogens, or “superbugs,” is probably the most concerning human health impact we face related to wastewater pollution. Antimicrobial resistance is responsible for 700,000 deaths annually, a number that is growing because of poor antibiotic stewardship (i.e., over-prescribing antibiotics), lack of sanitation, insufficient wastewater treatment, and discharge into the environment. ref Superbugs originate from the overuse of antibiotics to treat an illness. As the resistant microbes reproduce, the population develops a higher resistance to antibiotics. If not properly treated, these new superbugs make their way into the environment. It is a dangerous feedback loop of disease, antibiotics, commingling, and exposure. Improving sanitation and wastewater treatment is a critical component of addressing the superbug threat because wastewater treatment plants can be a place where this resistance develops.
Other Contaminants
In addition to pathogens, other components of wastewater—like high nutrient concentrations, heavy metals, and contaminants of emerging concern (CECs) - are hazardous to people. Examples of CECs and impacts to people:
- Heavy metals can be ingested when people eat fish and shellfish. Over time, metals bioaccumulate and cause damage to organs and interfere with critical bodily functions. ref
- Pharmaceuticals, personal care products, and household cleaning products can disrupt the endocrine system, leading to negative consequences on reproductive health. ref
- Karenia brevis, the marine dinoflagellate that causes red tides, produces brevetoxins that can disperse as fine particles in the air. These toxins have been associated with increased incidence of asthma, and a 40% increase in emergency room admissions for gastrointestinal disease during red tide events. ref
- Nitrates in drinking water can cause Methemoglobinemia in children, where the body produces excess methemoglobin (a form of hemoglobin) and cannot deliver oxygen effectively. Recent studies have linked nitrates in drinking water to colon, ovarian, thyroid, kidney, and bladder cancer in adults. ref In fact, numerous studies have shown that increased risk of cancer occurs with nitrates at levels below the U.S. standard of 10 parts per million. ref A Danish study reported increased risk of colon cancer with nitrate levels above 3.87 parts per million. ref
- Pseudo-nitzschia australis, a type of algae, produces domoic acid that bioaccumulates in aquatic organisms and causes a neurological disorder called Amnesic Shellfish Poisoning (ASP) in humans. Like many other algae-borne toxins, small doses over time eventually cause symptoms. In the case of ASP, this includes seizures, hallucinations, memory loss, and vomiting. ref
In addition to making people sick, these contaminants jeopardize fisheries and coral reefs, causing further harm to people who depend on them for food, livelihoods, and coastal protection.
Indirect Health Consequences
Open defecation and unsafe sanitation facilities (without lights or privacy) are especially concerning for women, creating opportunities for harassment or violence. Gender disparities resulting from inadequate sanitation are furthered when girls miss school during menstruation or women spend excess time finding clean drinking water. | https://reefresilience.org/management-strategies/wastewater-pollution/impacts-on-human-health/ |
The part of the earth and its atmosphere in which living organisms exist or that is capable of supporting life. Ecosphere, lithosphere, hydrosphere and atmosphere together constitute Biosphere.
The gaseous mass or envelope surrounding a celestial body, especially the one surrounding the earth, and retained by the celestial body's gravitational field. It consists of different layers like thermosphere, mesosphere, stratosphere, troposphere etc.
The waters of the earth's surface as distinguished from those of the lithosphere and the atmosphere.
The combined processes, including photosynthesis, decomposition, and respiration, by which carbon as a component of various compounds cycles between its major reservoirs - the atmosphere, oceans, and living organisms.
The cycle of evaporation and condensation that controls the distribution of the earth's water as it evaporates from bodies of water, condenses, precipitates, and returns to those bodies of water. Also called hydrologic cycle.
The slide shows the simplified diagram of water cycle with different processes.
The circulation of nitrogen in nature, consisting of a cycle of chemical reactions in which atmospheric nitrogen is compounded, dissolved in rain, and deposited in the soil, where it is assimilated and metabolized by bacteria and plants, eventually returning to the atmosphere by bacterial decomposition of organic matter.
Slide shows the different processes involved in the nitrogen cycle.
Air pollution is the introduction of chemicals, particulate matter, or biological materials that cause harm or discomfort to humans or other living organisms, or cause damage to the natural environment or built environment, into the atmosphere.
Air pollution is caused by oxides of carbon (Carbon monoxide, carbon-di-oxide), nitrogen, sulphur etc.
Water pollution is the contamination of water bodies (e.g. lakes, rivers, oceans and groundwater). Water pollution occurs when pollutants are discharged directly or indirectly into water bodies without adequate treatment to remove harmful compounds.
There are basically two types of sources of water pollution- point sources include discharges from a sewage treatment plant, factory etc. while non-point sources include leaching out of nitrogen compounds from fertilized agricultural lands, urban runoffs etc.
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases, and is re-radiated in all directions. Since part of this re-radiation is back towards the surface, energy is transferred to the surface and the lower atmosphere. As a result, the temperature there is higher than it would be if direct heating by solar radiation were the only warming mechanism.
The slide shows the step by step process of green house effect and how it leads to global warming.
The slide shows different processes involved in causing global warming. | http://www.topperlearning.com/kids-fun-ppt/science-35/natural-processes-200 |
Is there a word disastrous? disastrous | American Dictionary. causing great harm, damage, or suffering: It was a disastrous mistake which he lived to regret.
What is the adverb of disastrous? adverb. /dɪˈzɑːstrəsli/ /dɪˈzæstrəsli/ in a way that is very bad, harmful or unsuccessful synonym catastrophically, devastatingly.
How do you describe disastrous? causing great distress or injury; ruinous; very unfortunate; calamitous: The rain and cold proved disastrous to his health. Archaic. foreboding disaster.
Does disastrous mean terrible? Something that is disastrous has extremely bad consequences and effects or is very unsuccessful. …
Is there a word disastrous? – Related Questions
What’s a word for a horrible event?
A disaster is an event that causes great harm, unhappiness etc. Calamity and catastrophe are similar, but even more emphatic.
What is a word for a horrible action?
1 terrible, awful, appalling, frightful; hideous, grim, ghastly, shocking, revolting, repulsive, horrid, horrendous, horrifying, repellent.
What is the root word for disastrous?
Etymology. From French désastreux, from Middle French desastre (“disaster”) (modern désastre), from Italian disastro, itself from dis- (“away, without”) (from Latin) + astro (“star, planet”) (from astrum (“star, celestial body”), from Ancient Greek ᾰ̓́στρον (ástron)).
What is disastrous effect?
Disasters may be explosions, earthquakes, floods, hurricanes, tornados, or fires. In a disaster, you face the danger of death or physical injury. You may also lose your home, possessions, and community. Such stressors place you at risk for emotional and physical health problems.
What is the base word for horrible?
Horrible means “causing horror,” and goes back to the Latin word for “tremble” or “shudder.” A horrible thing makes you shake with fear, like a story about an atrocious crime.
What does terrible disaster mean?
1 : a sudden calamitous event bringing great damage, loss, or destruction natural disasters. 2 : someone or something that is very bad: such as.
What is a word for a negative situation?
1 hostile, inimical, unfriendly. 2 unfavorable; unlucky, unfortunate; disastrous, calamitous, catastrophic. | https://www.animegoodys.com/is-there-a-word-disastrous/ |
Concern for the environment has increased and spread over recent years. Pollution is a major environmental concern.
Types, sources and effects of pollution. Causes and associated problems. Remedial measures, existent and suggested. Plato lamented the destruction of soils and forests in ancient Greece. Dickens and Engels wrote eloquently of the wretched conditions spawned by the Industrial Revolution.
But the surge in concern about environmental quality over the last three decades has been uniquely widespread and impassioned. Appreciation of the material and spiritual importance of a healthy natural environment has spread. Perhaps the most dramatic intellectual shifts are occurring in the Third World, where understanding of the ecological underpinning of human life, largely lost in the post-war dreams of industrialisation, is on the rise.
The new interest in environmental quality complements recent shifts in thought among development theorists, many of whom now stress the need to address the basic needs of the poor directly rather than hope that the benefits of growth will trickle down to them.
Improving the lot of the under-class and protecting environmental quality can be mutually-supportive goals. Both internationally and within nations, the new appreciation of our bonds with nature has spawned new institutions and policies-new UN and governmental agencies, new laws, altered aid programmes, new international treaties.
Yet for the most part, responses remain inadequate to the needs.
For the most urgent need today is to protect and preserve what remains of the environment. To do that one has to understand the meaning of pollution and consider ways of tackling it.
Pollution takes place through changes in energy patterns, radiation levels, chemical and physical constitutions, and abundance of organisms. It includes release of materials into the atmosphere which make the air unsuitable for breathing, harm the quality of water and soil, and damage the health of human beings, plants and animals.
Air pollution in one form or another has accompanied human society from the beginning. In the nineteenth and early twentieth centuries, many cities of Europe and the US were covered with black shrouds of smoke.
Despite the successes registered against smoke, the pollution of city air by other products of coal combustion, above all sulphur dioxide, and by nitrogen oxides, hydrocarbons, petroleum wastes, and carbon monoxide continues to worsen in most countries.
Strong evidence indicates that prevailing levels of air pollution contribute to the development of chronic respiratory diseases emphysema, asthma, and chronic bronchitis besides short-term respiratory afflictions as well.
And those living near smelters and refineries often face increased cancer risks because of the toxic substances spewing from smoke-stacks. Over the last 25 years, many countries have begun trying to regulate the flow of pollutants in the air. Air pollution can no longer be addressed as simply a local urban problem.
In general, pollution from so-called point sources like sewage pipes and factories is under progressively better control. But the contamination of waterways from diffuse sources (run-off from farmlands, which tends to carry fertilizers, pesticides, and organic matter, and run-off from urban areas, which often carries oil, metals, and other pollutants) remains largely uncontrolled and is on the increase in most countries.
Acids and heavy metals falling with the rain constitute additional sources of water degradation. The problem of water pollution is growing day-by-day; today a great many people are deprived of disease-free potable water, as almost all the sources of water-from seas to wells-are increasingly being infested with different kinds of pollutants.
Soil pollution usually results from the disposal of solid and semi-solid wastes from agricultural practices and from insanitary habits.
Fallouts from atmospheric pollution also contribute to soil pollution. Direct pollution of the land by pathogenic organisms is also important. Thus the soil is heavily polluted day-by-day by hazardous materials and micro-organisms, which enter the food chain or water and are consequently ingested by man.
As a result, there are numerous health problems.
Those bacteria which are transmitted from air to soil infect man causing bacillary dysentery, cholera, typhoid and paratyphoid fever.
Flies which breed or get in contact with the contaminated soil become carriers of disease organisms. The eggs of some of the parasitic worms get incubated in the soil and both the eggs and larvae are infective. Radioactive pollution of the environment is due to the increase in natural background radiation, emerging from the activities of man involving the use of naturally occurring or artificially produced radio-active materials.
The chances of radioactive materials spreading into the air have increased extensively as a result of the discovery of artificial radio-activity, and particularly due to the development of the atomic bomb and of techniques of harnessing nuclear energy.
Biological organisms including human beings are subjected to radioactive contamination either by consumption or inhalation. This radioactive pollution may cause cancers, abnormal births and mutations in men. Chronic exposure to radiation leads to leukemia in an individual and affects even an unborn child.
Thermal pollution denotes the impairment of the quality of environment air or water by a rise in its temperature.
The processes of life involve many chemical reactions, and the rate of these chemical reactions varies according to changes in temperature. Air pollution is largely a direct consequence of industrialisation.
The chimneys of industries emit smoke and vehicles give off fumes, which directly contribute to air pollution.
What causes air pollution? Air pollution results mainly from the incomplete combustion of several fuels, such as coal, petrol and wood. The primary sources of pollution from combustion are furnaces in factories, engines in vehicles and the burning of trash, in the form of particles.
The introduction of harmful waste materials and other forms of contamination which are a hazard to the life-sustaining atmosphere of the planet Earth is commonly known as pollution.
Acid Rain Find Out What Causes It and What Can Be Done to Stop It
Acid rain is a form of precipitation caused when compounds present in fossil fuels and other human activities enter the atmosphere. When carbon dioxide and sulphur dioxide or nitrogen oxide are released into the air, they react with water vapor, oxygen, and other chemicals to form acids. This process is called ‘acidification’, and the resulting compounds are called ‘acidic’.
Table of Content
1. The causes of acid rain
2. The impact of acid rain
3. Ways to reduce and prevent acid rain
4. Why is this bad for your health?
5. The risks of acid rain on your home’s exterior
6. What can we do to keep our environment safe?
The Causes of Acid Rain
The bad thing about acid rain is that unlike other types of rain, it’s harmful to the environment and living creatures.
This type of rain occurs when nitric oxide and sulphur dioxide — both common pollutants — mix with water in the atmosphere. These vapours then fall back to earth with their acidic properties intact.
The biggest cause of acid rain is sulphates, an air pollutant that comes primarily from coal-fuelled power plants and the burning of peat.
Acid rain is caused by many other pollutants as well, which have varying effects on waters, fish and plant life.
The most common cause of acid rain is pollution from fossil fuels. When fossil fuels are burned they release gases, and some of the compounds in those gases react with water vapor in the air, creating acids.
The most common ones are sulphur and nitrogen oxides. Burning coal for energy has resulted in sulphur dioxide emissions in the atmosphere.
Industrial smokestacks also contribute significantly to acid rain as they emit large amounts of pollutants mainly in areas near factories.
Although the chemistry is well understood, several different factors contribute to how much acid rain forms and where it falls.

Weather patterns are a major factor in where the pollution ends up. When acidic compounds in the atmosphere mix with water vapour and fall as precipitation, the result is acid rain (a short worked example after this section shows how pH puts a number on that acidity).
This can be caused by several different factors such as emissions from cars and factories, volcanic eruptions, or even lightning bolts hitting the ground.
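Acidity is usually reported as pH, the negative base-10 logarithm of the hydrogen-ion concentration, so each whole pH unit is a tenfold change in acidity. The sketch below compares commonly quoted round figures of about pH 5.6 for clean rain (slightly acidic from dissolved carbon dioxide) and around pH 4.3 for acid rain; treat the exact numbers as illustrative rather than measured.

```python
# pH = -log10[H+], so the hydrogen-ion concentration is 10 ** (-pH).
# The pH values below are commonly quoted round figures, used here
# purely for illustration.

def hydrogen_ion_molarity(ph):
    return 10 ** (-ph)

clean_rain_ph = 5.6   # slightly acidic from dissolved carbon dioxide
acid_rain_ph = 4.3    # rain affected by sulphur and nitrogen oxides

ratio = hydrogen_ion_molarity(acid_rain_ph) / hydrogen_ion_molarity(clean_rain_ph)
print(f"acid rain at pH {acid_rain_ph} carries about {ratio:.0f} times more "
      f"hydrogen ions than clean rain at pH {clean_rain_ph}")
```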
The Impact of Acid Rain
The industrial revolution of the late 18th and through the 19th centuries brought many beneficial devices to our lives, but in parallel it also caused a great amount of pollution.
Pollution became an increasingly important problem as new devices came into being.
The most dangerous type of pollution is acid rain. It’s a chemical pollution which is the main cause of damage to our natural environment, like air and water.
Acid rain has an adverse impact on the normal growth of plants and fish population. It also degrades buildings, monuments, and statues.
But by working together, we can reduce acid rain’s impact on our planet — and the organisms that depend on it for survival.
Scientists have been concerned about the effects of acid rain for a long time, but it’s only been in the last few years that the public has become aware of its devastating effects.
How Can We Make a Difference in the World?
No action is too small if it has a positive impact on the world around you. Be mindful of your actions and always consider how they can have a positive effect, no matter how insignificant they might seem.
While it isn’t a huge concern when you’re walking around your backyard, rain will have a greater acidic concentration when it gets into the water supply and begins to react with rocks and minerals present in the soil and sediments.
It is a sad fact that many of the problems confronting our planet are caused by humans, and by factories.
There are factories that release smoke without filters, since they operate in places where there are few or no people.

Acid rain forms when these emissions encounter clouds, and it then falls and pollutes rivers, lakes, and plants. Since this pollutant attacks the leaves of plants, their growth is affected.
Rain is a blessing for our earth. It helps to take away the excess heat from the sun and keeps our water cycle clean.
At the same time, it falls from the sky and can have negative impacts on other organisms around the world. Acid rain is real, but it only really affects certain locations in the world.
However, I believe this is a problem that needs to be addressed quickly because if this trend continues, our planet will not be hospitable to life in the far future. Remember:
- Acid rain is a pollutant
- It destroys lakes and wetlands
- Acid rain is harmful to our ecosystem
- It affects animals and plants on land and in the water
- Its causes include car emissions and coal power plants
- It can dissolve limestone, concrete, and other materials
Ways to Reduce and Prevent Acid Rain
We are all the aware of the negative effects that come with environmental pollution since it is a prominent issue to deal with nowadays.
There is already a lot of discussion about climate change and how it affects our planet for example. The cause of this can be found in vehicles that release harmful gases into the air.
Acid rain emerged as one of the major environmental concerns while the world’s population was increasing at a fast rate in the 1960s.
Even though natural acid rain has been occurring for ages, it became a problem when human activities like burning fossil fuels caused much greater levels to develop.
Most people are concerned that this trend may be continuing, and that acid rain could continue to grow in the future. If we could reduce or even prevent acid rain in the future, many good things might happen.
For one thing, corrosion of buildings and monuments can be limited, making them last longer. No automobile runs well when its metal parts are corroded by acid, so acid rain can also pose problems for cars.
Acid rain is caused by emissions of sulphur and nitrogen oxides from sources such as coal-fired power plants, motor vehicles and industrial boilers.
Nitric and sulfuric acid are the main agents of damage to the environment. The natural environment can be severely damaged to a point where our crops and forests cannot grow, or even survive.

To reduce or hopefully prevent acid rain in the future, everybody needs to expect less, use less, and discard fewer toxins.
The best method to prevent acid rain is to prevent pollutants from reaching the atmosphere.
If we reduce the number of pollutants that factories, automobiles, and other industrial areas release into the air, then acid rain would, hopefully, have a smaller chance of affecting our environment.
Why Is This Bad for Your Health?
So, you might have heard about the phenomenon called acid rain, how it is bad for your health and the environment. But do you know what it is?
The term acid rain refers to a process whereby certain chemical compounds in the air react with water in precipitation to form acidic solutions.
Now this chemical reaction doesn’t only occur naturally but also when pollutants from society add to that reactivity.
It is a major environmental issue that has unpredictable consequences, harm our natural heritage, and affects the health of millions of people.
It triggers soil erosion, interferes with agriculture, increases the acidity of lakes and streams and contributes to the injury and death of some forest trees.
Acid rain is not good for you. Too much acid rain or smoke can easily cause colds and sore throats, make asthma symptoms worse, or even increase the chance that someone will develop lung diseases in the future.
It can also make it harder for the body to take in oxygen, which affects how well the body works.
Some of the symptoms that can be caused by the pollutants from acid rain include coughing, sneezing, watery eyes and skin rashes.
Smog and car exhaust can also cause these symptoms, but acid rain falls out of the air which means you may be breathing it in without realising it when you go for a run or walk outside.
Some people may downplay the effects of pollution, but my conclusion is that we should try to prevent these pollutants falling on us as often as we can. Remember:
- Acid rain is polluting our water systems
- It has a corrosive effect on a lot of materials
- It may also alter soil composition in unsafe ways
- There might be short term and long-term health impacts
- The more polluted the area, the worse the effects will be
- Acid rain can significantly affect your respiratory health
The Risks of Acid Rain on Your Home's Exterior
As you can imagine, acid rain causes a lot of damage to buildings. You’ve probably noticed that the stone of your building is sometimes eroded or dissolved.
Maybe it seemed like it happened suddenly. The truth is that, over time, acid rain has been poisoning your building and can lead to much more serious problems.
In some cases, the damage that it causes is visible. For example, concrete structures develop brown patches, erosion takes place and eventually leads to building collapse.
Worse, this visible damage is permanent and cannot be reversed by restoration efforts.
Acid rain causes damage by eroding metals, eating away at surfaces including paint and stone, and breaking down concrete.
Any structure exposed to acid rain can sustain structural damage. Acid rain is caused by natural processes as well as human activities such as industry and transportation.
Does the Risk of Acid Rain on Your Home's Exterior Scare You?
The damage that acid rain can do to the exterior of your home can be easily prevented. Acid rain is common in areas with heavy industrial pollution. It’s more than just simple rain: it carries dissolved pollutants that soak into your house’s surfaces and deteriorate them over time.
Pollutants like sulphur dioxide, nitrogen oxides and volatile organic compounds react in the atmosphere to form components of acid rain such as sulfuric acid and nitric acid.
While industrial processes, traffic and even volcanoes can cause acid rain, it is also common in areas that produce a significant amount of pollutants. Sources can include mining, the burning of fossil fuels and smog from cars and factories.
Acid rain also carries chlorides that soak through the soil and reach the water table, where they interact with iron and aluminium to form acidic compounds. The heavier and more polluted the rainfall, the stronger the acid will be.
Different materials are affected in different ways. Concrete starts to degrade, while redwood trees suffer a corrosive effect on their bark.
Marble and limestone will degrade over time as well, but concrete is affected worse because organic materials are broken down faster than inorganic ones.
Acid rain occurs all over the world, but some areas are considerably more affected than others.
What Can We Do to Keep Our Environment Safe?
What should we do? I mean, do we even have a choice? The answer is: yes, yes, we do.
We can be effective by making good decisions about how we use our resources. Maybe not you, but there are still people who feel that the initiative has to come from corporations or governments – wrong again!
How about you just start with your little corner of the world? If you change your habits for the environment, it will trickle down to other people and, slowly, our impact will be felt in bigger ways.
We all know that our environment is threatened by different factors today.
There are many other things that we need to worry about that can be life threatening, but what can we do to keep our environment safe? That’s the question of the day.
The first step is figuring out how to reduce the number of pollutants released into the environment.
It’s not just the air we breathe in, it’s also the environment that our food grows in — which are both affected by chemicals like acid rain.
In fact, the way we live and what we do daily are affecting our planet.
It’s not something that belongs to the future, but our present: where we live, what we eat and drink, and even what we breathe in affects our environment.
If you haven’t been worried much about an environmental problem, it’s time you should. Acid rain is one of the biggest problems on Earth today and it has a lot to do with your happiness.
There are some simple steps that anyone can follow to help keep our environment safe. We just have to be responsible and understand what it means to live sustainably.
It might mean sacrificing some luxuries, but with global warming and the threat of climate change more and more apparent these days, we must take responsibility for ourselves and the future of our planet.
It’s never too late to make a difference, so think about what you can do today.
Summary
Acid rain happens when two types of pollution, nitrogen oxides and sulphur dioxide (both prevalent pollutants), combine with water in the atmosphere. Various factors contribute to its creation: it can be generated by car and factory pollutants, volcanic eruptions, or lightning strikes, with sulphur and nitrogen oxide emissions from sources such as coal-fired power plants being the main contributors. These acids damage the leaves of plants, causing them to grow more slowly. Many beneficial things could happen to our world if we could lessen or perhaps eliminate acid rain in the future.
The main harm to the environment comes from nitric and sulfuric acid. The natural environment can be damaged to the point where crops and forests can no longer flourish or even survive. Acid rain corrodes metals, eats away at surfaces such as paint and stone, and dissolves concrete. Sulphur dioxide, nitrogen oxides, and volatile organic compounds combine in the atmosphere to generate acid rain components such as sulfuric acid and nitric acid.
What can we do to protect our environment? There are easy steps that anyone may take to help protect our world. It’s never too late to make a difference, so consider what you can do right now.
Acid rain occurs when pollutants released into the air from human activities come into contact with water vapour and droplets of water in clouds. | https://cuarl.com/causes-of-acid-rain/ |
Invasive non-native plants are species which have been brought into the UK and have the ability to spread, causing damage to the environment, the economy, our health and the way we live.
Invasive non-native plants can cause problems for native UK species and reduce biodiversity (the variety of living organisms). Invasive non-native species are now widely recognised as the second biggest threat to biodiversity worldwide. Japanese knotweed can block footpaths and damage concrete, tarmac, flood defences and the stability of river banks. Giant hogweed can cause harm to human health.
Injurious weeds
Injurious weeds are native species, which have been deemed to cause a problem to farming productivity.
Injurious weeds are those that are considered able to cause harm to agricultural pasture. The five species of 'injurious weed' are:
- common ragwort
- spear thistle
- creeping or field thistle
- curled dock
- broadleaved dock
For further information on both invasive plants and injurious weeds, see:
If you have invasive plants or injurious weeds on your premises you have a responsibility to prevent them spreading into the wild or causing a nuisance. | https://www.cherwell.gov.uk/info/7/environment/481/invasive-plants-and-injurious-weeds |
The contamination or degradation of soils impacts heavily on the health of plants. Humans are also affected in numerous ways, either directly or indirectly. Polluted soil can harm people who come into contact with it or who eat produce grown in contaminated soils. Children are even more susceptible to the harms of soil pollution since they spend much of their time playing in close contact with the soil. Thus, soil pollution has a long list of effects.
More than 70% of the soil pollutants are carcinogenic in nature, intensifying the chances of developing cancer in the humans exposed to the polluted soils. Long-term exposure to benzene and polychlorinated biphenyls (PCBs), for instance, is linked to the development of leukemia and liver cancer respectively.
Soil pollutants can also cause skin diseases, muscular blockage, and central nervous system disorders. Humans can be affected indirectly due to bioaccumulation or food poisoning. It happens when people consume crop produce that is grown in the polluted soils or when they consume animal products that eat plants from polluted soils. As a result, humans suffer from acute illnesses and may experience premature death.
For example, high concentrations of lead or mercury in the soil can endanger the functionality of kidneys and liver. It can also hamper brain development in children and cause adverse neurological disorders.
Crops and plants grown on polluted soils can accumulate poison to an extent deemed unfit for human consumption. Consequently, it leads to enormous economic losses. In some parts of the world, heavily polluted soils with metals and chemicals such as lead, asbestos, and sulfur are considered unfavorable for crop production and cannot be used to grow crops.
The crops grown in the soils and the nearby lands are often poisoned with heavy metals and chemicals thus, discarded after harvesting because of high toxicity levels. According to China’s agricultural sector, for instance, about 12 million tons of polluted grains are subjected to disposal on an annual basis, costing Chinese farmers economic losses of up to 2.6 billion U.S. dollars.
Polluted soil naturally contributes to air contamination by discharging volatile compounds into the atmosphere. The more toxic contaminants there are in the soil, the higher the level of toxic particles and foul gases emitted into the air. Soil pollution can also lead to water pollution if toxic chemicals and materials like dangerous heavy metals leach into groundwater or contaminate stormwater runoff, which reaches lakes, rivers, streams, or oceans.
When soils are repeatedly contaminated and accumulate large amounts of poisonous materials and chemicals, the soil reaches a point where it cannot support plant life. Soil pollutants interfere with soil chemistry, biology, and structure. When these changes occur, beneficial soil bacteria, soil microorganisms, soil nutrients, and soil chemical processes begin to deteriorate to an extent where they diminish soil fertility.
The ecological balance is lost completely. On this basis, the soil becomes unsuitable for crop survival or any other form of vegetation. If the plants die, then it means animals dependent on the plants will also die. This leads to migration of the larger animals and predators to other regions to find food supply, gradually leading to a reduction in wildlife and extinction.
Soil pollution can as well change plant metabolism and lower crop productivity. Besides, when plants take up the soil contaminants, they pass them up the food chain, endangering the health of animals and humans.
Soil pollution allows emission of relatively large quantities of nitrogen via denitrification, volatilization of ammonia, and the decomposition of organic materials in the soil. As a result, this releases sulfur compounds and sulfur dioxides into the atmosphere, causing acid rain.
In the long run, this creates a continuous cycle of pollution whereby the acid rain degrades soil chemistry and strips nutrients, further disturbing the ecological balance and accelerating soil erosion. Furthermore, acidic conditions hinder the soil's ability to buffer changes in pH, leading to the death of plants under such unfavorable conditions.
The most evident and crucial element of the soil is its fertility. Once the soil is contaminated with chemicals and heavy metals or degraded due to human activities such as mining, its fertility depreciates and might even be lost entirely. The harmful chemicals and heavy metals in the soil decrease soil microbial and chemical activity.
The chemical elements can also denature active enzymes that revitalize healthy soil activities. Soil acidification as a consequence of pollution also leaches away essential natural minerals like magnesium and calcium.
Acidification, diminished soil fertility, and death of soil organisms in the soil can lead to changes in soil structure. This is because soil microorganisms help in breaking down organic matter that promotes soil structure regarding vitality and water penetration as well as retention.
The increase in soil salinity, salinization, is an effect of salt accumulation in the soil. Salts occur naturally in the soil. However, increased accumulations are linked to soil pollution. Irrigation and agricultural processes that discharge nitrate and phosphate deposits in the soil are the primary contributors to increasing salt levels in the soil.
Increased soil salinity makes it difficult for plants to absorb soil moisture and reduces groundwater quality. Crops and plants grown in these regions combined with other soil pollutant effects are highly poisonous and can cause severe health disorders when consumed. | https://www.eartheclipse.com/pollution/devastating-effects-of-soil-pollution.html |
Apparently not. Six climate activists have been cleared of causing criminal damage to Shell’s London headquarters in 2019, despite the judge directing the jury that they had 'no defence in law'.
All those who stood trial explained they had targeted the Shell building because the oil giant was directly contributing to the climate crisis, thereby causing serious injury and death, and argued that it was a “necessary” and “proportionate” response to the harm being caused.
One argued: “I believe if I don’t do whatever I can to protect our Earth, to protect life on this Earth, to stop the death and injury that is and will be happening, I’m committing a crime, a really serious crime, and I’m willing to break a window, to paint a message on a wall, I’m willing to break the glass on that emergency button, even if some say that’s a crime. Because this is a much bigger crime and I’m trying to stop that crime, I’m trying to protect life in the only way I feel I can.”
The judge directed jurors that even if they thought the Extinction Rebellion protesters were “morally justified”, it did not provide them with a lawful excuse to commit criminal damage, adding: “They don’t have any defence in law for the charges they face.”
But the jury of seven women and five men took seven hours and four minutes to acquit them of both charges.
Court Convicts French State for Failing to Meet Climate Targets
A Paris court has convicted the French state of failing to address the climate crisis and not keeping its promises to tackle greenhouse gas emissions. In what has been hailed as a historic ruling, the court found the state guilty of “non-respect of its engagements” aimed at combating global warming. [OGN February] | https://www.onlygoodnewsdaily.com/post/is-it-a-crime-to-commit-a-crime-to-prevent-a-greater-crime |
The methods used to evaluate environmental impacts consider different impact categories in order to describe the environmental performance of one or more processes; a minimal calculation sketch follows the list of examples below.
Examples of impact categories
- Global Warming
This phenomenon is caused by the increased concentration in the atmosphere of those gases able to absorb the infrared radiation emitted by the earth surface (greenhouse gases): the consequence is a rise in the average Earth’s temperature.
- Ozone depletion
Chlorine contained in chlorofluorocarbon (CFC) gases reacts with stratospheric ozone molecules, dissociating them and releasing molecular oxygen (Cl· + O3 → ClO + O2). Ozone depletion is a serious problem since the ozone layer plays an important role as a filter for ultraviolet radiation, which can be harmful to the skin, can partially inhibit photosynthesis in plants, and can destroy phytoplankton.
- Eutrophication
This phenomenon is caused by the disproportionate growth of aquatic plant organisms due to excessive quantities of nutrients in the ecosystem, such as nitrogen, phosphorus or sulfur. This alters the ecosystem in multiple ways, among them an excessive consumption of oxygen that results in the death of aquatic organisms such as fish.
- Acidification
It’s the effect of a pH alteration in soil or water due to pollutants released into the air, which fall back down with acid rain and dissolve into water and soil. Soils subject to acidification lose their nutrients, causing serious damage to flora and fauna.
- Photochemical oxidation
This type of impact occurs under strong sunlight, when ultraviolet radiation triggers photochemical reactions that convert nitrogen oxides (NOx) and volatile organic compounds (VOCs) into substances harmful to humans, animals and plants.
- Land use
It’s the impact generated by land use modification, from a natural environment, such as forests and ponds, to semi-natural (agriculture) or artificial (buildings, infrastructures) exploitation. | http://www.ciclodivitadellecose.it/wp1/?post_type=portfolio&p=2188 |
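To make the idea of an impact category more concrete, here is a minimal sketch (in Python, using illustrative round-number factors rather than authoritative values) of how such methods typically turn an emission inventory into a single category indicator: each emission is multiplied by a characterization factor that converts it into the category's reference unit, and the results are summed.

```python
# Minimal sketch: turning an emission inventory into one impact-category
# indicator using characterization factors (illustrative values only).

# Hypothetical characterization factors for the Global Warming category,
# in kg CO2-equivalent per kg emitted (round numbers for a 100-year horizon).
GWP_FACTORS = {
    "CO2": 1.0,
    "CH4": 28.0,
    "N2O": 265.0,
}

def impact_score(inventory, factors):
    """Sum mass * factor for every substance that has a known factor."""
    return sum(mass * factors[substance]
               for substance, mass in inventory.items()
               if substance in factors)

# Example inventory for one process, in kg emitted.
process_emissions = {"CO2": 120.0, "CH4": 0.5, "N2O": 0.02}

score = impact_score(process_emissions, GWP_FACTORS)
print(f"Global warming indicator: {score:.1f} kg CO2-eq")
# 120*1 + 0.5*28 + 0.02*265 = 139.3 kg CO2-eq
```

The same pattern applies to the other categories listed above: only the reference unit and the factor table change (for example, kg SO2-equivalent for acidification).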
A creative crisis is an insidious phenomenon that can strike anyone in a creative profession absolutely suddenly. Copywriters are particularly prone to this trouble because they often work according to the flow of orders, not the tide of inspiration. From the outside it looks like this: you sit at the desk, pointlessly staring at the flashing cursor, not knowing how to get your inspiration back. Why can't you tackle the task in front of you with the usual enthusiasm? Why don't you feel that blessed inspiration that lets you come up with small but indisputable masterpieces? Thousands of different “whys” come to mind instead of the necessary insight.
Physiology of Inspiration
Inspiration can be described as a state in which a person is more efficient and result-oriented in their work. The best thing you can feel in the process of working is a spiritual uplift that distracts you even from illness; you simply forget about your ailments. It has been noticed that brilliant discoveries and works are born from a strong emotional, creative impulse, and masterpieces seem to appear by themselves. In this state, some write poems and prose, some create beautiful pictures, and some make discoveries in science.
According to the definition of the American Psychological Association, one of the most influential associations of professional psychologists in the world, creativity as a concept has two aspects: originality and functionality. In other words, a creative idea should also work in practice. Here we come to the most important point in the question of how to find the source of inspiration: there is no point in generating ideas that simply will not be realizable in the real world, or whose implementation will certainly lead you to losses.
A creative person, capable of gushing ideas and projects, feels inspiration when there is a desire and interest to embody a new idea. The idea, then the acquisition of knowledge needed to implement it, and finally the action of translating it into reality constitute the process of creativity. And if the idea is not just embodied but also useful, it encourages the search for more and more new ideas.
Idea vs. Inspiration
The quickest way to invoke inspiration and get a charge of enthusiasm is to generate a new idea that can be applied in practice. To find inspiration, you just need to learn to see what’s around you. This may include:
- Communication with interesting people;
- Reading William Shakespeare Quotes;
- Books and magazines of creative themes;
- Beautiful movies;
- Blogs and forums for creative people;
- Contemplation of the environment;
- Art and music;
- History and religion;
- Healthy lifestyle and physical exercises;
- And a lot of absolutely individual things and activities.
How to invoke inspiration?
- Everything begins with the collection of information, its classification or processing and thinking about how you can use this information.
- Then ideas arise (which need to be fixed), and inspiration comes unnoticed.
How to keep inspiration?
- Be constantly ready to catch an idea; this readiness itself will give you a charge of inspiration. Make it a habit to search for ideas and to reflect on how you can implement them.
- Work with inspired people who are creative, energetic, positive.
- Read everything you can every day, everything that interests you.
- Communicate with nature every day, draw inspiration from it.
- Get in touch with new people, communicate with them, and listen. This communication will give you new ideas.
- Change routine routes when you get home.
- Take time for solitude and reflection – it’s very important and useful.
- Move more. The most interesting ideas come during the movement.
Inspiration Should be Supported
There is an opinion that inspiration cannot be summoned by desire alone; it appears unexpectedly as a result of strong emotions or experiences. Surely every copywriter has had the experience of suddenly having an idea and being unable to fall asleep at night, thinking about how it could be realized. At such hours there is a sense of uplift, a desire to learn more, and it seems that everything will surely happen. And, indeed, much can then be done with ease.
Acquiring knowledge and skills, observing the environment helps maintain the enthusiasm that is so valued in any business.
It often happens that inspiration does not come until you start doing something; then, in the process of working, satisfaction and a desire to keep going arrive. You could say that appetite comes with eating. So it is not right to sit and wait for inspiration to come; it's better to start doing something.
All of us, in response to gratitude (from listeners, readers, admirers), get wings for new flights. That is why it is important to share what you have done or learned with others. Whether you receive approval or face critics, you grow more confident in the worth of your creation.
The most valuable thing is when the result of inspiration brings a sense of happiness, joy and inner awareness of your success, as well as the benefit to others.
It is commonly believed that inspiration is difficult to invoke and that you must simply wait for it to visit you. In fact, inspiration can always be found; the main thing is to know where to look for it. Being in a constant state of searching for ideas makes life interesting.
It is also noticeable that a person inspired by ideas becomes kinder and more benevolent; they are purposeful and do not complain about a failed life. Many creative personalities who keep looking for opportunities to realize their ideas are among the long-lived. Therefore, each person simply needs to find the source of their creative enthusiasm or inspiration and stay in touch with it. | https://tlists.com/how-to-work-as-copywriter-not-lose-inspiration/ |
In this article, we discuss the available sources of information for cancer patients and families, and how to go about obtaining the best information for cancer treatment.
Cancer care requires complex decision-making. This is ideally made with the complete involvement of a well-informed patient and their family. Best treatment outcomes are obtained with knowledgeable patients who understand and actively participate in the treatment process. Access to good quality information is therefore especially important in the oncology setting.
A sensible approach is to get the best out of each of the following sources and filter out irrelevant or harmful information.
- Primary treating doctor/oncologist: Your doctor is the professional medical expert, and there is no substitute for their knowledge and experience. The doctor provides crucial information about the diagnosis, prognosis (chances of cure, prospects of recovery), treatment options, pros and cons of each option, and financial costs of treatment. They are also responsible for developing a comprehensive treatment plan in joint consultation with the patient and family.
The doctor, however, may not be able to fill in all the gaps in the patient’s understanding. Interaction with the doctor is formal and is limited to the clinic/hospital setting. Since a lot of information is shared in a short time, many people find it difficult to understand and retain all that is being conveyed (especially if the doctor tends to use a lot of medical jargon!). It may take multiple consultations to grasp the complete picture – but there should be no hesitation on the part of the patient to ask as many questions as required for a thorough understanding. The doctor can also help guide you to suitable support groups, or suggest trusted internet resources.
Sometimes a second opinion from another expert in the same field can help fill in gaps in understanding, and provide a clearer viewpoint.
- Family and friends: They can be an excellent source of mental and emotional support. They can also be in a better position to understand the discussion with the doctor and help convey the required information to the patient. They could also be active in seeking a second opinion or do other forms of research on behalf of the patient. They are highly motivated in seeking the best for their loved ones and can spend a lot of time with them.
It is extremely important to remember that despite the best of intentions, family and friends are not medical experts, and their role is not to supplant or unduly influence clinical decision-making.
- Other patients: Cancer Survivors are in a unique position of having first-hand knowledge of the challenges in overcoming this disease. They can be a great source of motivation and inspiration, being living examples of winners in the fight against cancer. They can also offer useful personal tips from their experience, especially for non-medical issues such as tailored recipes, local shopping resources (custom garments, prostheses, wigs, etc), introduction to self-help groups, and the like.
Again, keep in mind that they are NOT medical experts. While their personal experiences can help guide another person, there are several factors that determine the treatment and the disease course of each cancer patient. Each individual patient is different, and every cancer can behave differently. If a cancer survivor is like an experienced airline passenger who can offer tips and tricks for a pleasant journey, the doctor is the pilot delivering you to the destination!
- Groups and communities (offline and online): Self-help groups formed by cancer survivors help newly diagnosed and recovering patients understand and come to terms with their cancer. Treatment costs in different centers can be compared, and feedback regarding overall care can be shared. Some groups leverage their numbers to get bulk discounts for products useful to cancer patients and source donations for helping the underserved. Discussion forums can be a good place to listen to others’ experiences, and get a sense of what to expect in their own journey.
Before getting involved, it is important to verify the authenticity of any group. There can be unscrupulous elements out to take advantage of vulnerable cancer patients. Be especially skeptical about claims of “miracle cures” or “treatment without side effects” – families in mental and emotional turmoil easily fall prey to false hopes. If possible, ask your doctor to recommend groups in the region you are in.
- Internet: The internet is a vast source of information. Used well, it provides useful, accurate and up-to-date information, which can be accessed at one’s convenience. Websites, blogs (like this one!), pictures, interviews, podcasts, videos – the choice is endless and each person can choose the medium they are most comfortable with.
A search on Google for “cancer treatment” yielded 81,10,00,000 results! There is no restriction and no regulation on posting to the internet. The challenge is to identify genuine sources of information. One should learn how to filter out misleading (but popular) information, as well as advertisements.
My next post is a compilation of a few useful and trustworthy internet resources for cancer patients – https://drsurajmanjunath.com/top-five-cancer-treatment-resources-on-the-internet/
To summarize, knowledge is key to a successful outcome in cancer care. Each individual should be able to access the best available information from trusted and genuine sources. | https://drsurajmanjunath.com/guide-to-sourcing-best-cancer-treatment-information/ |
Having seen at least 10 people dedicating their “I” to Inspiration, I decided to share how I find my ideas for writing. I have always liked this advice from Stephen King:
“Amateurs sit and wait for inspiration, the rest of us just get up and go to work.”
To me, inspiration never comes when I look for it. It is only after I have started working that it shows up, often unexpectedly, in the middle of a sentence being typed or a page flipped over, like a nasty mosquito buzzing and circling, refusing to go away. The brain has warmed up and is now operating smoothly, so that everything it senses, it carefully examines and connects this new input with many others already sorted and stored. It is such connection that gives sparks.
It is a bit like clashing two things together to create a loud “Bang!” It does not have to be as dramatic as smashing two atoms in an accelerator, but the more different the two things are from each other, the better. A flower and a fork, a book and a boot, a shoe and a submarine, a smile and a tomato, a tornado or even a torpedo — whatever you can think of. What happens when you put the two together? Even more interestingly, what happens when you desperately try to keep them apart, and why would you want to do that? Who could stop you?
In the beginning this “connection game” will be hard. The brain simply goes blank — Kapaw! Full stop! — and needs to be switched on and warmed up all over again. However, once it becomes used to this process, it will automatically attempt all sorts of connections with or without anyone’s consent. The fun thus begins.
As ideas come in all the time, I tend to grab them as soon as possible. As I am on the computer most of the time, it helps to have a text file open and immediately record whatever ideas my brain generates. Because it is a text file, there is neither font nor format to distract me. It is just the raw information being nailed there and waiting for me to re-visit it. Sometimes such a text file can stay in the background for weeks and even months, with all sorts of words, phrases and half-formed sentences scattered all over the place. (You can also add images if you are visual-oriented.) Every visit connects more of these “dots” so that new stories come flooding out.
Keeping such an exercise as a habit is crucial. I have learned to keep a journal with me all the time so that no idea can slip past the machine that is my brain. Obviously it is hard to write things down while doing dishes or taking a shower, but whether I am walking, vacuuming, eating, reading or simply refusing to get out of bed on a Saturday morning, a journal nearby comes in handy. It is the best friend any writer can and should have.
Give it a try, please, and soon you will find that inspiration is everywhere — fresh ideas will eventually become so intrusive and demanding that you will be flat out writing, just to catch up to them. You can trust me now and blame me later. | https://christinesunflower.com/2015/04/11/atozchallenge-i-is-for-inspiration/ |
INSPIRATION AND ENCOURAGEMENT can come from many places. For some, they’re found in a lifetime of work from a literary giant. For others, they emerge, unexpectedly, from an indomitable 16-year-old prophet with preternatural vision and determination. And many of us are heartened, and challenged to see differently, by the inspired imagination of artists—from a poet who urges us to “discover the truth of wonder and rejoice in the silent voice of God” (p. 39) to creatives around the world who transform the tools of trauma into affirmations of healing, redemption, and resistance.
But art is not passive. For art to be meaningful, to be “successful,” requires engagement—and a willingness to allow our most closely held assumptions, norms, and habits of thinking and being to be shattered. Truly (and vulnerably) engaging with art requires moral courage, but it promises unexpected rewards as we’re opened to new possibilities—and even new sources of inspiration in a world that sorely needs them.
Got something to say about what you're reading? We value your feedback! | https://sojo.net/magazine/december-2019/editor-december-2019 |
When creative block strikes or you are in active idea generation mode trying to brainstorm ideas and find inspiration, there are the usual places you look and turn to, to get inspired.
But if you are really stuck or just want a fresh resource to draw ideas from, it might be worth looking at trying some of these slightly more unusual places to find inspiration.
As a creative, I think it is also important to always take time to observe and keep your eyes open to being inspired by anything and everything around you. There is beauty everywhere and in everything, even the simple, everyday, common and mundane.
Always keep a notebook or note taking app handy, because inspiration and ideas can strike at any time and anywhere, as long as you are always open to it.
So here are five places you may not have considered, or may need a reminder to explore, to find creative inspiration.
1. Your Own Old Work.
You might feel ashamed of some of your old work because your skills have developed and you have improved over time. But old work can actually re-inspire new ideas. It’s not about the quality of the work, but the concept behind it, the way you went about executing it, or maybe little details that you considered at the time. Looking back at old work is seeing it with a fresh set of more experienced eyes. You might see something you didn’t pick up on originally that could spark a new idea. Or you might want to revisit an old concept and do it better. It may even just be a little detail that you included that gives you fresh ideas for details you could add to a current project. Don’t dismiss old work as something of the past; it still came from your mind, and you can build upon those old ideas with the wiser and more experienced mind you have today. After all, it’s not stealing if you steal from yourself, so looking at your old work means you can rip yourself off as much as you like.
And old work doesn’t have to be just work related to your business or hobby; it can be anything you have created, put together, curated or designed. I think sometimes we get so caught up in our present that it becomes too familiar and everyday, and we begin to fail to see some of the beauty and creativity. So when I look back at things from the past, including old outfits, old studio layouts, old home decoration and old projects, I start to see them the way an outsider would have seen them, and I feel re-inspired.
2. Observing Everyday Life and People.
Often we turn for inspiration to other people’s work, creations and concepts, looking at man-made objects as well as nature. But inspiration doesn’t always need to come from designed, tangible things; it can also come from observing activity. Observe the intangible. Take a seat in a busy place or jump on public transport. Listen to conversations, watch interactions, observe processes and routines. We often go through life minding our own business, getting on with what we need to do, only briefly aware of things going on around us. But if you take the time to ‘people watch’, you’d be amazed at the little things you notice that would have been easily overlooked, the stories you pick up on and the characters you come across. Observe the way things are done and the funny little quirks as you interact with people and businesses through your day. This is where comedians find a lot of their inspiration, and it can inspire creative and business work too.
3. The Past.
There is so much to be learned and gleaned from the past. It is an immense resource. Don’t limit your inspiration to the things of today and the present world around you. There are so many avenues you can explore when researching the past and so many time periods you can draw from. Look into something that fascinates you and learn more about it. I love trams, and in my city our tram lines were all taken away in the 60s; researching the old trams has been really inspiring and fascinating, from the stories of how they used to operate, the old posters, the tram rolls and the old advertisements inside the trams, to the style and design of the trams. There is always so much to learn and be inspired by looking at the past.
4. The Written Word.
Creative inspiration doesn’t need to come from visual sources; it can also come from the written word. I often find this is where the deepest, most meaningful ideas really come from. Our minds are always spinning with thoughts, to-dos, ideas, reflections, analysis and memories. There is so much going on in there that getting thoughts out onto paper not only helps clear your mind and gives you a chance to brain dump, but also helps you process those thoughts better and gives your mind more space to explore them rather than just hang onto them. And it doesn’t need to be just creative or inspiring thoughts: anything going on in your head that you can get out on paper can be a useful source to draw ideas from.
One way of doing this is a method called ‘Morning Pages’ where first thing or as part of your morning routine, you get out a journal and braindump all your thoughts, just a stream of consciousness. It doesn’t need to flow or be on a certain topic, you just let all those thoughts swarming in your head get out onto paper. Then you could look back at these pages and pages you have written in the past, and little thoughts and concepts might pop out at you and spark ideas.
If this doesn’t suit you, any kind of journaling, thought dumping, or note scribbling can be beneficial.
It is also a good idea to have one place that you collect thoughts and notes, where you can just jot down things as you think of them, making a valuable resource to return to.
5. Look outside Your Own Industry.
We often compare ourselves to others in our own industry and fields of interest and draw inspiration from what others in our industry are doing. But the best inspiration comes from looking at completely unrelated industries. Expand your areas of interest and hobbies, learn about things that have nothing to do with your own line of work and look out for innovative ideas or different ways of doing things. It’ll inspire you to bring fresh ideas to your own industry, and even challenge you to break the norm. So don’t get too caught up in the bubble of your own field. If you don’t know where to start, think about a topic that interests you or an area of your life you would like to improve (e.g. nutrition, fitness, relationships, housekeeping, gardening, sport, parenting, mindfulness, travel) and then start researching it. Find a niche in that topic that you feel drawn to and dive into that world. See what the people who teach and share about the topic do, and how they share and present information. How do they grow a community? What clever ways do they market, how do they brand and present their work? What feelings and moods do they create?
I hope this has given you some ideas of where to look and also ways to incorporate these methods of finding inspiration into your life so that you can expand your inspiration sources and have richer more unique ideas. | https://www.tabithaemma.com/5-unexpected-places-to-find-creative-inspiration/ |
Finally, Chicago Med picks up the pace. Sadly, we’re only one episode away from the season finale.
The episode still left a lot to be desired but at least all these new supporting characters make way for some storytelling that’s more gripping and most importantly, outside of the hospital.
Natalie still had a pregnant patient that she was conflicted about but baby steps, you know?
Most of the medical cases felt like fillers to the real story — someone slipped Dr. Charles a file that alluded to Sarah Reese’s father being a murderer responsible for the many missing girls on various campuses. Didn’t see that coming? Neither did I.
But it allows us to hate Robert even more which I’m not opposed to and it also aligns with his sociopath diagnosis. If I had cared enough to give it a second thought, this would have been the logical next step.
Of course, Dr. Charles doesn’t buy into it at first, but part of him sees that it’s possible, so when the opportunity to snoop around his house presents itself, he takes it.
Inside, he finds a picture with one of the missing girls which honestly, isn’t that weird. I know plenty of teachers and professors with ties to their students. The way he’s holding her seems a bit suggestive but it didn’t worry me up until I saw how charming he was with all the ladies at the hospital, including his donor’s wife. That’s another characteristic of a textbook socio!
The creepiest part is that he has a daughter around the same age as all these girls so I wonder if maybe he has some fixation on Sarah that he’s manifested on other unsuspecting victims.
I also think that a socio would know better than to befriend a psychologist since clearly, Dr. Charles can read right through him. Does he want to get caught?
Is Dr. Charles getting in way over his head? Will this strain his relationship with Sarah?
The second major story involves Emily, Choi’s delinquent sister.
She redeemed herself for a short time during the mass shooting episode when she stepped up as a volunteer that provided aid to those in need. When this episode kicks off, she’s at some luxurious party, strutting around in her Louboutins and asking Ethan to come save someone that overdosed.
Immediately, April questions what Emily was doing at the party and connects the missing Percocet that was stolen during the chaos of the shooting to his sister. This angers Choi, unsurprisingly, as he’s always really defensive and doesn’t think April gets it.
But she has a point — how do you afford the red bottom shoes as a volunteer?
The tox screen comes back and Choi’s worst fears are proven true as the patient had a mixture of heroin and Percocet in his system.
He confronts Emily who immediately turns it around on April. A classic sign that she’s trying to get the attention off of herself. Not to mention she was a bit too interested in the man’s condition if he was really just an “acquaintance.” That is unless she was the one who sold it to him.
When Jeremy admitted that he stole the Percocet from his father’s medicine cabinet, Ethan felt terrible about accusing Emily and claimed it was out of habit. April didn’t stop him from going to “right his wrongs” but she did get to the bottom of it — Emily was his drug dealer.
Back at the house, Choi found a note that Emily had “gone to Vegas,” which again, raises the question of how does she have money for any of this?
I’d feel bad for Emily but think about what kind of person she has to be if the only thing motivating her to be good is her brother’s acceptance. She also took advantage of a swamped hospital to steal drugs with a street value. That’s low.
Dr. Rhodes’ day went from exceptional to pretty bleak within a matter of hours. He’s definitely formed a sort of god complex lately, believing that no surgery, no matter how risky, is impossible. And honestly, that’s just not how life works.
Natalie’s pregnant patient had a failing heart and needed an operation. However, the best course of action was to terminate the baby before putting her under, otherwise, both of them might not have made it.
The woman refused to terminate because it was the last piece of her late husband which stung. Instead of convincing her to make the best decision, Rhodes offered to do the procedure and at first, it seemed to take.
However, one complication led to the next and before you knew it their worst fears had come true — Rhodes let both mom and baby die.
It’s heartbreaking to watch because you know these doctors make the calls they genuinely believe in and sometimes, those calls just don’t manifest the way they hope.
The death takes a toll on Rhodes’ psyche and defeated, he requests to be removed from the historical twin surgery that Latham and his team have been prepping for.
When Latham questions his decision, Rhodes says something along the lines of, “I don’t think I’ll be able to make the right call.”
Hey, at least he’s being honest. But truthfully, is there ever a right call? Without risk there’s no reward…isn’t that what he always said?
Dr. Manning assumes she’s going to have a terrible day when she makes a risky call for her flu-ridden patient but it ends up being the one that saves his life.
It’s a juxtaposition to what Rhodes is dealing with; both doctors made risky, unconventional decisions and one of them paid off while the other didn’t. The point is that you never know what will stick and if you stop trying, you might as well hang up your lab coat.
This was also the first time that Nat and Will arrived at a mutual decision even though they disagreed on the best course of action. Is it just me or are they actually making progress?
Apparently, Will thinks they’re making a lot more progress than they really are because at the end of the episode, he stops by Jay’s place to grab his mother’s wedding ring.
Yep, shortly after they broke up and he slept with some other hospital employee to numb the pain, Will is jumping into a marriage proposal.
All in a day at Gaffney, people.
Thoughts on this week’s Chicago Med?
Will Rhodes be removed from the procedure? Will Charles prove Robert is a murderer? Will Choi fix things with Emily? Will Will propose to Nat?
Tune in for the season finale next Tuesday on NBC! (And pray for a renewal!)
Chicago Med Review – Will Dr. Choi Ever Learn? (5×11)
Chicago Med packed in some heavy storylines into one gut-punching episode that will have plenty of repercussions in the weeks to follow.
An electrical fire at O’Hare filled the ED with a few burn victims including one man that was so far gone, they weren’t even able to identify him.
Based on the seat number and the medical convention pass in his pocket, they deduced that it was their Head of Neurosurgery, Sam Abrams.
It was a blow to everyone on the staff as the realization that Sam would never walk or operate again set in.
His wife, Michelle, was a mess and after some time, decided to pull the plug on him because “Sam wouldn’t want to live like this.”
It was a fair assessment as the odds weren’t in his favor. Crockett and Choi believed he’d be mentally okay once his brain swelling went down, but Michelle knew that this wasn’t the quality of life an independent and successful man like Sam would have wanted.
And, of course, Choi didn’t respect Michelle’s wishes at all.
Choi’s always had an arrogance about him and his ego has pushed him into corners that don’t paint him in the greatest of light, but assuming that Michelle didn’t know what she was talking about because she was his wife of three months was a new low.
How did Choi think he had any authority to speak on Sam’s behalf if he mistook Michelle for Sam’s daughter? Clearly, they aren’t as close as Choi let on, and thus Choi wasn’t equipped to be making any kind of calls for Sam.
In fact, it’s way out of line for a doctor to try to push his own beliefs onto a patient or the patient’s spouse. And that’s what they were — Choi’s beliefs of what he would have wanted versus what was in Sam’s best interest.
Choi went to great lengths to stop Michelle, who he even called money-hungry, but even the board didn’t agree with him.
And neither did Crockett, who seems to be the only doctor in Gaffney who ever practices what he preaches and stays in his own damn lane.
Dr. Choi did do one thing right and that was saving the unknown patient after realizing that Sam is very much alive.
Sure, he was a little burnt by the Hawaiian sun, but not nearly as bad as the man they thought was him lying on the table getting the plug pulled.
Everyone’s reaction to seeing Sam was the equivalent of seeing a man get raised from the dead. Technically, he was, he just didn’t know it.
And yet, Dr. Choi still couldn’t let it go.
After Michelle left the hospital, he approached Sam to inform him that Michelle was going to remove him from life support, and proposed that they wait until his daughter could be there to make a ruling.
Sam echoed the same exact thoughts Michelle did — he didn’t want to live like this and he never wanted his daughter to see him this way or bear that responsibility — meaning that Michelle knew Sam better than anyone.
When Choi suggested that Michelle was a gold-digger, Sam pointed out that she’s the one with money in the relationship after inventing the formula for a widely popular protein shake. Cha-ching.
See what happens when we make unwarranted assumptions about a person based on our own personal bias.
Choi should spend more time worrying about his relationship and less about others.
Because last time I checked, he proposed to a woman who cheated on him with the very doctor he’s been clashing with and judging without really knowing.
Passing judgment is such an ugly look on Choi.
He apologized to Crockett in the end, but a friendship between Choi and Crockett is even more dangerous than them being frenemies.
Crockett promised to keep his and April’s kiss a secret, but will he be able to lie to a colleague for long?
The longer April keeps this from Choi, the worse it’s going to be for their relationship when he finds out because the fact that she’s hiding it means that it meant more than just an in the moment kiss.
April is trying to ease her guilt and smooth things over by marrying Choi and proposing IVF, but those are just attempts at covering up the truth.
Another employee at Gaffney had an eventful day and proved that she’s a terrible person who probably deserves everything that’s coming for her.
Gwen Garrett, the Chief Operating Officer, left her baby in the car when she went to work.
If it wasn’t for Natalie finding the baby (and knocking on the window as if an infant would somehow open the door and let her in), the child would have died.
Then, things would have been even worse for her.
Sharon Goodwin took mercy on Gwen, despite their very obvious differences when it comes to patient care. She understood that Gwen’s going through a lot with her divorce and custody battle and knew that it was an honest mistake that Gwen deeply regretted.
It’s the kind of compassion that mothers should show each other at a time when judgment isn’t necessary and won’t change anything.
And here I thought we were making some kind of progress with Gwen; that she was being humanized from the constant profit-turning machine that we’ve known her to be.
It’s easy to make calls to protect the hospital when you’re not on the receiving end, but now that Gwen was, it was obvious how much help and support she needed.
She also wanted Natalie and Sharon to keep the secret from her husband and lie about what happened, which isn’t only unethical but it’s just wrong.
Yes, telling the husband the truth would give him a lot of pull in the custody battle to paint Gwen as a bad mother, but owning up to her mistakes was the right thing to do.
Sharon thought that Gwen would understand that. Instead, Gwen tried to “return the favor” for Sharon’s silence by pushing through Halstead’s proposal for safe injection sites at Gaffney.
Eventually, Natalie came clean to the husband who did not have kind words for Gwen and in return, Gwen pulled the plug on the proposal that would help thousands of drug users with a safe space to wean off drugs.
It was an eye for an eye with her.
She never thanked Natalie for saving her son in the first place or the hospital for showing her so much sympathy when they could have dialed up DCFS.
Gwen proved that she’s a monster through-and-through who didn’t learn from her mistakes. She doesn’t care about other people, she only cares about what’s in it for her.
I hate to say it, but I don’t feel bad for her one bit. I do feel bad about the patients who will suffer at the hands of her wrath.
Speaking of Halstead’s safe injection site, it’s a cause near and dear to his heart following the death of his patient on Chicago Med Season 5 Episode 10 who died of addiction after being hooked on painkillers he prescribed her.
It’s great that Halstead is trying to right his wrongs and take responsibility, but with the proposal shut down, he has decided to open his clinic at an unsanctioned site, which Dr. Charles informs him is illegal and could cost him his medical license.
Halstead agrees seemingly admitting defeat, but if we believed that, we wouldn’t truly know Halstead.
Instead, he went to the site and struck up a deal with his partner to move forward with the injection site.
There’s trouble ahead for Halstead, but what else is new?
Amidst all of that action, there was an IVF switch-up storyline that found a couple from the plane crash delivering a baby that wasn’t theirs.
The baby needed a lung transplant, but only one of the parents was a donor match, which brought up the revelation that they weren’t the biological parents.
The real parents were finally tracked down (and were pregnant with the other couple’s embryo) and one of them was also a match, so the “mom” who carried the baby agreed to donate to save the baby’s life.
Despite the trauma that they endured due to the switch-up, everyone bonded and they came out of the situation with new family members. Sometimes, there is a happy ending.
What did you think of Chicago Med?
Are you happy Sam is still alive?
What did you think of Dr. Choi? Should April tell him the truth?
Is Gwen the worst or is Gwen the worst?
And be sure to watch all episodes of Chicago Med season 5 right now!
Chicago Med Midseason Premiere Review – The Drama Between Will and Nat Continues (5×10)
New year, same old ED.
Chicago Med’s second half of season 5 picked up roughly six weeks following the dramatic events of the midseason finale.
Dr. Charles was grieving the loss of CiCi, who passed away off-screen, as we assumed. April was reeling with guilt following her kiss with Crockett and debating whether or not to tell Ethan when he came home from deployment, and Natalie and Will’s friendship remained fractured after she’d confessed her feelings for him after getting her memories back and getting rejected by him.
And from what just transpired between Will and Natalie regarding their patients, these two are not going to be smoothing things over anytime soon. Or ever.
We can all agree that Will and Nat are toxic for each other as a couple, but they don’t even make good friends or partners. They have two clashing personalities — they believe they’re always right and never see eye-to-eye.
Patient care is at the center of their drama now. Will once again asked Natalie to bend the rules for a former patient, Lynne, who came back and blamed him for her oxy addiction.
She was treated by Will four years ago for a knee injury, which is when she began abusing drugs.
Will felt an enormous pang of guilt as Lynne made it very clear that she blamed him for what transpired, but Will should have known better. Despite writing out the prescription, it wasn’t his fault that she’d abused the drugs.
Moreover, the moment failed to really underline the issue of doctors “handing out prescription painkillers like candy.”
The statement was made, but it got lost in the hoopla of Lynne’s case and Will and Nat’s disagreement on how it should be handled.
Will wanted so badly to believe that this was a one-off situation with Lynne because it would ease his conscience, but we know addiction is a lifelong struggle. She was a patient four years ago and chances are, she has been using since then.
The rapid detox was a dangerous treatment method, but in their dire situation, one that felt necessary to help keep Lynne with her son.
Though, it was obvious even if Lynne was weaned off the addiction that she wasn’t in the right state of mind to take her son home and be his primary caregiver.
Lynne’s intentions may have been in the right place but it takes a lot of willpower to quit a bad habit. And sometimes, as we saw in this situation, loving your son isn’t motivation enough.
Did that mean Lynne should have lost her child? Absolutely not. She needed a helping hand and some guidance to set her on the right path.
It’s beyond frustrating that in these situations it’s either you keep your addiction a secret to keep a child or get help and risk losing your child. There’s truly no winning.
However, when you take a step back and take the personal out of it, Natalie did the right thing. She spoke to the son and realized this wasn’t a one-off situation, as he carried Narcan in his backpack and had administered it before to help revive his mother.
This was a 6-year-old boy who was taking care of a drug addict because he didn’t want to lose his mother. The realization that Child Protective Services had to get involved was heartbreaking and likely not a choice Natalie wanted to make. But that’s the thing — she didn’t have a choice.
Her priority was to get the boy out of a toxic environment, which she did by calling CPS.
Now, Goodwin and Med should have handled Lynne better. They shouldn’t have approached her mid-detox when she’s at her most vulnerable to tell her they’re taking away the only thing that matters to her.
And there’s absolutely no way she should have been considered in the right state of mind to check herself out.
But that’s exactly what happened, which lead to the deadly overdose.
In a way, the drugs won out for Lynne in the end, which again, underlines the power they had over her that Will was blinded to because of his guilt.
Will is now going to blame Natalie for his patient's death, when it isn't that simple. It's not black and white, and what really needs to happen is for Will and Natalie to stop blaming each other for making the medical calls they each see fit. It's not personal — Natalie wasn't trying to get back at Will for rejecting her love. Let's not get it confused, though; I know we will.
Then we have April and Crockett whose relationship is tense because of that one little kiss on the finale.
It’s been six weeks and April is still obsessing over it. The guilt is eating her up inside, which means that Ethan will eventually find out what happened.
It won’t be April who tells him either as she’s seemingly made up her mind about keeping it a secret, especially now that Ethan has proposed and accepted that she might never have a child.
Crockett is going to be the one to blow the whistle on it after learning of April and Ethan’s engagement.
I haven’t been able to put my finger on Crockett or whether or not he’s a good guy with good intentions, but nothing has made me dislike him either aside from his pursual of April while knowing she’s with Ethan.
All I know is that trouble is brewing for April and Ethan once the truth comes to light. If April had just come clean, Ethan might have understood that the kiss was innocent and happened in a moment of weakness and vulnerability, but by keeping it a secret, April is proving that it means a lot more.
Other Med Musings
- Noah Sexton is alright, you guys. And who knows what happened to the girl he was helping or the gang that beat him up. I guess we’re just ignoring it.
- Dr. Charles’ grief resulted in a beautiful and therapeutic karaoke piece. I love that despite being the “all-knowing” psychologist, he realizes when he himself needs help and takes advice.
- Ben is alive, thriving, and cancer-free! Woo! The same cannot be said for Maggie who is starting her radiation, but hopefully, they can both celebrate victory soon enough.
What did you think of Chicago Med’s return?
Are you over the Will and Natalie drama?
Was April right for keeping the kiss a secret from Ethan?
Chicago Med Fall Finale Review – Natalie Remembers, [SPOILER] Kiss (5×09)
Plenty of character’s found themselves uncertain of what their future held on the fall finale of Chicago Med.
Throughout the hour, Natalie's memories slowly began coming back to her until she realized that she's been a bitch to Will Halstead this whole time.
She didn’t go to his car to tell him about her engagement to Philip, she went there to tell him she loved him.
But you know how timing is everything? Well, her timing was slightly off.
Halstead wasn’t interested in hearing anything she had to say because he didn’t care. When she told him to “get out of her life,” Halstead did just that. And what did he find? Well, what we’ve all been saying all along: they’re better off without each other.
In a surprising twist, Halstead told Natalie that he moved on, and once he did, he realized the toxic nature of their relationship. All they do is hurt each other and that’s not what a relationship should be built upon.
When Natalie reiterated that she loved him, Will hit her with “I’m free now,” and boy, that one had to sting.
Natalie went from having two men pining over her to being completely alone.
And you know what, she should embrace that. Being alone and getting some clarity might do well for her.
I’m seriously impressed with Dr. Halstead’s realizations since, for a moment, it felt like his obsession with Nat would consume him.
It’ll be interesting to see her attempt to win him back now instead of the other way around.
Phillip’s storyline got some kind of explanation as baby Sophie was brought into the hospital and abandoned by her father.
Despite saying he didn’t have any family in Chicago, it turned out that he had a sister and brother-in-law who showed up for their niece.
And that’s when they offered up the explanation we’ve been looking for — Philip is a pathological liar.
I knew it.
Philip only cared for Sophie because he was getting something out of it, in this case, Natalie, but once she broke up with him, he no longer cared about his own child.
Worst of all, he requested a DNR, which meant that the hospital wouldn’t be able to resuscitate the infant during surgery if anything went wrong.
Thankfully, Dr. Latham agreed to take on the surgery and it went off without a hitch.
Baby Sophie will live to see another day, likely with her aunt and uncle, though I’d love for her to get adopted by Natalie.
Is this the end of the Philip storyline? I'm glad they acknowledged that there was something fundamentally wrong with him, but I expected something more dramatic.
Natalie didn’t even process the fact that she’d been dating someone so messed up because she was too focused on getting her memories back.
CeCe is dying and Dr. Charles threw her a “bon voyage” party as a last celebration with all of her loved ones.
It’s a tough pill to swallow as we saw with Dr. Charles’ breakdown. He’s always so rationale and put together, but seeing him unable to control his emotions proved that he’s human, too.
Then, he got it together again and was strong for his wife.
I’m going to assume that when the show returns in 2020, CeCe will have passed.
However, in a surprising turn of events, Ben is doing significantly better.
At one point, everyone lost all hope and assumed that he had a few hours left to live. His counts were low and he was in respiratory failure.
And then suddenly, he made an impressive recovery, one Dr. Lanik credits to stopping the chemo, which let his immune system reboot and fight the infection.
Ben will seemingly live to see another day, which means that maybe Maggie will get her happy ending after all.
Lord knows she deserves it. She hasn’t left Ben’s side this whole time despite being sick herself.
And then, we have the mess that is Ethan and April’s relationship.
It’s frustrating that April wouldn’t just be honest with Ethan about her prognosis. It’s no secret that April has struggled to get pregnant before, so the early onset menopause wasn’t a major shock.
It’s because of April’s struggles before that Ethan is coming off completely insensitive.
Both of them passively agreed that maybe it would be cool to have a child together, and all of a sudden, he has expectant-father books.
He’s too pushy about it, which explains why April kept her diagnosis to herself. She didn’t want to let him down.
She was also processing what it meant for her. Did she want to spend thousands on IVF to have a child? Was a child just not in the cards for her? Was she okay with that?
April has a lot to figure out in terms of her own future, and instead of facing it with Ethan, she ran away from it and found comfort in Crockett’s arms.
You guys, that kiss left me shook.
They have a spark, I’m not going to lie, but Ethan deserved better than to get cheated on the moment he was deployed.
If April is so unsure of what she wants, she should have broken up with him for the time being.
Instead, she cheated and from the upcoming episode promo, we see that Ethan is going to come back and propose to her.
When she finally does tell him, he’s never going to forgive her.
If the show really wants to add some soapy drama to this love triangle, April should get pregnant with Crockett’s baby after a one-night-stand.
Now that would be something.
It also didn’t help that April and Ethan were on two opposing sides when it came to their patient’s treatment.
Ethan’s patient’s death was unfortunate, but it didn’t negate the fact that they’d already signed off on giving a kidney to another patient in need.
I could understand calling off the kidney transplant if they hadn’t cut the kidney out yet, but since it was already out of the mother’s body and on ice, it only made sense to give it to the rightful recipient.
Ethan’s ethics haven’t been lining up with April’s for a while now, which only proves that they’re not as compatible as they thought.
Also, wasn’t it strange that there was absolutely no mention of Noah, his situation, or what went down with Jacinta?
April was so preoccupied with her Ethan/Crockett/pregnancy drama that she completely forgot about her brother.
From what I expected out of this finale, I was slightly underwhelmed, but there’s enough here to give us something to work with when the season picks back up in 2020.
What did you think?
Will April stay with Choi?
Are Natalie and Will done? | https://www.craveyoutv.com/chicago-med-crisis-of-confidence-3x19/ |
When inspiration is hard to come by, you usually try to look for it everywhere. The problem is, it tends not to come the harder you go searching for it. To find artistic inspiration that will help you continue with your work, you can try these unusual ways to get there.
1. Look Around
Your sources could be the people around you, the places surrounding your home, or the very room you are in. Look around and pick something that can help you get started like a book on your shelf or a pot of flowers in your garden.
2. Go Somewhere Unfamiliar
Go somewhere new and try to get lost as a way of getting into an adventure. It would help you get out of your comfort zone and be more attuned with your instincts and sense of wonder. Try to explore around and talk to other people whom you don’t know.
3. Look at Old Works
Your early works can help you see your progress and reignite ideas you may have forgotten. Look at your old portfolio or reread some of the things you have written in the past. The raw emotions they stir up can trigger your creativity.
4. Make a Mess
Try new things by doing something messy. Throw in random shapes and colors just for fun. Sketch randomly and don't worry about doing it perfectly, as long as you keep doing it. Take the pressure off yourself by creating something unintentional. All it takes is one random combination to ignite something in you.
5. Through Gaming
Gaming is a great way to de-stress your mind and let go of stressful thoughts. As seen on 188spesial, an online gambling site based in Indonesia, gaming can also take you into a new world where you can explore different game graphics, which might help spark your creativity. Games can also reignite your passion because you get to practice strategy and decision-making skills to win.
6. Sleep
There is hardly anything that sleep won't help, so sleep it off when inspiration is slow during the day. Take a nap and re-energize your whole body and mind.
7. Throw Things Out
Work in a clutter-free environment so you don’t get easily distracted with unnecessary things. Clean and organize as a way to refocus yourself.
8. Cook Something
Good food is good for the heart, so try cooking your comfort food or look for a recipe you've never followed before. Enjoy your hearty meal and satisfy your cravings before starting again.
You don’t have to work so hard to get inspired. What you need is to look for a way to trigger your artistic side and wait for it to come. Take some load off your shoulders and let the inspiration come to you by being more open to new ideas. Explore and get out of your comfort zone. Give yourself a break once in a while and surround yourself with things that motivate you. | https://bbmlive.com/8-unusual-ways-to-find-artistic-inspiration/ |
It is 1991 when Ethan Jones finally wins the role of his dreams in an upcoming, big screen movie. With the envelope holding the script clutched in his hand, he arrives at his California apartment where he can hardly wait to tell his girlfriend the exciting news. But when he finds the door unexpectedly ajar, he has no idea that in just a few seconds, the life he has fought so hard to obtain will be shattered. Eight years earlier, Ethan is attending university in Ottawa, Canada. One evening after seriously contemplating suicide, he finds his way into a club where he meets Mila Monahan, a beautiful acting student who saves him from himself. After he watches Mila rehearse a university play, Ethan catches the acting bug and decides to pursue his own creative passions, causing a collision with his more secure ideals. But when Mila suddenly disappears, Ethan vows he will never stop chasing the dream she inspired in him, believing in a world entirely different from the one he is living in. The Actor is a gripping tale of a young man's unforgettable journey of self-discovery in overcoming the trauma of a personal tragedy. It is a story of love, hardship, persistence and overwhelming joy where The Actor learns he can portray anything he can imagine.
Product Details
ISBN-13: 9781938908668
Publisher: iUniverse, Incorporated
Publication date: 05/12/2014
Pages: 352
Sales rank: 1,221,333
Product dimensions: 5.90(w) x 8.90(h) x 0.60(d)
Read an Excerpt
The Actor
By Douglas Gardham
iUniverse, Inc. Copyright © 2013 Douglas Gardham
All rights reserved.
ISBN: 978-1-4759-7772-1
CHAPTER 1
Real Time November 1983—Carleton University in Ottawa, Canada
Ethan realized he'd made a grave mistake. It wasn't the sort of mistake you go back and erase, like spelling a word wrong. There was much more at stake than that, like the rest of his life.
Attending university had been as much of an assumption in his life as marriage or buying a car might be to others. It wasn't a question of whether he wanted to go or could afford it. Postsecondary education was simply expected, a part of his destiny. His parents didn't see education as just important; it was a requirement for life. They knew the privileges of education and had fought too hard without them. They weren't about to let their only child suffer through those same hardships.
His parents were distraught when school became secondary to a rock band he'd formed with a few of his buddies. He'd been groomed for adulthood and a successful career since preschool and was expected to know better. University required good grades that weren't achieved by staying out all night strumming an electric guitar to a drunken audience. So when his band broke up in the final days of his high school senior year, no one was as happy—maybe ecstatic was a better word—as his mother and father. "Things always work out for the best," he later recalled his mother repeating more than once at their dinner conversation. His father, in a strange reinforcement of his mother's words, added something like, "I never thought you were any good anyway."
The breakup was devastating for Ethan. He'd loved the band. Nearly all his hopes and dreams had gone into its creation. He loved writing songs, creating something from nothing. There was magic in playing songs they'd created themselves. What could be more exciting than making a living doing what he loved? He would remember forever the fateful Monday night of their breakup. They were sitting around the pool table that supported their eight-track mixing board in Greg's parents' basement. Greg was their drummer. After a couple of beers and shootin' the shit, Greg announced he'd been accepted at MIT. Ethan's jaw dropped.
"So what the fuck does that mean?" Ethan demanded, his disappointment displayed in anger.
"It means that I'm fuckin' outta here, come September," Greg answered, shaking his long black hair and raising his bottle in celebration.
"I thought we were a fuckin' band, man," Ethan shot back, upset by the obvious betrayal. "What's the deal? Fuck."
Greg proceeded to unload a barrage of faults that insulted everyone—they'd never made money, they'd never recorded, they didn't sound any good, and Ethan couldn't sing. They had to get on with their lives. On and on he went. If it wasn't wrong before, Greg found a way to make it wrong by the time he finished. Ethan could still remember standing in stunned disbelief beside someone he'd called his friend.
"Fuck, Greg, if you felt this way," he cried, not far from taking a swing at him, "why the fuck did you hang around so long?"
Greg shook his head. His decision was made. There was nothing to talk about.
They never played again as a band. From then on, Ethan floundered. Some quick decisions got him into university and four months later, he was on a bus to Ottawa.
The most difficult part of the university experience for Ethan was never feeling connected. Like an outsider, his mind was always on something other than the point at hand. The engineering curriculum was rigid and demanding—it had to be; the world didn't need bridges and buildings falling down, or planes falling out of the sky, or ships sinking.
Engineering was his program of choice because of his flair for mathematics and love of the automobile—two reasons that were as good as any.
Mid-November in Ottawa found bare-branched trees and grass covered with leaves. Ethan's roommate, Robbie, had come back to their dorm room after breakfast and invited him to play touch football in the commons. Ethan had refused to get out of bed. He wasn't feeling well and was trying to sleep off the bug. In truth, he was depressed over the decision he'd made to be there. His pain was due to his foolhardy reasoning that university was a lesser evil than facing the wrath of his parents if he dropped out. He spent the entire day in bed, attempting to escape the inevitable dreariness that lay ahead. It was dark when he finally got up and dressed. Being a Sunday night, there was little action in the quad. He decided a walk was in order and left the dorm for the briskness of the mid-November evening.
It was just past seven when he stepped off the cement steps of the entrance. With little in the way of destination, he headed toward the bridge over the Ottawa River. A sense of calm settled over him in the darkness of the night. Utterly alone, no one knew who or where he was. He gained a sense of power, knowing he could do as he pleased without interference. The night belonged to him.
Ten minutes later, he crossed the bridge toward Billings. Few cars were on the road. The whole world seemed to move in slow motion. He'd stopped on the sidewalk in the center of the bridge and peered over the rusting metal railing, down into the water below. He shivered in the cold breeze. Winter was on its way.
The flowing water seemed so free, a part of nature. Why wasn't he? Why couldn't he fit in? The water seemed to speak to him in a way he'd never heard: Come join me.
Without consciously making a decision, Ethan found himself climbing the railing and leaning over the edge in a very precarious position. Come join me, the water repeated. He sensed the words more than hearing them; his grip on the railing loosened.
Then a calm female voice spoke to him. Ethan, you're not finished yet.
The words, which would return in moments of indecision, turned him around, panic-stricken, as he wrapped his arms around the rusty railing. Fear weakened him. He barely held himself upright. Gravity teamed with the water to pull him down. To maintain his grip seemed impossible, yet he fought back. Nearly incapacitated, he pushed hard with his legs and dragged himself back over the railing. Shaking but safe, he listened as the water rushed past below. Shock overtook his muscles as he realized, shamefully, how close he'd come to certain death. He collapsed to the cement sidewalk and wept. Deep sobs shook his body; sweat cloaked his skin.
After a few minutes of catching his breath, he got up and continued along the bridge. He walked toward Bank Street. He wanted a drink, and there'd be something open on Bank. It didn't take long to find his way into a small club. The entrance led him to a downstairs bar, where half a dozen people were sitting around, drinking and talking. Quite thirsty, he approached the bar and stood beside a young couple who looked to be students, although he didn't recognize them. Ethan ordered a draft and stood quietly watching. A draft appeared in front of him. At the same time, the girl beside him tapped him on the shoulder, held out her cigarette, and asked him for a light. On reflex, he checked his pockets, knowing he didn't have a match, but liked the girl's pretty face. It was hard not to look at her deep brown eyes. She was dressed in torn denims, a faded jean jacket, and a pair of red high-tops, and he didn't want to stop talking to her just because he couldn't light her cigarette. He asked if she was a student as he kept digging in his pockets.
She said she was a sophomore in dramatic arts. She was supposed to be rehearsing but wasn't in the mood. "What are you in?" she asked, her eyes darting around his face. She was a natural beauty without makeup. Her brown hair was tied back in a loose ponytail. The red sweatshirt under her jacket was ripped just below the neckline.
Ethan couldn't take his eyes off her. "Engineering," he answered, passing his hand through his hair. Her blue eyes seemed to hold the light in the dimness of the bar.
"Engineering?" she said, amazed. "You must be in environmental or systems engineering."
He smiled. "Why do you say that?"
"Because you don't look like a jerk."
Ethan paused and took a sip of his beer. "Is that a compliment or a shot?" He was suddenly more comfortable than he'd been in weeks.
"Probably a compliment," she said and smiled, nodding her head, "although I'll reserve the right to change my mind."
Mila then introduced herself and her friend Sean, who was seated next to her. They were both from Ottawa and in the same program.
"Sean's like a brother," Mila said, as if needing to explain their relationship. "I keep telling him if he hangs with me, he'll never have a girlfriend."
Ethan wasn't entirely sure that Sean saw things the same way that Mila described them, but he didn't care. Her blue eyes and smile were kindling something inside him.
"So what kind of engineering?" asked Sean, speaking for the first time. "I've got two friends in mechanical."
"And they're assholes," Mila was quick to add. "Don't tell me you're a gear-head too." A smile trickled across his lips. "I knew it!" she cried, while Sean smirked beside her. "Another gear-head! Is there no justice? You seem so normal. You're the first human I've met in mechanical engineering. It's not too late to change, you know."
It was Ethan's turn to laugh.
Their conversation continued. They talked about movies and actors. What was hot and what was not. Mila told him of her dream to go to Hollywood and become an actress. That was the real reason she was in the drama program.
Ethan asked why she didn't just go to Hollywood, if that was her dream.
"I'm not ready yet," she replied matter-of-factly, as if it were obvious. "Why are you in engineering?"
"I didn't know what else to do. I'm good at math and science."
Ethan hated his response after hearing that she was chasing a dream. It didn't matter, though. He just liked talking to her; she was like a friend he'd known his whole life. "I like to know how things work," he added. He surprised himself with his next question. "Mila, do you think you're a good actress?"
"Yes," she replied without hesitation. Something flashed in her eyes when she said it—a raw excitement, an inner strength. "You should come and see me. Another Color Blue starts in three weeks."
Another Color Blue was the university theater group's stage production just before the Christmas break. He'd seen the posters but knew almost nothing about it.
Sean stood up. "Mila, I gotta get going. I've still got stuff to catch up on for tomorrow." He leaned forward to give her a hug, but Mila didn't reciprocate. There was a brief awkwardness as his arm went around her back. Ethan again sensed that Sean had different ideas about Mila's company than she understood.
"You know, I'm in the same boat," Mila added.
"No, Mila, it's okay. You stay. It's just I've—"
"I'm already late," Mila interrupted. "Great to meet you, Ethan. See you around." Then she paused. Her forehead wrinkled, as if she was confused about something. "Sean, give me a sec. I really need to pee."
She headed to the toilet.
"You really ought come out and see her in Another Color Blue," Sean said abruptly. "They've had poor ticket sales, but that's just the university. Mila's incredible. She's the show. I've only watched rehearsals, but she's incredible. An alumnus wrote the play."
"Hey, who knows?" Ethan said noncommittally. "Maybe I'll check it out."
An uncomfortable silence fell between them.
First impressions were difficult to break. Ethan could already see the word weirdo forming on Sean's forehead. Each of Sean's gestures seemed to be an effort to be cool. Tall and wiry with a constant smirk on his face, Sean sported gold rings in each earlobe. The thought that crossed Ethan's mind was how often beautiful girls hung around goofy guys.
Sean picked up his book off the bar.
Mila rejoined them. "Okay, let's go," she said.
"See you around, Ethan," Sean said without expression, staring at Mila.
"You too," Ethan added, reaching out to shake hands. Sean had already turned toward the stairs. Ethan stepped back and bumped Mila. "Sorry," he said.
"It's okay," Mila replied. "My fault. It was nice to meet you, Ethan."
She followed Sean to the stairs. At the bottom, she paused and turned. Ethan expected a wave. Instead, she glanced his way, winked, and then hustled up the stairway.
Ethan leaned against the bar. There still were a few people around, but he decided he'd go too. Raising his glass to finish, he found a piece of paper—a note—stuck to the bottom. He realized he'd bumped into Mila because she was slipping the note under his glass.
Ethan
I'd love to talk more. Come see my rehearsal tomorrow.
You won't be sorry. We're in the Aud at seven.
Mila
CHAPTER 2
Real Time November 1983
Ethan hardly slept that night. Mila never left his mind. Having dozed for most of the day, he didn't need the rest. After his physics lab on Monday afternoon, he headed to the Aud.
The Aud was the main auditorium on the university grounds, located at the far end of the campus, away from the dorms. Walking in the cold autumn air, Ethan hardly noticed its icy bite; he had one thing on his mind—Mila.
It was strange to see the main doors into the reception area of the auditorium open when he reached the building. Clusters of people were loitering at the front. He proceeded through the entrance doors and was greeted by a gust of warm air. The maintenance staff was struggling with the inside temperature. The heat in the foyer was stifling, which explained why the main doors were open on such a cold evening. Ethan slowed on entering and then headed to an open door of the auditorium.
The front floor lights of the stage were lit. A number of people were in conversation on the stage. As he walked down the aisle, a handful of people were seated, but he didn't recognize anyone. Ten rows back, he cut in and moved to the middle. The lap of the wooden seat banged loudly as he sat down, and several people turned around, but he reacted as if nothing had happened. He leaned forward, forearms on his knees, and hunted through the faces on stage for a glimpse of Mila.
As he searched, activity started to take place. A spotlight lit up two guys on the front right side of the stage. The main lights dimmed and the auditorium went quiet. The two were talking about a beautiful girl. The spotlight faded and then lit up another actor, farther back to the left. A female, wearing a kerchief around her head, was sweeping the floor. Ethan was quick to recognize Mila, although her face was obscured by the kerchief. Her character spoke with a French accent about how happy she would be if a certain boy would ask her out. Her accented voice mesmerized him, like music to his soul. With each move, she became more beautiful, amplifying his memory of the night before at the bar. To Ethan, she was the only one on stage. In what seemed like minutes, the house lights came on again. But it was eight thirty. It didn't seem possible he'd been sitting for so long.
Mila had enthralled him. The stage was hers, and the cast supported her. He'd never experienced anything quite like it, and he couldn't pull away from her spell. It was rare to be in the presence of greatness, but the impact would change his life.
"Ethan, what a treat to see you!" exclaimed Mila, coming up the aisle where he was sitting. He stood up. "Are you okay?" she asked. Her hand touched his arm.
He blinked a couple of times. "Just great," he answered, staring at her. Unexpectedly, he pulled her close and kissed her. She felt light in his arms and didn't resist. Her lips were soft and salty from perspiring under the stage lights. He held her close for a moment. When he released her, her fellow cast members applauded. Ethan's face flushed, having momentarily forgotten where he was and the many eyes following the star performer. Despite the attention, he was not about to trade places with anyone.
From the corner of his eye, Ethan also noticed Sean, leaving the auditorium alone.
(Continues...)
Excerpted from The Actor by Douglas Gardham. Copyright © 2013 Douglas Gardham. Excerpted by permission of iUniverse, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Current release: Together with drummer Joshua Abrams, Chad Taylor forms the duo Mind Maintenance, which sees them playing guimbri and mbira. Their self-titled album is out now on Drag City.
If you enjoyed this interview with Chad Taylor, his portrait page on the website of New Music USA makes for a good introduction to his multifaceted work. He also has a Bandcamp page.
Where does the impulse to create something come from for you? What role do often-quoted sources of inspiration like dreams, other forms of art, personal relationships, politics etc play?
People are surrounded by an endless amount of creativity and sources of inspiration. It's more a question of awareness. In society today it is very easy to be complacent.
Inspiration can come from anything and from anywhere at any time if we are tuned in. My kids inspire me all the time. The title Snap yr teeth came from my six-year-old daughter Essie.
For you to get started, do there need to be concrete ideas – or what some have called a 'visualisation' of the finished work? What does the balance between planning and chance look like for you?
We are not thinking in terms of getting started, or in finishing, but rather about entering and exiting. It’s important to plan but it’s also important to be able to let go of any plans.
Is there a preparation phase for your process? Do you require your tools to be laid out in a particular way, for example, do you need to do 'research' or create 'early versions'?
Every day I wake up is an opportunity for preparation. But how does one prepare for the unknown? A majority of the record is improvised. When improvising, the best way to prepare perhaps is being comfortable with being uncomfortable, being comfortable with the unknown.
Do you have certain rituals to get you into the right mindset for creating? What role do certain foods or stimulants like coffee, lighting, scents, exercise or reading poetry play?
The best ritual for me is keeping an open mind.
What do you start with? How difficult is that first line of text, the first note?
We get started by getting a good tone on our instruments. Once we are satisfied with our sound, the sound itself can help dictate what should happen next.
Once you've started, how does the work gradually emerge?
Yes.
Many writers have claimed that as soon as they enter into the process, certain aspects of the narrative are out of their hands. Do you like to keep strict control over the process or is there a sense of following things where they lead you?
You have to let the music lead you.
Often, while writing, new ideas and alternative roads will open themselves up, pulling and pushing the creator in a different direction. Does this happen to you, too, and how do you deal with it? What do you do with these ideas?
There are many roads that can be traveled but they all get to the same destination. We are usually not concerned about the different possibilities, but it’s important to have an understanding of where you are going and how you will know when you arrive.
There are many descriptions of the creative state. How would you describe it for you personally? Is there an element of spirituality to what you do?
Being alive and aware is the creative state. There can be an element of spirituality in everything or nothing. Your mind is constantly pulling you in different directions and this requires mind maintenance.
Especially in the digital age, the writing and production process tends towards the infinite. What marks the end of the process? How do you finish a work?
You don’t. A piece of music is never finished nor is it stagnant. It’s constantly changing and evolving.
How important is it for you to let it lie and evaluate it later on? How much improvement and refinement do you personally allow until you're satisfied with a piece? What does this process look like in practise?
Listening to a piece of music never affects me the same way twice. It always produces different feelings and vibrations. Perhaps a piece of music is never finished nor stagnant. Perhaps it's alive and constantly changing.
What's your take on the role and importance of production, including mixing and mastering for you personally? How involved do you get in this?
This is all very important but not my expertise. So I rely on people I really trust in this matter.
After finishing a piece or album and releasing something into the world, there can be a sense of emptiness. Can you relate to this?
No, I feel quite the opposite. I feel a sense of fulfillment. Music never ends nor does it ever start. It’s cyclical.
Creativity can reach many different corners of our lives. Do you personally feel as though writing a piece of music is inherently different from something like making a great cup of coffee? What do you express through music that you couldn't or wouldn't in more 'mundane' tasks?
For me personally there is nothing in the world that can help one spiritually more than music. The power of music is boundless. | https://15questions.net/interview/chad-taylor-mind-maintenance-shares-his-creative-process/page-1/ |
Sometimes in life, a man suffers a failure that sets his life back. At that moment, the one thing this person needs is a spark of inspiration from something else that will allow him to get his life back on track. Here are some of the famous inspirational quotes that may help to change the mind of such a person.
Inspiration matters not only in times of failure, but also when a person needs to achieve something in his life and is unable to do so because he simply cannot cross the bridge to get what he wants. There may come a time when this person gives up everything after a failure. He tries hard to avoid slipping into a dark phase of his life, but the sadness surrounding him makes it difficult. At that exact moment, all he needs is a light of inspiration. This depressed man needs his real friends to tell him not to lose hope in himself and to keep moving on in life. The inspiration he needs can come from many things. One thing he might do is find a place inside himself where there is nothing but joy, which may then help him burn away the pain inside.
Mentioned here are some of the most famous inspirational quotes by great philosophers, various famous people, and great writers, which may compel you to find a way to work around the problems you are facing and find solutions to them.
What will the hospital of the future look like? According to Jeroen Tas, Chief Innovation and Strategy Officer, it will be very different from what we are used to today. In our most-read article of 2019, Tas shares his vision of a hospital that is not confined to the walls of a single building or hospital complex, but which comprises an integrated network of healthcare facilities and services that reach out into every corner of the community. All organized around the patient.
A healthier and more sustainable world for all. That's the north star that our CEO Frans van Houten describes in this article, reflecting on the reality that over half the global population still lacks access to essential health services. Drawing inspiration from a personal letter by his late grandfather, Van Houten makes the case for inclusive and sustainable healthcare development, highlighting the importance of Universal Health Coverage and the transition from a linear to a circular economy.
How can healthcare providers and health technology companies partner up to address the rising demand for care and surging costs? Today’s world requires a new approach to innovation, argues Chief Technology Officer Henk van Houten as he looks back on the past 15 years. He envisions a future of value-based care, supported by open digital platforms and ‘systems of engagement’ that connect and interpret patient data from various sources to support caregivers with relevant insights at the point of care.
Chief Medical Officer Jan Kimpen further elaborates on what is needed to turn value-based care into a reality. He calls for a collaborative approach, enabled by health informatics, to connect caregivers, data, and technology across patient pathways. This is still a far cry from how most healthcare systems are organized today, Kimpen points out. He lists three priorities for change while highlighting bright spots to take inspiration from.
Few areas of medicine have evolved as rapidly as image-guided therapy. Open heart surgery used to be the norm for treating conditions such as coronary artery disease. A patient would spend more than a week in the hospital. Today, a patient treated for coronary artery disease is often able to walk out of the hospital the same day – thanks to minimally invasive procedures. What’s next in image-guided therapy? Bert van Meurs, Chief Business Leader Image Guided Therapy, lists five key innovation areas.
Imagine a future in which all healthcare professionals that are involved in the care for a patient have easy access to the same holistic view of that patient – whether it’s in a high-tech hospital room where the patient is about to undergo a life-saving operation, or during a video consult with a patient who calls for advice based on the latest health readings from her home therapy device. Chief Technology Officer Henk van Houten explains how the Internet of Things can help turn this vision into a reality.
One of the biggest challenges clinicians face today is the sheer amount and complexity of data they acquire every day. With such a huge influx of data, often stored across disparate systems, clinicians can easily feel overwhelmed. Carla Kriwet, Chief Business Leader Connected Care, details how artificial intelligence (AI) can help translate data into actionable insights to help clinicians prioritize, increase efficiency, and improve patient outcomes along the entire continuum of care.
Lung cancer remains the leading cause of cancer death worldwide. Tragically, the majority of patients are diagnosed at a late stage, with a minimal chance of surgical cure. Fortunately, the knowledge and the technology are now available to diagnose and treat lung cancer earlier, more accurately, and less invasively. Ronald Tabaksblat, Business Leader Image Guided Therapy Systems, reveals how.
User-friendly, intuitive interfaces are key to the future of healthcare, according to Atul Gupta, Chief Medical Officer Image Guided Therapy. In this article, Gupta highlights augmented reality (AR) as a case in point, showing how AR will allow clinicians in the interventional suite to keep their eyes on the patient, while superimposing live data and 3D medical imagery to guide precision therapy. It’s a whole new way of looking at image-guided therapy, from which both patients and staff stand to benefit.
Despite advances in medical technology, healthcare accessibility and levels of clinical expertise and performance still vary tremendously between regions and care settings, says Rob Cascella, Chief Business Leader Precision Diagnosis. Can AI contribute to a universal standard of patient care by advancing precision diagnosis across the globe?
This paper explores the use of spirituality in the practice of psychology and how it could potentially strengthen psychologists' bonds with their patients. There could be a better understanding of issues and treatment if spirituality were a part of therapeutic practices.
Psychology got its start in ancient times from philosophy, and it remained part of philosophy until the nineteenth century (Leahey, 2004). Here we have the beginning of the mind-body problem. To some, the mind was our essence and the body a mere vessel. When the physical body died, the soul moved on to an afterlife (Leahey, 2004). The soul (mind) holds knowledge of the spiritual world, whereas the body holds knowledge of the physical world (Leahey, 2004).
Everybody but atheists has a faith that they follow. It has been argued that psychologists should gain education and knowledge about the role that religion and spirituality play in personal factors (Shafranske, 2010). Value was introduced for different forms of consciousness in therapeutic practices. It is very important for the therapist to integrate the patient's spirituality into the course of interventions (Shafranske, 2010). It is also important to take into consideration the personal and professional influences of inspiration from the therapist's point of view. This can greatly impact how the therapist encourages the patient to open up, and it can help the therapist relate better to the patient (Shafranske, 2010). "This leads to an associated point: Given the lack of attention given to the religious and spiritual dimension in most psychology training, how prepared are clinicians to be mindful of the potential impacts their religious and spiritual commitments have on their professional practice, to appropriately and ethically integrate spirituality in psychological treatment, or respond to emergent transcendent experiences" (Shafranske, 2010, p. 125)? This seems to mean that therapists should have enough understanding to be able to mindfully discuss spirituality in their practice and in treatment plans for their patients.
Spirituality is hard to define, but it has been explained in a few ways. One is that spirituality can be called one's highest or ultimate values or reality, and the relationship one has with those realities or values (Braud, 2009). A second is a sense of belonging or a link to the transcendental ground of being. Another is how people relate to God, other humans, or Earth. Some refer to it as how committed one is to practicing a particular faith. However, it is important to distinguish between healthy practices and beliefs and ones that are unhealthy for well-being (Braud, 2009). Yet another general definition, from Lindholm and Astin, involves the inner process of seeking personal authenticity, wholeness, and genuineness; transcending one's center; having a deeper sense of connection to self and others through relationships and community; having meaning, direction, and purpose in life; being open to the possibility of a relationship with a higher being that is above human existence and knowing; and valuing the sacred (Braud, 2009). There are other definitions of spirituality specific to femininity and other cultures, but they are not included above.
There is a relatively new field of psychology called transpersonal psychology. In addition to conventional methods, transpersonal psychologists use heuristic research, intuitive inquiry, organic inquiry, and integral inquiry. These approaches are characterized by a higher level of integration and inclusiveness of the whole person; a wider variety of benefits and functions in a session; more sources of inspiration and ways of knowing; a broader range of topics and questions researched; different ways of gathering, using, and explaining information, including epistemology and ontology; and attention to relevant ethical considerations and values (Braud, 2009). This gives a broader perspective on all aspects of the issues at hand.
When we think in terms of helping people with their psychological issues, it just makes sense to include everything you possibly can to understand what the patient is going through and how they see things possibly running their course. Because many people do follow some sort of faith, it is important for the therapist to know as much as possible about the patient's spirituality, in order to help them in the best way possible and include every aspect of that person as a whole. Having this knowledge will make for the best treatment plan for that specific patient.
REFERENCES
Braud, W. (2009). Dragons, spheres, and flashlights: Appropriate research approaches for studying workplace spirituality. Journal of Management, Spirituality & Religion, 6(1), 59-75.
Leahey, T. H. (2004). A history of psychology: Main currents in psychological thought (6th ed.). Englewood Cliffs, NJ: Prentice Hall.
Shafranske, E. P. (2010). Advancing “the boldest model yet”: A commentary on psychology, religion, and spirituality. Psychology Of Religion And Spirituality, 2(2), 124-125.
———————————————————-
Tracie Timme is a Privileged ProvenTherapist. See her Profile for counseling support. | http://www.counseling.proventherapy.com/2015/02/ |