An engrossing biography of the longest-reigning female pharaoh in Ancient Egypt and the story of her audacious rise to power.
Hatshepsut—the daughter of a general who usurped Egypt's throne—was expected to bear the sons who would legitimize the reign of her father’s family. Her failure to produce a male heir, however, paved the way for her improbable rule as a cross-dressing king. At just over twenty, Hatshepsut out-maneuvered the mother of Thutmose III, the infant king, for a seat on the throne, and ascended to the rank of pharaoh.
Shrewdly operating the levers of power to emerge as Egypt's second female pharaoh, Hatshepsut was a master strategist, cloaking her political power plays in the veil of piety and sexual reinvention. She successfully negotiated a path from the royal nursery to the very pinnacle of authority, and her reign saw one of Ancient Egypt’s most prolific building periods.
Constructing a rich narrative history using the artifacts that remain, noted Egyptologist Kara Cooney offers a remarkable interpretation of how Hatshepsut rapidly but methodically consolidated power—and why she fell from public favor just as quickly. The Woman Who Would Be King traces the unconventional life of an almost-forgotten pharaoh and explores our complicated reactions to women in power.
PUBLISHERS WEEKLY
The life of Hatshepsut, Egypt's second female pharaoh, was replete with opulent living, complex royal bloodlines, and sexual energy; in short, the kind of drama that fuels Ancient Egypt's enduring appeal. What it lacked, however, was comprehensive documentation, something UCLA Egyptologist Cooney offers in a narrative biography supplemented by scholarly hypotheses that attempt to flesh out the uncertainties. Groomed for an important role as a high priestess from birth, Hatshepsut, through a combination of good fortune and ruthless strategy, "scaled the mountain to kingship." Her role, ostensibly "decreed by nothing less than a divine revelation," is shrouded in mystery by a limited historical record concerned too frequently with the "supernatural mechanisms of divine authority." The high points of this ambitious project are to be found in Cooney's keen sense for the visual elements of Hatshepsut's gender-defying rule and expert inferences on the psychologies of Hatshepsut and her contemporaries. From Hatshepsut's self-perception, political prowess, and lifestyle emerges an image of the "ultimate working mother" and a compelling insight into ancient gender roles. However, Cooney's work will likely appeal most to already well-informed armchair Egyptologists, as unfamiliar nomenclature and the speculative tone can make this a difficult text for the casual reader.
Customer Reviews
What if?
She Who Would Be Queen would have made a more literary title. The acknowledgements and bibliography are actually more useful than the text itself.
Once you take away all the "we don't know"s, "it's likely that"s, and "could have been"s, you're left with a very thin text indeed.
The book might serve as a young reader's introduction to an interesting woman. | https://books.apple.com/ca/book/the-woman-who-would-be-king/id866752170 |
Kepler data reveals 20 potential habitable worlds
November 2, 2017
Many people once thought Earth was unique in outer space in its ability to support life. Recent discoveries could shatter that notion, like one new analysis of information from the Kepler Space Telescope. An international team led by Susan Thompson of the SETI Institute has discovered there might be as many as 20 worlds where life could dwell. One of the most promising is KOI-7923.01. It's 97 percent Earth's size and has a year of 395 days. It is a bit colder than Earth – think more tundra and less tropical island – but it is warm enough, and big enough, to hold the liquid water so crucial for life. Jeff Coughlin of the NASA Ames Research Center told New Scientist, "If you had to choose one to send a spacecraft to, it's not a bad option." Related: First hints of water detected on Earth-sized TRAPPIST-1 planets Many of the habitable worlds orbit stars similar to the sun. The star KOI-7923.01 orbits is a little colder than the sun, and that fact, together with the exoplanet's distance from it, makes KOI-7923.01 cooler than Earth. The time to complete an orbit varies among the potentially habitable worlds – at 395 days, KOI-7923.01 takes the longest. Some of the worlds finish an orbit in mere Earth weeks or months; the quickest orbit is just 18 Earth days. Coughlin told New Scientist his team is around 70 to 80 percent sure these habitable worlds are solid candidates – they'll need to confirm their hunch with further observations, such as from the Hubble Space Telescope or ground-based observatories. The original Kepler mission unearthed the planets, but it gazed at the same part of the sky for just four years until its reaction wheels broke, hindering its aiming ability. That means we've glimpsed the planets only once or twice, and, according to New Scientist, the signals could be wobbly. The scientists submitted their research to a journal in the middle of October. Via New Scientist. Images via NASA Ames/JPL-Caltech/T. Pyle and NASA/W. Stenzel
First hints of water detected on Earth-sized TRAPPIST-1 planets
September 1, 2017
Water could be present on some of the Earth-sized planets orbiting the dwarf star TRAPPIST-1, according to work from an international group of astronomers. They utilized the NASA/ESA Hubble Space Telescope to estimate that substantial amounts of water could be present on the outer planets, including three in the habitable zone, which boosts the possibility those planets are livable. Astronomer Vincent Bourrier of the Observatoire de l'Université de Genève led an international team that included scientists from NASA and MIT in an attempt to determine if there's water 40 light-years away on the seven Earth-sized planets orbiting TRAPPIST-1, the system with the largest number of Earth-sized planets found to date. These researchers used the Space Telescope Imaging Spectrograph on Hubble to scrutinize how much ultraviolet radiation the TRAPPIST-1 planets receive. Related: NASA discovers 7 Earth-sized planets outside our solar system Bourrier said ultraviolet starlight can break water vapor into oxygen and hydrogen, and more energetic rays can heat a planet's upper atmosphere enough for those elements to escape. It's possible for Hubble to detect escaped hydrogen gas, which can act as a "possible indicator of atmospheric water vapor," according to the statement on the research. Some of the outer planets, including e, f, and g, could have water on their surfaces. During the last eight billion years, the inner planets of the TRAPPIST-1 system "could have lost more than 20 Earth-oceans-worth of water," according to the statement. But the outer planets might not have lost that much, suggesting they could have retained water. While the hints are exciting, the scientists say we can't draw any final conclusions quite yet. Bourrier said in the statement, "While our results suggest that the outer planets are the best candidates to search for water with the upcoming James Webb Space Telescope, they also highlight the need for theoretical studies and complementary observations at all wavelengths to determine the nature of the TRAPPIST-1 planets and their potential habitability." Via Hubble Space Telescope. Images via ESO/N. Bartmann/spaceengine.org and NASA/R. Hurt/T. Pyle
Astronomers Reveal the Most Livable, Earth-Like Planet Ever Discovered
April 18, 2014
In the race to find a planet besides Earth that can host life, scientists have made an incredible discovery. Astronomers recently confirmed the first-ever planet similar in size to Earth at a distance from its own star that would allow liquid water to pool on the surface. Every once in a while scientists come across a planet that orbits within the Goldilocks zone – the range of distance from a star that can potentially sustain life – but none of those planets have ever been very comparable in size to our own blue sphere. Dubbed Kepler-186f, it's the most Earth-like planet ever discovered and confirms that other habitable planets exist somewhere out there.
| http://agreenliving.org/tag/habitable-planets/ |
By Tom McGregor, CCTV.com Panview commentator and editor
Chinese scientists are blazing new trails in the science & technology fields. From patenting groundbreaking inventions to setting up more powerful telescopes, the Chinese Academy of Sciences (CAS) has earned worldwide recognition for supporting scientific projects that are transforming the world as we know it.
(Picture from China Daily)
In recent years, CAS has encouraged Chinese scientists to make further discoveries in astronomy that have led to more in-depth research on quantum theory, black holes, pulsars and gravitational waves.
In January 2017, CAS announced it would start to set up the world's highest-altitude gravitational wave telescopes - Ngari No. 1 and Ngari No. 2 - in the Tibet Autonomous Region, instruments that can detect the faintest echoes resonating from the universe, according to Xinhua.
Collaboration but in isolation
The telescopes' location is 30 km south of Shiquanhe Town, Ngari Prefecture. The first-phase telescope will sit 5,250 meters above sea level and is scheduled to begin operations in 2021.
Ngari No. 1 will detect and gather precise data on primordial gravitational waves in the Northern Hemisphere. Afterwards, Ngari No. 2 will be set up nearby, but at 6,000 meters above sea level.
Both telescopes will be installed at the Ngari Gravitational Wave Observatory with total construction costs estimated at RMB130 million (US$18.8 million).
Yao Yongqiang, chief researcher of the National Astronomical Observatories of CAS, will lead the project. The Ngari Prefecture is home to high mountains and few people, making it easier for scientists to detect the slightest twists in cosmic light.
The Ngari Observatory would stand alongside the South Pole Telescope and a facility in Chile's Atacama Desert as crucial sites for China's gravitational wave research.
Why important?
Gravitational waves are "ripples" in the fabric of space-time caused by violent and energetic processes in the universe. Albert Einstein predicted their existence in 1916, on the basis of his general theory of relativity.
Scientists believe massive accelerating objects, including neutron stars or black holes, that are orbiting each other can disrupt space-time through “waves” of distorted space that would radiate from a source.
Xinhua likens it to the ripples you see when a stone is thrown into a pond. The ripples travel through the universe at the speed of light, carrying information about their origins and providing valuable clues about the nature of gravity.
Additionally, some scientists think that by conducting research on gravitational waves, they can gain a better understanding of the Big Bang, the scientific theory explaining how our universe came into being.
Going deeper
On February 11, 2016, scientists based in the United States, working at the Laser Interferometer Gravitational-Wave Observatory (LIGO), announced they were the first to observe gravitational waves, generated by a black hole merger. The same team announced a second detection on June 15 of that year.
Nonetheless, Xiong Shaolin, a scientist at the Institute of High Energy Physics of CAS, was not amazed. He told Xinhua that the position accuracy of all gravitational wave events detected so far is "poor."
“If scientists can find electromagnetic signals happening at similar positions and times of the gravitational wave events, it will increase the reliability of the detection,” said Xiong.
He added, “Combined analysis of the gravitational waves and electromagnetic signals will help reveal more about the celestial bodies emitting the gravitational waves.”
Xiong suspects that what the American scientists detected were gamma-rays, not gravitational waves.
Gamma-rays blast
A heated debate has ensued in scientific circles over the proper identification of gravitational waves and gamma-ray bursts.
Zhang Shuangnan, lead scientist for China's Hard X-ray Modulation Telescope (HXMT) and director of the Key Laboratory of Particle Astrophysics of CAS, wants more monitoring of gamma-ray bursts, defined as intense radiation emanating from a supernova, a collapsing star or a black hole.
"Since gravitational waves were detected, the study of gamma-ray bursts has become more important," Zhang said. "In astrophysics research, it's insufficient to study just the gravitational wave signals."
He added, “We need to use the corresponding electromagnetic signals, which are more familiar to astronomers to facilitate the research on gravitational waves.”
Zhang contends the HXMT can play a vital role in detecting gamma-ray bursts and distinguishing them from gravitational waves, since the HXMT, when fully functional in 2020, will detect gamma-rays with high sensitivity at energies ranging from 200 keV to several MeV.
Looking at the big picture
Chinese scientists, who are on a mission to conduct research on gravitational waves and gamma-rays, recognize the value of their endeavors.
“Gravitational waves provide us with a new tool to understand the universe, so China has to actively participate in the research,” Hu Wenrui, physicist and CAS member, told China Daily. “If we launch our own satellites, we will have a chance to be a world leader in gravitational wave research in the future.”
Hu addressed the topic in 2016; in June 2017, China's Long March 4B rocket launched the HXMT, known as Insight, into orbit 350 miles above the Earth.
Chinese scientists see space as the new frontier and are prepared to tackle new discoveries on gravitational waves and gamma-ray bursts.
[email protected]
(The opinions expressed here do not necessarily reflect the opinions of Panview or CCTV.com)
Panview offers a new window of understanding the world as well as China through the views, opinions, and analysis of experts. We also welcome outside submissions, so feel free to send in your own editorials to "[email protected]" for consideration. | http://english.cctv.com/2017/07/14/ARTIVPOzWfhgWSznIvEzF8YB170714.shtml |
A Search for Gravitational Waves from Inspiraling Neutron Stars and Black Holes
Using data taken between July 2009 and October 2010, researchers from the Laser Interferometer Gravitational-wave Observatory (LIGO) Scientific Collaboration and the Virgo Collaboration have completed a joint search for merging binary star systems consisting of neutron stars and black holes.
Neutron stars are formed when old massive stars collapse under their own gravity. As their name indicates, neutron stars consist almost entirely of neutrons packed tightly together and are extremely dense. Black holes are formed from the collapse of even more massive stars; they are so compact that even light cannot escape their gravitational pull. Black holes and neutron stars can sometimes form binary systems, that is, two neutron stars or two black holes or a neutron star and a black hole may be close enough in space to orbit around each other. As they orbit each other, the system loses energy in the form of gravitational waves. The objects move closer together and eventually merge to form a single black hole. As they "inspiral" into each other, their relative velocity increases; by the time the objects are close to merging they are moving so fast that the gravitational waves can be detected by ground-based detectors on Earth, even though the binary may be hundreds of millions of light years away. Binary neutron stars and binary black holes are one of the most promising sources for the first detection of gravitational waves.
Scientists know that such systems exist, as astronomers have observed binary neutron stars in the Milky Way galaxy using radio telescopes. Although none of the observed binaries are close enough to merger to be detected by LIGO and Virgo, scientists can use these observations to determine the rate of mergers in the universe. Observations indicate that a neutron star-neutron star merger occurs on average only every 10,000 years in a galaxy like the Milky Way. Binary mergers do not occur very often in our own galactic backyard! By comparing the sensitivity of the LIGO and Virgo detectors to this rate, the number of possible gravitational-wave detections in a period of time can be estimated. At the same time, observations by the LIGO and Virgo detectors can set limits on the rate of mergers, which helps astronomers to make better models of the universe. A direct gravitational-wave detection would instead allow scientists to shed light on the internal structure of neutron stars and test how gravity behaves when it is very strong.
Prior to this search, there have been five searches for neutron star and black hole binary systems using the LIGO detectors and one search using the Virgo detector. During this new search the network of LIGO and Virgo instruments were more sensitive than ever before; they could detect binary neutron stars up to approximately 130 million light years away and binary black holes up to approximately 290 million light years. Combing through the data, LIGO and Virgo Scientists found a signal that looked like a gravitational wave from a black hole orbiting another black hole or a neutron star. It was later revealed that this signal was a "blind injection" --- a fake signal secretly added to the data! The success of this exercise confirmed that LIGO and Virgo scientists are ready to detect real gravitational wave signals and tested the procedures that are used in their searches.
After the blind injection was removed from the data, no gravitational-wave signals were identified. This "null result" allows LIGO and Virgo scientists to set new limits on the rate of compact binary mergers in the universe. These limits are still about 100 times higher than expected rates from astronomical observations, so the fact that no gravitational waves were detected is consistent with expectations. This search is one of the last to use data from the "initial" detector era. Advanced LIGO detectors will be operational in 2015, and once these reach design sensitivity, scientists will be able to detect neutron star-neutron star mergers within a volume 1,000 times larger than that of the initial detectors. Based on astronomical observations, this means we may detect tens of gravitational waves per year. The successful identification of the blind injection as a gravitational-wave candidate in this analysis gives scientists the confidence that they will be ready -- and able -- to detect gravitational waves from binary neutron stars and binary black holes in the advanced detector era.
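The factor of 1,000 quoted above follows from geometry rather than from any detail of the instruments: detection range scales with detector sensitivity, and the surveyed volume scales as the cube of the range. As a worked sketch (assuming the roughly tenfold range improvement expected for Advanced LIGO):

```latex
V \propto R^{3}
\quad\Longrightarrow\quad
\frac{V_{\mathrm{adv}}}{V_{\mathrm{init}}}
  = \left(\frac{R_{\mathrm{adv}}}{R_{\mathrm{init}}}\right)^{3}
  \approx 10^{3} = 1000,
\qquad
N \propto \mathcal{R}\, V\, T
```

Here R is the astrophysical merger rate per unit volume and T the observing time, so a thousandfold increase in volume translates directly into a thousandfold higher expected detection count.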
Figures from the Publication
For more information on how these figures were generated and their meaning, see the publication at arXiv.org. | https://ligo.org/science/Publication-S6CBCLowMass/index.php |
It looks like `Oumuamua is just a big rock after all
The SETI initiative Breakthrough Listen has announced that preliminary observations of the first known interstellar asteroid show no sign that the 400 m (1,300 ft)-long object is anything other than natural. No directed or broadcast radio transmissions have been detected from `Oumuamua (A/2017 U1), but observations and analysis continue.
It was a long shot, but scientists at Breakthrough Listen couldn't pass up on the chance that the first detected interstellar visitor to the Solar System might be more than it seemed. Earth-based telescopic observations of `Oumuamua after it was discovered on October 19, 2017 by the University of Hawaii's Pan-STARRS 1 telescope on Haleakala indicated that the object was on an open-ended hyperbolic course that had already brought it to within 0.25 AU (23 million mi, 37 million km) of the Sun in September, and that it was speeding back into deep space at 95,000 km/h (59,000 mph).
But what intrigued Breakthrough Listen was that the object is a rocky or metallic spindle and that this shape could mean that `Oumuamua is artificial. Not wanting to miss the chance that it was an alien spacecraft similar to Sir Arthur C Clarke's fictional Rama, the scientists turned the Breakthrough Listen backend instrument on the Robert C. Byrd Green Bank Telescope in West Virginia on `Oumuamua.
The first of four planned observation blocks was conducted on December 13 from 3:45 pm to 9:45 pm EST. The scan covered the L, S, X, and C radio bands, comprising several billion individual channels between 1 and 12 GHz. After calibration, 90 TB of raw data was recorded over a two-hour period. This was then sent through the Breakthrough Listen "turboSETI" pipeline software to seek out narrowband signals that drift in frequency over time. This allows the scientists to pick out candidate intelligent signals while eliminating artifacts caused by the asteroid's motion distorting background radio signals, as well as human interference.
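The core idea of a drift search like turboSETI's can be sketched compactly: hypothesize a linear frequency drift, shift each time slice of the spectrogram to undo it, and integrate to see whether a narrowband signal lines up. The toy version below illustrates that technique only; it is not Breakthrough Listen's actual pipeline, and the array shapes, drift-rate grid, and function name are assumptions made for the example.

```python
import numpy as np

def drift_search(spectrogram, freqs, times, drift_rates):
    """Brute-force linear-drift search over a dynamic spectrum.

    spectrogram : (n_times, n_chans) array of power values
    freqs       : uniformly spaced channel frequencies in Hz
    times       : timestamp of each spectrum in seconds
    drift_rates : candidate drift rates in Hz/s
    Returns (drift_rate, frequency, integrated_power) candidates,
    strongest first.
    """
    chan_width = freqs[1] - freqs[0]
    candidates = []
    for rate in drift_rates:
        # Channel shift that undoes a linear drift at this rate
        shifts = np.round(rate * times / chan_width).astype(int)
        aligned = np.array([np.roll(row, -s)
                            for row, s in zip(spectrogram, shifts)])
        integrated = aligned.sum(axis=0)  # coherent sum along time
        peak = int(np.argmax(integrated))
        candidates.append((rate, float(freqs[peak]), float(integrated[peak])))
    return sorted(candidates, key=lambda c: c[2], reverse=True)
```

A production pipeline additionally normalizes the noise floor, handles the band-edge wrap-around that np.roll introduces, and applies a signal-to-noise threshold before reporting hits.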
Though there are more observations to be made and more analysis to be carried out, Breakthrough Listen says that no intelligent signals have been found so far. However, the study is ongoing and the organization invites the public to inspect a subset of the S-band data for themselves with the help of an online tutorial.
"It is great to see data pouring in from observations of this novel and interesting source," says Andrew Siemion, Director of Berkeley SETI Research Center. "Our team is excited to see what additional observations and analyses will reveal."
Source: Breakthrough Listen
| https://newatlas.com/radio-oumuamua-interdtellar-probe/52640/ |
That’s very informative, but just how do archaeologists do…archaeology? And how do they “think” it? How does the analysis lead to the interpreting and recreating? A new, 2012 publication provides a helpful discussion of this situation for all kinds of researchers.
Archaeologists are scientists, and like other scientists they make observations about the real world, and formulate hypotheses and develop theories and models about the data they have gathered.
Figure 3-1 from A Framework for K-12 Science Education (page 45), captioned: The three spheres of activity for scientists and engineers.
One helpful way of understanding the practices of scientists and engineers is to frame them as work that is done in three spheres of activity, as shown in Figure 3-1. In one sphere, the dominant activity is investigation and empirical inquiry. In the second, the essence of work is the construction of explanations or designs using reasoning, creative thinking, and models. And in the third sphere, the ideas, such as the fit of models and explanations to evidence or the appropriateness of product designs, are analyzed, debated, and evaluated…. In all three spheres of activity, scientists and engineers try to use the best available tools to support the task at hand, which today means that modern computational technology is integral to virtually all aspects of their work.
At the left of the figure are activities related to empirical investigation. In this sphere of activity, scientists determine what needs to be measured; observe phenomena; plan experiments, programs of observation, and methods of data collection; build instruments; engage in disciplined fieldwork; and identify sources of uncertainty. For their part, engineers engage in testing that will contribute data for informing proposed designs. A civil engineer, for example, cannot design a new highway without measuring the terrain and collecting data about the nature of the soil and water flows.
The activities related to developing explanations and solutions are shown at the right of the figure. For scientists, their work in this sphere of activity is to draw from established theories and models and to propose extensions to theory or create new models. Often, they develop a model or hypothesis that leads to new questions to investigate or alternative explanations to consider. For engineers, the major practice is the production of designs. Design development also involves constructing models, for example, computer simulations of new structures or processes that may be used to test a design under a range of simulated conditions or, at a later stage, to test a physical prototype. Both scientists and engineers use their models—including sketches, diagrams, mathematical relationships, simulations, and physical models—to make predictions about the likely behavior of a system, and they then collect data to evaluate the predictions and possibly revise the models as a result.
Between and within these two spheres of activity is the practice of evaluation, represented by the middle space. Here is an iterative process that repeats at every step of the work. Critical thinking is required, whether in developing and refining an idea (an explanation or a design) or in conducting an investigation. The dominant activities in this sphere are argumentation and critique, which often lead to further experiments and observations or to changes in proposed models, explanations, or designs. Scientists and engineers use evidence-based argumentation to make the case for their ideas, whether involving new theories or designs, novel ways of collecting data, or interpretations of evidence. They and their peers then attempt to identify weaknesses and limitations in the argument, with the ultimate goal of refining and improving the explanation or design.
In reality, scientists and engineers move, fluidly and iteratively, back and forth among these three spheres of activity, and they conduct activities that might involve two or even all three of the modes at once. The function of Figure 3-1 is therefore solely to offer a scheme that helps identify the function, significance, range, and diversity of practices embedded in the work of scientists and engineers. Although admittedly a simplification, the figure does identify three overarching categories of practices and shows how they interact.
People who are not archaeologists sometimes think archaeology is about artifacts, ancient things used by long-dead people. But for archaeologists, old things provide clues they can use to create stories about people, and they focus on developing these stories.
Archaeologists usually have only small parts of each story, and so they look at many story-pieces hoping they can amalgamate them into more complete stories. Thus, over time, archaeologists are assembling more and more real-world data, which do illuminate more corners of the larger, elusive, complete stories. However, in archaeology, complete stories can never be discerned—too much is missing. It is missing because it was never left behind, or because it never survived the rigors of time. Still, the pieces archaeologists are assembling today provide more and more—exciting—detail about our human past. | http://thesga.org/2012/03/archaeology-real-world-to-hypotheses-theories/ |
You need to ensure that you put good data into your model (inputs) in order to get good data out of your model (outputs). For example, you'll need to know things like how long it takes to perform a particular process or how frequently customers arrive, etc.
In order to input that kind of data into FlexSim, you'll have to observe your business system or use other data-gathering methods to get the kind of information FlexSim needs to create an accurate model of your business system. This topic will discuss various methods for gathering useful data.
Use Historical Data
You might already have all the useful data you need right at your fingertips. If your facility uses automated tracking for customers, work orders, etc., you could possibly pull that data from the computers that track it. You could then use that data to determine an appropriate statistical distribution for a particular process or set of processes. Talk to your facility's IT managers about pulling statistical data from these computers for a specific period of time. Remember that you'll want to gather enough data to be representative of what is normal for your business system.
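As a concrete illustration, a pulled log of arrival timestamps can be reduced to a statistical distribution in a few lines of analysis. The sketch below uses Python with SciPy as an external analysis step (not a FlexSim feature), and the timestamps are made-up placeholders:

```python
import numpy as np
from scipy import stats

# Hypothetical arrival timestamps (minutes) exported from a tracking system
arrivals = np.array([3.2, 7.9, 11.1, 18.4, 21.0, 29.7, 33.5, 41.2])

# Interarrival gaps are what a simulation's source object actually needs
gaps = np.diff(arrivals)

# Fit an exponential distribution, common for independent random arrivals
loc, scale = stats.expon.fit(gaps, floc=0)
print(f"Mean interarrival time: {scale:.2f} min")

# A goodness-of-fit check guards against assuming the wrong distribution
ks_stat, p_value = stats.kstest(gaps, "expon", args=(loc, scale))
print(f"KS test p-value: {p_value:.3f} (small values suggest a poor fit)")
```

If the exponential fits poorly, the same pattern works for other candidates (lognormal, gamma, Weibull) before the chosen distribution is entered into the model.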
Conduct a Time Study
A time study involves direct and continuous observation of a particular task or process to determine how much time the process takes. The observer often uses a timekeeping device (such as a stopwatch or video camera) to record the time taken to accomplish a task. The observer will observe the task multiple times over a long period of time, recording the amount of time each process takes every time. There are many free guides on the Internet for conducting time studies if you would like to conduct a time study yourself. There are also consulting firms that are willing to conduct time study projects if needed.
Observe Your Business First Hand
Don't just ask yourself: is this what is happening in my business system today? Make sure you actually go and see first-hand for yourself. While you're there observing your business system first hand, try walking along the actual pathways of the customer, material, and/or information flow yourself. Start with a quick walkthrough of the entire door-to-door business system to get an overall sense of the flow of materials and information. Ultimately the goal is to make real observations in real time talking to real people.
Interview Employees
You could interview all your staff members and get a rough estimate about how long particular processes take, then use that data to get approximate wait and processing times. Although employee interviews are subjective by nature, this kind of information would at least be useful for making an educated guess.
Make an Educated Estimate
There are actually some decent ways to make an educated guess within a 90% confidence range. Consider reading Douglas Hubbard's How to Measure Anything: Finding the Value of "Intangibles" in Business, in its 3rd edition at the time of this writing. This book discusses useful methods for estimating measurements that are potentially too costly or difficult to observe directly.
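One practical device from that estimation literature is converting a subjective 90% confidence interval into parameters for a skewed, strictly positive distribution. The sketch below assumes a lognormal model for a process time, a modeling choice made for the example rather than anything FlexSim requires, and uses the fact that the central 90% of a normal distribution spans about ±1.645 standard deviations:

```python
import math

def lognormal_from_90ci(lower, upper):
    """Turn a subjective 90% confidence interval for a positive quantity
    into lognormal parameters (mu, sigma)."""
    z90 = 1.645  # z-score bounding the central 90% of a normal distribution
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * z90)
    return mu, sigma

# Example: "I'm 90% sure this task takes between 4 and 12 minutes"
mu, sigma = lognormal_from_90ci(4, 12)
print(f"mu={mu:.3f}, sigma={sigma:.3f}")  # usable with random.lognormvariate
```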
Conduct a Sensitivity Analysis
Each of the data-gathering methods described in the previous sections will involve some investment of time and money. Some methods are more resource-intensive than others. For example, time studies are the most costly, whereas interviewing employees or making an educated guess are not very costly at all. Before engaging in a costly time study, you should ensure that the information you would gain from this study would be valuable enough to justify the expense. You don't want to waste time and money on data that isn't going to matter to your simulation project in the end.
One effective way to justify this expense is to conduct a sensitivity analysis. To conduct a sensitivity analysis, begin building your simulation model with the least costly data-gathering methods (such as making an educated guess). Then, once you've built the first prototype of your simulation model, try testing your model inputs (such as customer arrival rates or processing times) by changing them and monitoring their impact on your key metrics. After performing this analysis, you'll know which model inputs have the strongest impact on the key metrics. You can then determine which model inputs are most valuable to your simulation project. In other words, you will know which model inputs might justify a more expensive data-gathering technique to get higher quality data. A minimal sketch of this loop appears below.
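In code form, a sensitivity analysis is a loop over perturbed inputs with the key metric recorded on each run. The stand-in run_simulation below is a self-contained single-server queue used purely for illustration; in a real project that call would drive your actual FlexSim model (for example through its experimenter tooling), which is an assumption of this sketch rather than a documented API:

```python
import random

def run_simulation(mean_process_time, mean_arrival_gap, n_customers=1000):
    """Stand-in single-server queue; returns average wait (the key metric)."""
    clock = wait_total = server_free_at = 0.0
    for _ in range(n_customers):
        clock += random.expovariate(1.0 / mean_arrival_gap)  # next arrival
        start = max(clock, server_free_at)                   # wait if busy
        wait_total += start - clock
        server_free_at = start + random.expovariate(1.0 / mean_process_time)
    return wait_total / n_customers

baseline = {"mean_process_time": 4.0, "mean_arrival_gap": 5.0}
base_metric = run_simulation(**baseline)

# Perturb each input by +/-20% and watch how strongly the metric responds
for name in baseline:
    for factor in (0.8, 1.2):
        inputs = dict(baseline, **{name: baseline[name] * factor})
        print(f"{name} x{factor}: avg wait {run_simulation(**inputs):.2f} "
              f"(baseline {base_metric:.2f})")
```

Inputs whose perturbation barely moves the metric can safely keep their rough estimates; inputs that move it sharply are the ones worth a time study. | https://docs.flexsim.com/en/21.1/BestPractices/BeforeBuilding/DataGathering/DataGathering.html |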
SOAR telescope observing Dimorph after collision with DART spacecraft
(ORDO NEWS) — The DART (Double Asteroid Redirection Test) spacecraft deliberately crashed into the asteroid Dimorphos, a satellite of the asteroid Didymos, on September 26, 2022.
This was the first planetary defense test in which scientists tried to change the orbit of an asteroid.
Two days after the impact, astronomers used the 4.1-meter Southern Astrophysical Research (SOAR) Telescope at the Cerro Tololo Observatory in Chile to capture a huge column of dust and debris erupting from the asteroid's surface.
“Now the next phase of the DART team’s work begins as they analyze their data and the observations of our team and other observers around the world who participated in the study of this exciting event,” said Matthew Knight, an astronomer. “We plan to use SOAR to monitor the release in the coming weeks and months.”
These observations will allow researchers to gain knowledge about the nature of Dimorphos's surface, find out how much material was ejected as a result of the collision, and determine at what speed it was ejected.
Also, scientists will be able to study the distribution of particle sizes in an expanding dust cloud. The analysis of this information will help protect the Earth and its inhabitants in the future.
The Vera K. Rubin Observatory, currently under construction in Chile, is scheduled to conduct a census of the solar system to search for potentially hazardous objects.
| https://ordonews.com/soar-telescope-observing-dimorph-after-collision-with-dart-spacecraft/ |
POWERFUL signals are being beamed in Earth’s direction from deep space at an unprecedented rate.
According to scientists, a repeating fast radio burst source discovered last year was recorded firing more than 1,800 bursts our way within the space of two months.
The hyperactive nature of the source allowed researchers to pinpoint its host galaxy and origin.
Named FRB 20201124A, the object was detected using the Five-hundred-meter Aperture Spherical radio Telescope (FAST) in China.
It was described in a paper led by astronomer Heng Xu of Peking University in China.
Fast radio bursts, or FRBs, are a mysterious space phenomenon.
The high-intensity emissions usually last only for a fraction of a second and their origins were unknown until recently.
There have been a few thousand caught by scientists since the first was detected in 2007.
All FRBs are unusual, but the newly discovered one was especially odd.
Over 82 hours of observation spread over two months, according to the paper published in Nature, FAST detected 1,863 bursts.
Its polarisation and signal strength swung wildly, making it the first FRB to show these kinds of variations in its waves, study author Fayin Wang of Nanjing University told Inverse.
The evidence so far points to its source being a magnetar, a neutron star with a powerful magnetic field.
However, the way its polarisation changed over time suggested another object may be contributing to the signals.
“These observations brought us back to the drawing board,” said astrophysicist Bing Zhang of the University of Nevada, Las Vegas.
“It is clear that FRBs are more mysterious than what we have imagined. More multi-wavelength observational campaigns are needed to further unveil the nature of these objects.”
Almost all FRBs detected so far have come from too far away to clearly make out where they originated.
Only a handful have repeated, and fewer still in a predictable pattern.
This makes them notoriously difficult to study, meaning their origins have eluded scientists for over a decade.
It’s thought the signals come from huge explosions in deep space that fade away in less than a second.
In 2020, researchers said they had pinpointed radio flares coming from an object known as a magnetar.
Magnetars are a type of neutron star with a hugely powerful magnetic field – only a handful of them are thought to be present in the Milky Way.
Physicists have previously speculated that magnetars might produce FRBs but there was no evidence to prove that was the case.
It means the signals don’t come from alien civilisations, a theory touted by some UFO hunters but dismissed by scientists. | https://news.nmnandco.com/2022/09/28/mysterious-radio-signal-from-deep-space-flashes-earth-2000-times-within-2-months-the-us-sun/ |
Heart versus Brain
02 October, 2012
KHARTOUM, (Sudanow)- Not for a second would we hesitate to answer the question of which comes first: the brain thinks and the heart beats. This belief has held for centuries, that is, until recently, when a professor said the time has come for us to reconsider this dictum. Professor Amnah al-Fekki Salih argues that between the action and the thinking there is a very minuscule fraction of time which scientists have failed to explain. She said this is the time in which the heart issues the order, and the brain, receiving the order, sends a signal for the concerned body part to act. Her argument is not only based on religious belief, but is fully backed by new scientific findings which, she says, time will prove to be correct and sound.
Interviewed by Ishraga Abas, a sudanow.info.sd reporter, the Sudanese science professor argues that:
- The operation of the heart is half a second ahead of the brain: the duration it needs to establish the idea, prepare the piece of information and transmit it to the brain
- I was faced with numerous pros and cons
- My hypothesis does not contradict, but reinforces, the views of the Muslim scholars
- I utilized the research of Western scholars but, contrary to them, I succeeded in confirming my hypothesis because I have considered the heart while they have excluded it
- The new hypothesis will change many scientific theories in the West, including Charles Darwin's theory of evolution
Following is the text of the interview:
Question: All scientists, predecessors and contemporary, believe that the brain is the source and center for the consciousness, ideas and mobility … but you have come up with a new hypothesis that contradicts centuries of thought… would you explain this hypothesis?
Answer: My hypothesis is that the heart is the source and origin of the intention and voluntary action. The heart creates the idea or intention, takes the decision of implementation and dispatches signals through electro-magnetic waves and certain hormones excreted by the heart to the brain for carrying out any voluntary action in cases of wakefulness. It is therefore the heart that creates the idea and takes the decision and it is the brain that discharges the task. God’s Prophet (PBUH) was right when he said: “Actions are based on intentions and every person gets what they intend”
Question: How do the mobility and behavior occur in cases of consciousness and wakefulness in accordance with this analysis?
Answer: Wakeful consciousness is a case that occurs when the heart emits to the brain signals containing information and a decision it has already taken. Those signals are dispatched to the different centers of the brain for action. The emitted signals contain instructions for carrying out any action, such as holding the pen and writing. If the person intends to write, this intention springs from the heart; the task of the heart is creating the intention, while the task of the brain is alerting the audio-visual senses for action. The heart sends a signal to the center of the hand in the brain, which forwards this signal to the brain crust, which in turn transmits it to the hand center, and thus the writing process is done by the mobility of the hand in compliance with the heart's free will.
The consciousness hypothesis underlines that in order to carry out any voluntary action in the state of consciousness; there must be three basic centers – the heart, the brain and the organ. The role of the heart is that its nervous system carries out the thought processing and analyzes the information incoming from inside the heart (the intention) and from outside the heart, whether from the chest or any nervous cells in the body such as the skin directly through the blood and the electro-magnetic waves. This information may reach the heart from outside the body. After receiving the information, whether from inside or outside the body, the nervous heart system carries out the analysis and recognition processes and takes the appropriate decision and transmits the signals to the brain.
The role of the brain starts when the signals arrive from the heart to all centers of the brain, the most important of which are: 1) the voluntary mobility centers in the brain crust which engender any movement upon receiving the signal from the heart like writing, painting, playing a tune, etc, or in the legs area if the signal of the heart orders walking and the legs respond accordingly.
The role of the organ starts when any brain mobility center receives the signal; and having its center activated, the organ performs the movement like the hand for writing and the legs for walking. The hypothesis indicates that the brain is the origin of the involuntary movements only, such as the movement of the hands forward and backward during walking.
Question: How did you start contemplating this hypothesis and what were the observations that prompted it?
Answer: It is a long story that began when I was in the preparatory year of the University. I asked Professor Abdulla Al-Taieb whether there may be a function for the heart other than pumping the blood, and he responded by saying that there may be another function. When I began my medicine studies, I developed a concern with the functions of the heart. I contemplated Verse No 179 of Surah A'raf (The Heights): "Many are the Jinns and men we have made for Hell: they have hearts wherewith they understand not, eyes wherewith they see not and ears wherewith they hear not. They are like cattle – nay more misguided: for they are heedless (of warning)." The word 'understand' was the key to my research. At this point, I must mention Professor Abdul Rahman Al-Agib who, in the 1970s, wrote a book titled (The Koran as a Key to Scientific Research). His theory has been confirmed, as the Koran is now the source of all sciences of medicine, physics, astronomy and chemistry. I am not saying this because I am Muslim but because I noticed this during my research works in the United States of America.
Question: When did you start applying or testing this hypothesis?
Answer: It was in 1982 when I was sent for a scholarship in the United Kingdom where I presented this hypothesis; but the response of the academic institutions was that they could not assist me as there was no research on this issue and that they believe that the brain is the origin and source of the will and mobility.
I also discussed the idea with a number of Sudanese scientists and presented them with a paper on my research. Those included Professor Jaafer Sheikh Idriss, Professor Suleiman Salih Fidail, Professor Al-Dhaw Mukhtar and Professor Abdulla Al-Tayeb. Some of them told me that what I was trying to prove contradicts numerous concepts while others commented that this research requires much money and tremendous efforts. This proved true as I continued throughout the period from 1965 to 2012 trying to prove my concept and, praise be to Allah, I succeeded in the end.
After nearly four decades of research and contemplation, I was able to prove in April 2012, at Arizona University in the US, that the heart is the most important organ and is the source of voluntary mobility. The analytical research confirmed this hypothesis by 100%. The consciousness studies research center of the university awarded me the intellectual property patent in acknowledgement of its soundness. The Director of the Center, Professor Stuart Hameroff, said he supports my ideas and is looking forward to my further efforts to conduct laboratory research to prove the hypothesis
Question: Were there scientists who shared this thought with you and have you made use of relevant research works?
Answer: American scientist Benjamin Libet spent 40 years analyzing this question but could not prove anything because he based his theory on the concept that the brain is the source and origin of mobility, will and implementation, i.e., only from the brain to the brain. He found in his measurements that there is half a second, maybe more or less, between the intention and the implementation that does not appear in the brain's planning. He could not explain the origin of this fraction of time that precedes the start of the movement.
I benefitted from his research and managed to prove that the heart's action precedes the action of the brain and that there is half a second of heart action, which is the time the heart spends creating the thought, preparing the information and transmitting it to the brain. In 1995, Canadian scientist John Armour managed to prove that the heart has a nervous system that has no relationship with the brain; I took this as a basis of my hypothesis and greatly benefitted from it. I also benefitted from the explanations and meanings of the verses of the Koran which contain references to the heart and such terms as to understand, recognize, etc.
Question: Numerous scientific theories were based on the hypothesis that the brain is the origin of mobility and intention. What will be the impact of the new hypothesis you have proved?
Answer: The new hypothesis is due to change many concepts and sciences in the West and elsewhere, which were all based on the premise that the brain is responsible for the tasks carried out by man. The Darwinian evolution theory, for instance, will lack foundation, and so will political and social theories and those of education, ethics and spiritual values, and many other sciences will be of no value. Following confirmation of this analysis and hypothesis, one American scientist declared the importance of the heart in establishing science and conserving the future of the coming generations by calling "Let us conserve our future". The University of California appropriated 5 million dollars for scientific research that can bridge the gap between science and religion.
Question: Have you confronted any opposition by other scientists towards this hypothesis?
Answer: At first, some British scientists described my initial analysis as fanciful and did not approve of conducting the analysis; and in 2009, arbitrators and reviewers of a specialized American magazine refused even to publish my relevant research.
Question: In opposition to this, have you received any support from other scientists?
Answer: Yes. When the arbitrators and reviewers turned down my thought in 2009, the American philosophical studies magazine contacted me and published the research in 2010. Specialized scientists in Malaysia, who stressed the importance of its contents, also supported it. Scientists in Arab and Muslim countries, including Qatar, also endorsed the hypothesis
Question: Does your research contradict the views of the Muslim jurisprudents and scholars since the Prophet (PBUH) Companions and those of the Islamic knowledge renaissance age such as Al-Ghazali and Ibn Rushd who explained the relationship between the heart and the brain in accordance with the Koran and Sunna of the Prophet (PBUH)?
Answer: On the contrary, it does not differ from or contradict those views. Rather, it adds a brick to the structure they have built, as they have indicated this premise; but the laboratory empirical science was not available to them as it is for us at present. When he was once asked whether it is permitted to conduct research, one contemporary Muslim scholar of jurisprudence responded by underlining that research and contemplation never contradict any one of the Islamic tenets and that a physician has to contemplate and conduct research on verses of the Holy Koran
Question: When are you planning to start laboratory tests on this analysis?
Answer: I will start the tests immediately on the laboratory mice
Question: Are you going to conduct these tests in the Sudan?
Answer: No. The laboratories appropriate for such research are not available in the Sudan, nor are they available in the Arab, African and Asian regions; they are found only in the United States of America, Canada and Sweden and, for this reason, I will do my research in one of those countries
Question: How are you going to finance the research?
Answer: I hope the state will assist and stretch a helping hand to me as this research is conducive to the Sudan’s scientific image. I also hope I will get assistance from the Arab and Islamic academic and scientific institutions in addition to institutions of the inimitable scientific characteristics in the Koran and Sunna in Saudi Arabia and elsewhere as the research props up the scientific status of the Arabs and Muslims
Question: What is the outstanding landmark in your professional life? | https://sudanow-magazine.net/pageArch.php?archYear=2012&archMonth=10&Id=1599 |
In Chile, the SOAR telescope, operated by the American NOIRLab, captured a long plume of fragments and dust thrown off by the asteroid Dimorphos in NASA's DART orbit-deflection experiment.
Two days after the DART strike, American astronomers used the 4.1-metre Southern Astrophysical Research (SOAR) Telescope to image the large plume of dust and fragments that left the asteroid after the impact.
The next phase of the DART team's work involves data analysis and further observations by researchers from around the world. The SOAR telescope is planned to be used to monitor the ejecta in the coming weeks and months, and the combination of SOAR with the observing capacity of the Astronomical Event Observatory Network allows for effective monitoring of dynamic events like this one.
The observations will let scientists learn about the nature of the surface of Dimorphos: how much matter was released as a result of the impact, how quickly it was released, and what it consisted of, for example whether the impact ejected predominantly large fragments or mainly dust. Analysis of this information could help scientists protect the Earth and its inhabitants through a better understanding of such processes.
The SOAR observations demonstrate the capacity of the Association of Universities for Research in Astronomy to contribute to planetary defence, while the Vera C. Rubin Observatory, now under construction in Chile, will allow the solar system to be surveyed for potentially hazardous objects. | https://skilfulseo.com/article/8839/asteroid-dimorph-became-like-a-comet-after-the-dart-probe-struck-with-a-tail-of-10000-km/ |
In the early 1900s, Edwin Hubble observed that distant galaxies are moving away from us. This surprising observation led Hubble to hypothesize that the universe is expanding; more recently, scientists have observed that not only is the universe expanding, but that the expansion is accelerating! The evidence for acceleration emerged from observations of a particular kind of supernovae (Type Ia supernovae) that have a uniform expected brightness. This uniform brightness allowed scientists to determine how far away each supernova was and to discover that distant supernovae were moving slower than expected, and conclude that the universe hasn’t always been expanding at the same rate.
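The "uniform expected brightness" is what turns these supernovae into distance markers: if the absolute magnitude M of a Type Ia supernova is approximately known, measuring its apparent magnitude m fixes its distance d through the standard distance-modulus relation (with d in parsecs):

```latex
m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
```

Comparing these distances against each supernova's redshift is what reveals how the expansion rate has changed over cosmic time.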
Flash forward to this year – astronomers studying the same Type Ia supernovae made the contrary claim that a more rigorous analysis of the data rules out a universe in which the expansion is accelerating. A number of media outlets have sensationalized this finding; however, the new analysis only reduces the likelihood of an accelerating expansion by a mere 0.3%. That means that we can still say with 99.7% certainty that the universe is indeed accelerating. Adding to the overblown nature of these claims, scientists have criticized the study for its failure to accurately account for a number of small but significant differences (like intervening dust) which should be accounted for in analyzing the data.
Even if the researchers’ new analyses turn out to be wrongly criticized, Harvard Astronomy graduate student Jieun Choi remarks, “There are independent results that have now demonstrated time and time again that the universe is accelerating in its expansion. So even if we took Type Ia supernovae out of the picture, we still have sufficient evidence,” including fluctuations in the cosmic microwave background (radiation from the early universe), among others. Despite what the headlines would have you believe, it appears that all signs still point to an accelerating universe.
Acknowledgements: Many thanks to Jieun Choi, a graduate student in the Harvard Astronomy department, for her extremely helpful insight and expertise on the subject. | http://sitn.hms.harvard.edu/flash/2016/dont-worry-expansion-universe-still-accelerating/ |
The discovery of a gravitational wave caused by the merger of two neutron stars, reported today by a collaboration of scientists from around the world, opens a new era in astronomy. It marks the first time that scientists have been able to observe a cosmic event with both light waves — the basis of traditional astronomy — and gravitational waves, the ripples in space-time predicted a century ago by Albert Einstein’s general theory of relativity.
The discovery was made using the U.S.-based Laser Interferometer Gravitational-Wave Observatory (LIGO); the Europe-based Virgo detector; and some 70 ground- and space-based observatories. The first detection of gravitational waves, made in 2015, earned LIGO’s leaders the 2017 Nobel Prize in Physics; in that case, scientists determined the waves were touched off by a collision of black holes, an event that isn’t expected to give off light.
The new discovery, involving neutron stars, “allows us to link this gravitational wave source up to all the rest of astrophysics: stars, galaxies, explosions, massive black holes and, of course, neutron-star mergers,” says McGill University astrophysicist Daryl Haggard, who led one of many teams of affiliated scientists around the world who examined the source of the latest gravitational-wave signal. “It’s an entirely new level of knowledge.”
A New Perspective on Gamma-Ray Bursts
Haggard and McGill postdoctoral researchers Melania Nynka and John J. Ruan are the lead authors of a paper, published in Astrophysical Journal Letters, that details their team’s observations using NASA’s orbiting Chandra X-ray telescope, trained on the point in the sky identified as the origin of the gravitational wave that reached Earth on Aug. 17.
Those observations confirmed that the collision of the two neutron stars — among the densest objects in the universe — also touched off a violent jet of hot plasma known as a gamma-ray burst in a galaxy about 138 million light-years from Earth. What’s more, the team determined, the burst is the first that astronomers have observed that is “off-axis,” or not pointed toward Earth — providing a perspective that could enable scientists to better understand how these potent bursts impact their surroundings.
“The gamma-ray bursts that are easiest to detect are ones with bright jets of emission pointed at Earth,” Nynka explains. “It’s easiest to see a spotlight when it is shining directly at you, but sometimes the light may be too bright to sort out the whole thing. When the light is pointed a little off to the side, as in this case, it gives us a different view.”
Neutron stars, formed when massive stars explode in supernovas, are so dense that they pack two or three times the mass of our Sun into a body roughly the size of a city such as Boston or Montreal. A teaspoon of neutron star material has a mass of about a billion tons.
“We’ve thought for a while that two neutron stars smashing together might lead to a gamma-ray burst,” Haggard says. “But the combination of a gravitational wave detection and the data we’re collecting from observatories like Chandra seals the deal.”
Probing the Origins of Heavy Elements
Mergers of neutron stars are thought to be responsible for producing most of the heavy elements in the universe, such as gold, platinum and silver. Further study of such collisions could help scientists determine the origin of these elements, which make up almost half of the periodic table. Already, follow-up observations by telescopes around the world have revealed signatures of recently synthesized material, including gold and platinum.
The gravitational signal, named GW170817, was first detected on the morning of Aug. 17 by the two identical LIGO detectors, located in Hanford, Washington, and Livingston, Louisiana. The information provided by the third detector, Virgo, situated near Pisa, Italy, helped narrow down the location of the cosmic event.
At nearly the same time, NASA’s Fermi space telescope detected a burst of gamma rays. LIGO-Virgo analysis software put the two signals together and determined that they were highly unlikely to be a chance coincidence. Rapid gravitational-wave detection by the LIGO-Virgo team, coupled with Fermi’s gamma-ray detection, enabled the launch of follow-up observations by telescopes around the world, including Chandra.
“The X-rays from this merger were very dim at first, but then suddenly brightened about 10 days afterward,” says Ruan, a postdoctoral fellow in Haggard’s research group at the McGill Space Institute. “This was entirely unexpected, and our modeling showed that this behavior is due to the jet from the gamma-ray burst being ‘off-axis’ — pointed away from the Earth — a phenomenon we have not seen before.”
In the weeks and months ahead, telescopes will continue to observe the afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe. | https://spaceref.com/press-release/latest-gravitational-wave-detection-opens-new-era-for-astronomy/ |
“Follow the water” has long been the mantra for scientists searching for life beyond Earth. After all, the only known cradle of life in the cosmos is the watery planet we call home. But now there is more evidence suggesting that a possible discovery of liquid water on Mars may not be so airtight, researchers report September 26 in Nature Astronomy.
In 2018, scientists announced the discovery of a large underground lake near the south pole of Mars (SN: 7/25/18). That announcement, and follow-up observations suggesting additional underground pools of liquid water on the Red Planet (SN: 9/28/20), fueled the excitement of finally finding an alien world possibly conducive to life.
But since then, researchers have proposed that those discoveries might not stand up to scrutiny. In 2021, one group suggested that clay minerals and frozen brines, rather than liquid water, could be responsible for the strong radar signals the researchers observed (SN: 7/16/21). Spacecraft orbiting Mars beam radio waves toward the Red Planet and measure the timing and intensity of the reflected waves to infer what lies below the Martian surface.
And now, another team has shown that ordinary layers of rock and ice can produce many of the same radar signals previously attributed to water. Planetary scientist Dan Lalich of Cornell University and his colleagues calculated how flat layers of bedrock, water ice, and carbon dioxide ice, known to be abundant on Mars, reflect radio waves. “It was a pretty simple analysis,” says Lalich.
The researchers found that they could reproduce some of the anomalously strong radar signals that had been attributed to liquid water. Individual radar signals from different layers of rock and ice add up when the layers are of certain thicknesses, Lalich says. That produces a stronger combined signal, which is then picked up by a spacecraft’s instruments. But those instruments can’t always tell the difference between a radio wave that comes from one layer and one that is the sum of reflections from multiple layers, he says. “They look like a radar reflection.”
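To make that layering effect concrete, here is a minimal, hypothetical sketch of two coherent radar echoes, one from the top and one from the bottom of a single subsurface layer, adding up at certain thicknesses. The wavelength, reflection coefficients, and thickness range are invented for illustration and are not taken from the study:

    import numpy as np

    # Hypothetical two-interface model: the echo from the bottom of a layer
    # lags the echo from the top by a round trip of twice the layer thickness.
    wavelength = 100.0                  # assumed radar wavelength in the layer, m
    r_top, r_bottom = 0.3, 0.3          # assumed reflection coefficients

    thickness = np.linspace(1.0, 200.0, 400)              # candidate thicknesses, m
    phase_lag = 2 * np.pi * (2 * thickness) / wavelength  # round-trip phase lag

    # Coherent sum of the two echoes, as a single instrument would record it
    power = np.abs(r_top + r_bottom * np.exp(1j * phase_lag)) ** 2

    print(f"echoes reinforce most near {thickness[np.argmax(power)]:.0f} m thickness")

Wherever the round trip is close to a whole number of wavelengths, the two echoes reinforce and mimic the single bright reflection that a pool of liquid water would produce.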
These results do not rule out liquid water on Mars, Lalich and his colleagues acknowledge. “This is just saying that there are other options,” he says. | https://exquisitepost.com/mars-buried-lake-might-just-be-layers-of-ice-and-rock/ |
The way one teaches is inevitably coloured by one’s experiences as a student, and for me formal feedback was always enormously important. I pored over it, both for the obvious pleasure of compliments when a task was done well, but also to glean insights and advice for how I could improve in the future (I willingly admit I was, and still am, a nerd). When the feedback was cursory or ill thought out I found the experience disappointing and it made me less inclined to try as hard on future assignments for that teacher. I recognise that this isn’t the case for all students (or many at all) but this experience still influences the way I approach writing feedback. I want to explain and to some extent justify the grade that has been given, but I also want to provide advice and guidance where possible so that the student can continue to develop their course work, whether for their own benefit or for wider audiences.
As a result I tend to put more time and thought into it than is perhaps sustainable for my practice as a teacher, particularly when it comes to marking smaller assignments or ones where there are many to get through. Conscious of this, I have started to think about strategies that might make the delivery of feedback more effective and perhaps also more useful to my students. Ideas for this have come from a few sources, including colleagues at UAL and at other institutions, and from discussions with some of my students. For the benefit of this journal I am going to focus here on a paper by David Carless, Diane Salter, Min Yang and Joy Lam, which proposes and advocates for a more sustainable feedback model based on research and interviews with award-winning lecturers. While they conceive of sustainability primarily in terms of a student’s lifelong learning, some elements of what they propose might, I think, also be adapted to make the feedback process more sustainable for tutors as well.
The piece opens by acknowledging the importance of feedback, with a critique of current feedback practice, and by repeating the often heard (although not universally accepted) mantra that students tend to value feedback less than teachers. Crucially they explain this as being less to do with the inherent usefulness of feedback, than with the way it is currently practiced in higher education. A particular issue they identify is the ‘finality of one-way written comments’ and also the lack of training and opportunities for students in how to use and act upon feedback. They also identify the swelling of higher education as a particular problem for tutors looking to provide suitably personalised student feedback. What the authors ultimately call for is not just the usual small scale tinkering with the feedback process, but a more thorough reconceptualization of what constitutes feedback.
From this introduction the article moves into a review of the practices of a number of award-winning teachers at the University of Hong Kong, the results of which were subdivided into a series of categories for further analysis: two-stage assignments and their role in facilitating feedback, dialogic feedback through oral presentation tasks, the use of technology to facilitate feedback, and finally the notion of student self-evaluation. For reasons of length I won’t analyse each of these categories in great detail, but will make a few observations on each. In terms of two-stage assessment, several of those interviewed reflected on the positives of this approach. One pointed out how previously feedback could feel like ‘throwing a stone in the sea’, with little idea whether it was acted on. Another reflected on the heavy marking workload of dual submission points, but argued that it was worth it for the resultant learning benefits they saw in students. Peer review was mentioned as a potentially useful technique at the first stage of assessment, although there was also a recognition of the difficulties of motivating students to participate in peer feedback (an issue which might be addressed with recourse to peer mentoring techniques).
In terms of dialogic feedback, many of those surveyed employed oral presentation as a method of assessment. One tutor reported videotaping these presentations and then asking the students to effectively self-assess, reflecting on their own performance (by his own admission not an uncontroversial approach). Another reported using frequent, short presentations as an assessment tool, stating that its value also lay in building up public speaking skills which were key to the subject they taught. Moving on to technology-supported feedback, this was quite widely used. One teacher advocated online feedback, with students given the opportunity to post drafts of assignment work and to view and comment on those by other students. He characterised feedback as a scaffolding or support which tutors give to students, but also cautioned against students becoming over-dependent on ongoing feedback and argued they should be asked to find the answers themselves too. To add an example from my own experience, colleagues at Ravensbourne College have experimented with using recorded feedback for students. They have found this reduces the burden on tutors and engages students better than written feedback.
The final section of the paper discusses examples of student self-evaluation. One informant suggests this is helpful because it reduces the amount of guidance provided by the teacher and pushes students to find answers themselves, achieving both the goal of equipping students with lifelong skills and reducing the burden on tutors. One example of this is the use of pre-assessment workshops as a way to help students evaluate their learning; the informant writes that ‘my strategy is that the same question has to be asked twice, so students can realize what they have learnt.’ Another informant emphasised the importance of ‘feedback in real time’, asking constant questions in classes as a way to assess student understanding and offer students opportunities to question in return. He also coins the idea of ‘provocative feedback’, intended to open a dialogue or discussion. These approaches, like any, can introduce tensions; the same informant writes that ‘initially, I would think that they are probably a bit disappointed because they expect the teacher to teach them. In the end, they value the fact that you respect what they brought to the class’.
Altogether this paper provides a number of ideas for how feedback might be more evenly distributed throughout the teaching process, to the potential benefit of both students and tutors. For students, such distributed and dialogic approaches offer more opportunities to understand feedback, incorporate it, and adapt their activities as they work. For tutors, it potentially opens up more sustainable models of feedback, which in various ways place more of the onus on students to self-evaluate and discuss feedback, reducing the burden on tutors to provide a stream of specific, dense information at the end point of grading and marking. | https://lewisbush.myblog.arts.ac.uk/2017/07/26/towards-sustainable-feedback/ |
COTA celebrates new High Street passenger shelters
COLUMBUS— On Wednesday, August 13, the Central Ohio Transit Authority (COTA) celebrated the installation of new Downtown passenger shelters on High Street, designed in partnership with Columbus College of Art & Design (CCAD).
Gathered at the northbound shelter on S. High Street between Fulton and Mound streets, various project partners, COTA employees and COTA Trustees took part in a ceremonial ribbon cutting.
The ribbon cutting featured Michael Young, who designed the shelters as a student at CCAD. Michael graduated in 2012 and is a Junior Designer for Zukun Plan, a product design firm located in Gahanna.
COTA challenged a class of CCAD industrial design students to submit designs for the new passenger shelters. The students were asked to incorporate innovative materials, recommend sustainable or "green" features, provide protection from the elements for customers, and were required to comply with Americans with Disabilities Act requirements. The community and COTA staff had the opportunity to offer feedback on designs from three finalists, and Michael Young’s design was selected to be constructed along High Street.
Speakers at the event included Curtis Stitt, President/CEO, COTA, Tom White, President, CCAD, and Marilyn Brown, Franklin County Commissioner.
The new shelters demonstrate COTA’s commitment to providing customers with a clean and safe place out of the elements to wait for the bus, and reinforce COTA’s investment in the revitalization of Downtown.
The 13 new shelters cost approximately $479,735 for fabrication and installation, which is being completed by Brasco International, Inc.
For more information please visit cota.com or contact COTA at (614) 228-1776. | https://www.yournewscolumbus.com/cota-celebrates-new-high-street-passenge |
About This Class
Welcome to Toplining 101! In this class we’ll be exploring melody and lyric in songwriting.
Throughout these lessons you’ll learn about what ‘toplining’ actually is, and how to create memorable, meaningful lyrics and melodies in your songs that will connect to both you and your listener!
This class is for anyone and everyone wanting to explore these skills. Whether you are already a producer, an artist or an instrumentalist, or making music is completely new to you, I’ll be providing you with practical, usable steps and creative inspiration that will become your go-to building blocks in developing your own writing style.
Expressing ourselves and connecting with others through music is something that is so powerful and beneficial for both us as the creators and our listeners, but it can be hard to know how to start, especially if we’re just waiting for inspiration to hit us! The skills you’ll learn in this class will teach you how to approach your songs, and how to consistently find your way to a great melody and lyric combination. These are without a doubt the most important elements of a song, and once you have the tools to create them, you can take your song ideas forward in an infinite number of ways!
The only tools needed for this class are somewhere to write (a journal/notebook, computer or even your phone, whatever you prefer) and a device to record your ideas (this can be a microphone and your computer with preferred DAW, or simply a phone with a voice notes app or dictaphone).
For producers taking this class, you will be using your own beats to write to, and if you play an instrument you will be encouraged to use it – however, these are not requirements, so do not worry if neither applies to you!
Having worked in the music industry as a songwriter for nearly ten years now, with artists, producers and fellow writers at every stage of their musical journeys, I’ve learned so much about songwriting and the creative process in general that I can’t wait to share with you! I’m so excited to explore these writing tools with you and hear the work you create.
Meet Your Teacher
Hi, I'm Clare and I'm a songwriter from London. I've been working in the music industry for over a decade - both as a songwriter and as a vocalist, and I'm super passionate about sharing my experiences (and boy, have I had a lot of them!) with others.
I work daily with incredible award-winning producers and upcoming artists, both here in the UK and across the globe, and have encountered pretty much every scenario (good and bad) that you might come across when creating a song! I'm excited to share with you my knowledge on everything songwriting - from practical advice on crafting melody and lyrics, to tips on collaboration, creativity and resilience. Whether you are stepping into songwriting …
Hands-on Class Project
As a class project you’ll be working on your own song topline!
Each lesson will focus on a new step in the process, so that by the end of the class you have a completed, fully formed topline! I’ll show you how to get started and approach this project from the perspective that’s right for you and your experience, and then take you through the process of crafting your topline, one manageable step at a time.
At the end when you have your idea you can then share this with me and your fellow class mates to get feedback and advice on the next steps to take with your idea.
If you use a music software program such as Logic, Pro Tools or GarageBand, you can use this to record and share your topline. If you have no idea what those programs are you can absolutely just use the voice notes app that you’ll find on most smart phones, or video yourself or someone else singing your song. You can upload your file as an MP3 or M4A (voice note) audio recording, or as a video. You can also share links to your work via SoundCloud and YouTube.
Remember this class is NOT about recording perfect quality song demos, using music technology or great singing voices - our focus is on crafting memorable, meaningful melodies and lyrics, whatever stage you’re at in your musical and songwriting journey.
If you need any guidance along the way please reach out and I'll be there to help and offer feedback!
I’m so excited to hear your work!
| https://www.skillshare.com/classes/Toplining-101-Melody-Lyrics-in-Songwriting/2099867477?via=browse-rating-music-industry-layout-grid
This paper describes the perceptions of undergraduate students and their advisors on the role and challenges of academic guidance in Saudi Arabia. Five focus groups, each comprising six to eight students and one advisor, were interviewed, and their responses to four questions were qualitatively analyzed. The responses from all groups coalesced into four major themes, two related to the students’ perspectives and two related to those of the advisors. Overall, the students identified unfamiliarity with the purpose of academic guidance and a failure of their advisors to follow their progress as the primary challenges. The advisors highlighted the lack of student feedback and of academic guidance training as the obstacles to successful student progress. The findings presented here suggest that universities should incorporate student and advisor feedback into their academic guidance systems to ensure student success.
DOI: https://doi.org/10.22158/jecs.v2n3p118
This work is licensed under a Creative Commons Attribution 4.0 International License. | http://www.scholink.org/ojs/index.php/jecs/article/view/1460 |
Every Wednesday evening, soulful melodies escape from a rehearsal room at the University of Miami’s Frost School of Music, as a new music program educates and prepares aspiring student musicians for a potential career on stage and in the spotlight.
The Wednesday night class, which is called Afro-American Song Traditions, is one of five main classes offered by the Creative American Music Program. The program, founded in March by Bruce Hornsby, a Grammy award-winning songwriter, performer and UM alumnus, aims to help educate aspiring songwriters and performers in the Frost School of Music. Class subjects include Modern American Pop Music and Anglo-American Song Traditions. While Hornsby has been known to attend some of the classes and offer advice to the students, other well-trained instructors, such as Nicole Warling, a professor in the Frost School of Music, critique and provide guidance in order to help their students perform to the best of their ability.
“The jazz drumbeat is kind of sloppy,” Warling told a student during an Afro-American Song Traditions class on Oct. 1. Many students laughed, encouraging Warling’s honest comments. She added, “It’s sloppy, but it feels right.”
Warling explained that the purpose of the class is to expose young musicians to the history of Afro-American song traditions.
“In this class we study field hollers, spirituals, gospel, blues, jazz,” Warling said. “Bruce [Hornsby] and I are on the same page. You need to take things apart to see how they work.”
When advising students on writing songs in the Afro-American tradition, Warling tells her students to “get the flavor of it and make your own interpretation.”
Freshman Ben Goldsmith said that the class has inspired him.
“Keep in mind how simple the blues are. With pop music today, it’s important to know that something simple can be catchy,” Goldsmith said.
The program, however, does not simply instruct students in music theory and history. Students write and perform their own material and receive feedback from not only their professor but fellow students as well.
Jessie Allen, a junior in the program, writes original piano music and has presented it to her classmates and professor.
“I wrote this song two weeks ago,” Allen said. During class, Allen began to sing over her slow piano melody. When she finished, another student raised her hand and suggested a chord change.
Many students in the program find it beneficial to receive feedback from their peers, in addition to their professor.
“It’s really good to have fellow songwriters critique your stuff,” junior Elaine Maltezos said.
As for Afro-American music, Maltezos said she is inspired by the emotion and the vocals.
“I like the idea that music doesn’t have to be tight and strictly regulated. It can just be free and convey so much emotion.”
Warling said that performance is an important aspect to the curriculum.
“I am a big advocate of practical application,” Warling said. “Learning theory for four years at school puts you in a sheltered environment. The students should be able to get out there in the real world and perform.”
Many of the students are competing in a songwriting contest that will give the winners the opportunity to perform as the opening act for the Bruce Hornsby and Friends concert. The concert will take place at the BankUnited Center at 8 p.m. Thursday. The students will also be performing “Pick a Bale of Cotton,” as well as their own musical material at a Books and Books location on November 1 and 8. For details, visit www.booksonbooks.com.
To learn more about the Creative American Music Program, please visit http://www.creativeamericanmusic.net. | https://www.themiamihurricane.com/2008/10/21/frost-music-program-prepares-students-to-perform-at-bruce-hornsby-show-thursday/ |
Students work with staff at the forefront of research in their disciplines, and have access to dedicated facilities and equipment.
Our specialist degrees, with the option of placement, provide students who have completed a degree in computer science or a related subject with further advanced study to build on what has been learned.
Our conversion degrees, with the option of placement, are designed for graduates from non-computing backgrounds to develop the technical, analytical and professional skills required for a computing role.
Together with other academic schools we offer a number of degrees designed to help you develop and apply core skills and knowledge across disciplines and give you the opportunity to study cutting edge topics in each area.
Enhance your CV and boost your employment prospects by opting to study for your master's degree along with a work placement year. The School is one of only a few in the UK to offer postgraduate work placement opportunities in the field of computer science.
Your paid year in industry would take place following successful completion of the taught elements of the course, allowing you to incorporate your industrial experience into the final dissertation which is completed following your work placement.
The School has a dedicated placement officer to ensure you have access to a broad variety of placement opportunities, in addition to providing constant support and guidance throughout the entire process. | https://www.cardiff.ac.uk/cy/computer-science/courses/postgraduate-taught |
By incorporating pre-writing activities such as collaborative brainstorming, choice of personally meaningful topics, strategy instruction in the stages of composing, drafting, revising, and editing, multiple drafts and peer-group editing, the instruction takes into consideration what writers do as they write.
Aristotle distinguishes between the genres. Academic survey texts on medical history do not usually have much in the way of visual components to demonstrate textual material.
In very general terms, piyutim sung by Sephardic and Eastern communities follow the makam modal system, while Ashkenazi melodies are governed by shtaygers. For example, the copla telling of the birth of the patriarch Abraham, "El nacimiento de Abraham" ("Cuando el rey Nimrod"), draws on midrashic legends explaining why Abraham chose to believe in one God and recounts his struggle with Nimrod, mythical builder of the Tower of Babel and aspiring leader of the world. The designers of the Python language have also published a style guide for Python code.
They also tend to over-generalize the rules for stylistic features when acquiring new discourse structures. It was, nevertheless, a major force in assisting births, comforting the sick, and attending the dying.
It should then become apparent that the process approach to writing instruction can only be effective if these two components are taken into consideration.
The underscore is just a regular Python variable, but by convention we can use it to indicate that we will not use its value.
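As a minimal illustration of that convention (the names and values here are invented):

    # "_" is an ordinary variable; by convention it names values we ignore.
    for _ in range(3):
        print("Hello!")        # the loop counter itself is never used

    pairs = [("cat", 3), ("dog", 3), ("fish", 4)]
    for word, _ in pairs:
        print(word)            # we care about the word, not the number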
The authors discuss the notion of mental representation as a writing strategy. The song’s riff and techno dance rhythm are then introduced.
This behavior means you can choose variable names without being concerned about collisions with names used in your other function definitions. Stuart Hall, Meaghan Morris, Tony Bennett and Simon During are some of the important advocates of a "Cultural Studies" that seeks to displace the traditional model of literary studies.
Specifically, the effectiveness of feedback may depend on the level of students' motivation, their current language level, their cognitive style, the clarity of the feedback given, the way the feedback is used, and the attitudes of students toward their teacher and the class (Ferris; Goldstein; Omaggio Hadley). The zip() function takes the items of two or more sequences and "zips" them together into a single list of tuples.
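A minimal sketch of that behaviour, with invented example sequences (note that in Python 3 zip() returns an iterator, so we wrap it in list() to materialise the tuples):

    words = ["the", "cat", "sat"]
    tags = ["DET", "NOUN", "VERB"]

    # zip() pairs up the items of the two sequences, element by element.
    pairs = list(zip(words, tags))
    print(pairs)   # [('the', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB')]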
The second program uses a built-in function, and constitutes programming at a more abstract level; the resulting code is more declarative. Piyutim are sung as zemirot (table hymns) by the family on Shabbat and festivals as well as at family lifecycle celebrations, and are thus regarded as both a paraliturgical and a liturgical genre.
A common complaint among ESL students at university is that they have difficulty meeting native speakers and getting to know them. He also advocates that ESL instructors make explicit use of thinking or procedural-facilitation prompts and student self-evaluation as the optimal mode of assessment.
This is in line with what is happening in other branches of Israeli music, which have thrived in the legitimation of eastern musical influences. According to Flood, the production team worked to achieve a "sense of space" in the record's sound by layering all the elements of the arrangements and giving them places in the frequency spectrum where they did not interfere with each other, through continual experimenting and re-working of the song arrangements.
Repeating a previous mistake, or backsliding, is a common occurrence in L2 writing. Furthermore, learners may be uncertain about what to do with various suggestions and how to incorporate them into their own revision processes.
But such an account says little about why certain linguistic forms transfer and others do not. For example, at the beginning of each of my ESL writing classes, I often ask students to fill out a personal information form to determine their needs and interests when planning my course.
Using the various poststructuralist and postmodern theories that often draw on disciplines other than the literary—linguistic, anthropological, psychoanalytic, and philosophical—for their primary insights, literary theory has become an interdisciplinary body of cultural theory.
Python Coding Style: When writing programs you make many subtle choices about names, spacing, comments, and so on. Combining Different Sequence Types: Let's combine our knowledge of these three sequence types, together with list comprehensions, to perform the task of sorting the words in a string by their length.
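A minimal sketch of that task, using an invented example sentence: decorate each word with its length, sort the resulting tuples (which compare by length first), then strip the lengths back off:

    sentence = "take care of the sense and the sounds will take care of themselves"

    # Decorate each word with its length, sort, then discard the lengths.
    wordlens = [(len(word), word) for word in sentence.split()]
    wordlens.sort()                    # tuples sort by length, then alphabetically
    print(" ".join(word for (_, word) in wordlens))

Note the underscore reappearing here to discard the length once it has done its job in the sort.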
Her latest two disks in Yiddish, "The Well" with the Klezmatics and "Lemele" [Little Lamb], both feature melodies which Chava herself has composed.
I argue that the process approach to instruction, with its emphasis on the writing process, meaning making, invention and multiple drafts (Raimes), is only appropriate for second language learners if they are both able to get sufficient feedback with regard to their errors in writing, and proficient enough in the language to implement revision strategies.
If you enjoy reading tidbits about language, join me in my weekly visits to Philologos.
Writing Structured Programs: By now you will have a sense of the capabilities of the Python programming language for processing natural language.
Inquiry to Academic Writing helps students understand academic culture and its ways of reading.
The Online Writing Lab (OWL) at Purdue University houses writing resources and instructional material, and we provide these as a free service of the Writing Lab at Purdue. | https://hoxuciryfezy.turnonepoundintoonemillion.com/creative-writing-four-genres-in-brief-table-of-contents-8824ny.html
By Ian Kelleher
I have a unique and marvelous job. I teach science to high schoolers every day, but I am also “Chair of Research” for my school, charged with answering this question: “How do we use the science of teaching and learning to improve every child’s whole school experience?” The days of COVID have been difficult, but a fascinating challenge – how can the science of learning help us in this unique time?
Like many, I have felt like a first-year teacher all over again, but I have found a steady rhythm. Ten months on, however, it is time to stretch my practice beyond the “emergency remote teaching” described by Professor Paul Kirschner. It is nerve-wracking to try something new when I feel so stretched, but my students need it – and there are simple things I can do to improve their learning experience during this tumultuous period. Busting the three myths described below is a manageable next step that should have a significant impact for my students, and yours as well.
Myth 1: Children of today are digital natives, and require little help with technology.
The idea that children today, because they have grown up with so much technology, are tech-savvy “digital natives” has been tested by research and found to be a myth.1 Students require a learning period for new technology like they do for any other skill. Learning is hindered when we design lessons assuming a technological facility or knowledge of the digital landscape that just isn’t as robust as we think. Compounding this, we have noticed that the stress and weirdness of distance learning during COVID has caused many children to struggle with executive functioning tasks, an observation that appears to align with research on the impact of stress2. This means students need even more time and guidance to pick up on new skills than they normally would. To alleviate this, choose a small, well-curated list of EdTech tools; avoid magpie syndrome. Each time you use a new tool for the first time, run through a simple, quick, low-stakes example project, all the way from beginning to submitting, so that your students can learn the steps. We often do a short project that is fun, goofy, or makes students smile – it is all about learning the new tool.
Myth 2: The best time to give students feedback is when you return their graded work.
Providing students with good feedback is hard right now, but so vital during distance learning. There are so many formal and informal moments during a typical in-person school week where communication happens – going from table to table during class, or bumping into a student in the hallway, for example – and without these moments, we need to consciously fill the gap. Importantly, students need a chance to act on the feedback they receive right away. When you give a grade as well as the feedback, students focus on the grade and not what you wrote. That is a waste of your precious time!
The solution is to design assignments where students have a chance to receive feedback and act on it before their final work is submitted. Your feedback should be brief, frequent, and targeted. Don’t try to fix everything at once; you will only overload the student. Put 10-15 minutes of class time aside for students to find and act on their feedback. Think about including a one-minute video or audio piece of feedback occasionally (many LMS’s, or learning management systems, allow this) as this helps boost the emotional climate of how students receive feedback. Include just a few words of feedback when you return graded work.
Myth 3: Student-led inquiry-based learning is a good way to introduce core ideas.
One of the most important papers that all educators should read is Kirschner, Sweller and Clark’s “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching.”3 Even in distance learning, direct instruction is a much more effective way to learn fundamental knowledge and skills – though this needs to be done well with high quality questioning, formative assessment, multiple modalities of instruction, and should include concrete examples, analogies, and stories to explain abstract concepts. Use Loom or Screencastify to record simple videos of you presenting material. Do this even if you are on a hybrid schedule; it buys you time to pause and think in class, and students benefit from seeing your mask-free face over video.
Projects have their place, though. Once core knowledge and skills are built, assign projects that challenge students to transfer these to a new context. Doing this helps build usable, durable, flexible knowledge in long-term memory, and can be a great way to include nuggets that boost motivation – such as elements of choice, novelty, or real-life relevance, as well as allowing social-connection – and to incorporate small-group check-ins with students. Check out this article4 for more information about how to do project-based learning well during distance learning.
Evaluating your own distance learning practices can feel like a daunting prospect when you are already navigating uncharted territory. But tackling these three common education misconceptions – and how they might be impacting your remote or hybrid teaching – can be a manageable first step in improving the distance learning experience of your students.
References:
1 Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, 135–142. https://doi.org/10.1016/j.tate.2017.06.001
2 Shields, G. S., Sazma, M. A., & Yonelinas, A. P. (2016). The effects of acute stress on core executive functions: A meta-analysis and comparison with cortisol. Neuroscience & Biobehavioral Reviews, 68, 651–668. https://doi.org/10.1016/j.neubiorev.2016.06.038
3 Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
4 Whitman, G., Kelleher, I. (2020, September 18). Your checklist for virtual project-based learning. Edutopia. http://www.edutopia.org/article/your-checklist-virtual-project-based-learning
Dr. Ian Kelleher is a science teacher at St. Andrew’s and The Dreyfuss Chair of Research for the Center for Transformative Teaching and Learning. He is the co-author of Neuroteach: Brain Science and the Future of Education. His latest project is Neuroteach Global – online professional development about using the science of learning in the classroom. Twitter: @ijkelleher. | https://k-12talk.com/2021/01/21/three-myths-of-distance-learning/ |
By Naima Charlier
At Nord Anglia International School, Hong Kong (NAISHK) we were delighted to win the strategic leadership award at the International School Awards for our approach to online learning. We called this our Virtual School Experience (VSE) and it has irrevocably changed a multitude of aspects of our school and left us with some deep questions to consider for the future.
There were many elements that went into our VSE but some of the key points were replicating the following areas of our physical school, virtually:
- Culture and ethos – ensuring that our school was collaborative, engaging and driven by the desire to ensure learning was purposeful and effective.
- Balanced – developing guidance and frameworks to ensure that teaching and learning was delivered in a variety of ways and that teachers developed strong new pedagogy to ensure optimal virtual learning.
- EdTech – using this as a support not a driver. We utilised a wide array of tools not just to teach the students but to create an environment supportive of rapid peer learning so that teachers could develop and share best practice.
- Safe – our initial work was on ensuring we had strong, evidenced-based guidance on how to keep all stakeholders safe while taking the school online. This included research into platform security, guidance on acceptable use and dress as well as writing and teaching an e-citizenship curriculum. Our IT team were vitally important as we tried to stay one step ahead of the students and the technology.
The steps to VSE
Our journey towards being in this position started back in the autumn of 2019 when, due to some social unrest in Hong Kong, we were one of the first schools to face virtual schooling. In these pre-COVID months we experienced situations where we had to close the school early and so were already looking at digital solutions to support teaching and learning if the students could not be in the physical building. The advent of school closure due to COVID-19 happened early in 2020 and, as we found our way as the ‘new normal’ unfolded, we all sought ways to mitigate the impact on learning.
Although research existed on blended learning, it was mainly aimed at older students, and we found little information about what might be the best way to approach online learning from 3 to 18. Devising a framework to model the key elements that made up our virtual school was an important early step. This framework supported us when structuring our offer as we constantly adjusted it in response to feedback, and gave us a common language to discuss what we were trying to achieve. Every element of our VSE involved multiple conversations and a great deal of trial and error to find the balance that worked for as many students as possible.
Embracing new pedagogies
As we start to come back to face-to-face learning, we are left considering how we can embrace our new pedagogies and blend the best elements into our physical building. Teachers are considering the benefits of digital textbooks, experimenting with more video and audio feedback to support Assessment for Learning, and continuing to utilise Microsoft Teams chat to stay connected and collaborate on many aspects of the school. We often say that the biggest expert in the room is the room. Our ‘room’ has now become limitless with opportunities to collaborate with colleagues in all our campuses on a global level.
At NAISHK, we are in the position where ‘snow days’ (or in our case, typhoon days) are a thing of the past. With our VSE we can seamlessly swing into virtual learning with minimal notice, ensuring the day is filled with all the elements that are so important to engaging teaching and successful learning. No matter what is happening in our city, all our learners from 3 to 18 can continue to attend a vibrant and exciting school day.
Considerations for a successful VSE
- Make the development of safeguarding and privacy guidelines a priority and adjust your e-citizenship curriculum so it matches the new systems. Actively teaching good e-safety skills to all ages is vital.
- Develop mechanisms to monitor, collate and record learning across all aspects of the school. Make sure quality assurance of these is part of the role of your Middle and Senior leaders, just as it would be in a school building.
- Consider support mechanisms for both learners and parents. We created guides and used surveys, phone calls and Microsoft Teams meetings to get feedback on what we were doing and to check it was working for everyone. Our pastoral team developed systems that ensured every child was registered as ‘engaged’ or identified as someone to support.
- Screen time is a ‘hot topic’ and the impact on well-being as well as optimal learning is an important one. Involve stakeholders in the discussions about this issue and develop guidelines that are age- and subject-specific.
- Start with your core philosophy and regularly loop back to this to monitor that your virtual school is staying true to your values, vision and mission.
- Staff well-being is crucial, as is recognising that teaching in a VSE coupled with rapid change is hard. Acknowledging that can be just as important as the steps you take to alleviate the difficulties. We found being as flexible as possible about where people worked was helpful.
Watch this presentation from the World Education Summit to learn more about the NAISHK Virtual School Experience here.
Naima Charlier is Director of Teaching and Learning at Nord Anglia International School, Hong Kong Connect with her directly on LinkedIn.
Subscribe to ISL Magazine for more! | https://iscresearch.com/embracing-new-pedagogies/ |
Exorcism is a multinational doom metal band hailing from Basel, Switzerland, portraying elements of the darker side of life and known in the doom metal scene as the masters of Doom.
The band is fronted by Csaba Zvekan, the main protagonist behind the darker-side melodies that recast the depths of the past in a new light.
The reincarnation of Doom, EXORCISM is a congregation that you can only experience through swamp-soaked groove. Some people have reported sightings or even strange phenomena after listening to the Hellenic power of EXORCISM. | http://musicdeal.fr/exorcism/
Social, Ethical, and Legal Implications
Hello there,
This assignment combines all the previous weeks’ assignments. Please ensure you incorporate the instructor’s feedback, which mostly involves incorporating Starbucks into all weeks. Weeks 3 and 4 demonstrated an understanding of the topic, but that understanding was not applied to the overall Starbucks marketing plan theme. Please fix that, as I left it out. I have attached the following.
1)The marketing plan outline with the explanation of all weeks and the executive summary in more detail
2) I have attached Weeks 2, 3, 4, along with the instructor’s feedback for you to incorporate into the paper as needed from previous weeks. Week 5 is good to go.
3) I have attached the combined weeks 2, 3, 4, and 5, but the instructor feedback has not been incorporated. Please do so. All the weeks combined are the start of Week 6. So week 6 assignment just needs to be added to the combined document.
Please do not hesitate to let me know if there is any confusion.
Kotler, P. T., & Keller, K. L. (2016). Marketing management (15th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.
Cite 3 new sources for this week.
Please ensure citations are in alphabetical order. If you have any questions on clarity at all, please do not hesitate to ask.
Purpose of Assignment
The purpose of this assignment is to help students think through the importance of social, legal, and ethical issues that may arise with their product or service and the implications of decisions made within those frameworks. It is designed to help the learners understand ethical and legal issues related to marketing practices. This knowledge helps to prevent such issues when developing the marketing strategies in their marketing plan. The executive overview of the marketing plan is not a summary and conclusion, but an overview of what the plan entails and what it does not address.
Assignment Steps
Note: the Social, Ethical, and Legal Implications assignment is part of the total marketing plan as outlined in the grading guide. It is not a separate paper.
Producing and marketing a product without regard to ethical, legal, and social considerations is detrimental to the overall success of any company.
Assess in a maximum of 700 words the ethical, legal, and social issues affecting your product or service in two markets: the United States and one international market. Domestic market generally means the market where the company headquarters are located. If you choose a domestic market that is not the U.S., then your other market is required to be the U.S. marketplace. This will be added to the Target Market section of your Marketing Plan.
Include the following:
- Develop a process to monitor and control marketing performance. This process could be a flowchart but a flowchart is not required (flowcharts do not count towards your word count requirement).
Formulate a maximum 350-word executive summary including at a minimum the following elements to include in your marketing plan:
- Required executive summary elements:
- Strategic Objectives
- Products or Services
- Optional executive summary elements:
- Resources Needed
- Projected Outcomes
Integrate the previous weeks’ sections, social, ethical, and legal implications, and executive summary into the marketing plan. Incorporate corrections and suggestions from the instructor’s weekly feedback. The marketing plan should be a minimum of 3,850 words and include the following:
- Incorporate Understanding Target Markets (Week 2)
- Incorporate Promotion and the Product Life Cycle (Week 3)
- Incorporate Price and Channel Strategy (Week 4)
- Incorporate Marketing Communication and Brand Strategy (Week 5)
- Incorporate Executive Summary, Legal, Social and Ethical Considerations (Week 6)
Cite a minimum of three peer-reviewed references.
Include all peer-reviewed references from the previous weeks’ individual assignments in your marketing plan.
Format your assignment consistent with APA guidelines.
| https://essaysprompt.com/social-ethical-and-legal-implications-8/
Windsor residents will have say in new school boundary
Windsor residents will soon have opportunities to weigh in on attendance area boundaries for Thompson School District's new High Plains School, which will welcome K-8 students next fall.
Community forums will be held at 6 p.m. April 29 and June 10 at Mountain View High School in east Loveland and at 6 p.m. May 18 at the Thompson School District Administrative Building in Loveland. The meetings will allow community feedback on boundary options for the new school, which will be located at 4255 Buffalo Mountain Drive on the east side of Loveland. A groundbreaking ceremony will take place at 5 p.m. on April 22.
Dr. Dan Maas, chief operations officer for the Thompson School District, said the school will cater to neighborhoods located in the eastern portion of the district, including Belmont Ridge, Highland Ridge, Steeplechase, High Pointe Estates and Highland Meadows.
"It gives certain residents in our district access to a school much closer to their homes," he said. "We have an obligation to serve the communities within our boundaries and this is an important step for the district to serve students along the I-25 corridor and along our eastern borders."
Maas said there are currently four attendance area boundary options that will be presented to community members for feedback and subsequently to the school board for a final decision. Yield ratios, which are based on the type of housing that exists inside a particular boundary area, will be used to project the number of students who will have access to the new school.
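As a purely hypothetical illustration of how a yield-ratio projection works (the housing categories, unit counts, and ratios below are invented, not district figures):

    # projected students = housing units in the boundary x students per unit
    housing = {
        "single_family": (420, 0.45),   # (units, assumed yield ratio)
        "townhome":      (180, 0.30),
        "apartment":     (250, 0.15),
    }

    projected = sum(units * ratio for units, ratio in housing.values())
    print(f"projected enrollment: about {projected:.0f} students")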
Michael Hausmann, public information officer for the Thompson School District, said High Plains School has been designed for 550 students and will offer an early childhood program in addition to being a non-traditional K-8 school.
"The Board of Education recently approved an authorization to go ahead and build the school as originally designed, which will make it unique for the district," he said.
Hausmann said the K-8 designation is not the only thing that will make the school unique. High Plains School will incorporate elements of modern design and a focus on technology that will make it stand out. The building has been designed to meet LEED gold certification, the second-highest standard for energy efficiency and utilization, and will utilize geothermal climate control to help moderate temperatures.
Maas said the school will also observe a 1-to-1 computer-to-student ratio and will feature a number of spaces intended for collaborative learning and project-based learning.
"The building itself is intended to be a learning tool, not just an enclosure for classrooms," he said. "Even mechanical closets will have some visibility in the hallways so that students can see how the building is working, as opposed to covering those kinds of things up."
Between the location and the bells and whistles, High Plains School is poised to make a long-term impact on students living along the district's eastern border, Hausmann said.
"This is an opportunity for us to construct a school building in an area that has a distinct need for it both right now and in the future," he said. | https://www.coloradoan.com/story/news/local/windsor/2015/04/03/windsor-high-plains-school/25236081/ |
In 2016 Good Practice Committee ran an online survey of third-year students and an online survey of graduate students, particularly exploring issues of ‘good practice’. Some of the issues raised (such as progression of female students in academic mathematics) were then explored in more detail in focus groups.
Feedback received was considered by Teaching Committee, Graduate Studies Committee, and others as applicable. The following summarises some key actions which have been informed by the feedback.
Note: these 'good practice' consultations supplement a wide range of other student consultation which takes place via committees and other mechanisms such as lecture and class questionnaires and national surveys.
Undergraduate students
1) Prior to university, many students had had some additional impetus to study maths, and felt that this was helpful in overcoming societal preconceptions against maths. For example, their family or a particular teacher had inspired them with a love of maths; or they had been at an all-female school where they felt that maths was seen as more the ‘norm’ for women.
We have further developed our extensive outreach programme aimed at encouraging women to study mathematics post-16. We are now developing online materials with partners in Cambridge, aimed at students in year 10/11 to encourage them to study Further Maths A-level. A pilot ‘module’ on complex numbers will be published online in summer 2017.
2) There was concern that women in the UK might be less likely to take Maths/Further Maths under government funding reforms which restrict funding in state schools to three A-levels per student. The female students anticipated that there might be a particular reluctance amongst women to study both Maths and Further Maths to A-level if they were restricted to three A-levels in total.
In response to this feedback, and our own concerns on this issue, the Head of Department and Director of Undergraduate Studies met with Nick Gibb, Minister for Schools, in November 2016. They argued that Further Mathematics should be a special case: schools should receive funding for students taking this as a fourth A-level. We hope that this recommendation will be reflected in the report on a Review of post-16 Mathematics which is due to be published by the government soon.
3) The students noted a perception amongst their peers at school that mathematics did not lead to good career opportunities.
More information on career opportunities has been incorporated into outreach materials.
4) There was feedback on a number of pedagogical issues.
This has been incorporated into guidance which has been sent to all maths tutors:
Extract from Guidance to Tutors - Oct 2016.pdf
5) It was felt that specific efforts to integrate the students within a class, and to foster an atmosphere where students feel able to ask questions, may be helpful.
This has informed our proposed new format for fourth-year classes. We plan that each class will be split into two ‘workgroups’ for the first session. These sessions will allow the students to better get to know the teaching assistant and each other, and they will be encouraged to work with other students in the workgroup for the rest of the term.
6) There was some lack of awareness that graduate research students could receive a stipend (like a salary), and that they did not necessarily have to incur further debt.
We have tried to make this as clear as possible in all our events and literature, and will incorporate this into a new ‘Careers’ webpage for both undergraduate and graduate students.
Graduate students
1) When applying, most participants had had Skype interviews, and a number agreed that they were therefore not aware of the ‘atmosphere’ here before arriving, but were ‘pleasantly surprised’ when they arrived. They suggested having a ‘virtual open day’ and/or arranging opportunities to meet with current students before arrival.
A virtual open day was run in December 2016, and we have continued efforts to ensure that those attending Oxford for interview meet with current students over lunch and other events. In 2015-16, 6 interviewees were taken to lunch; in 2016-17, 51 interviewees were taken to lunch.
2) Students felt that there was not much mixing of Research Groups within the AWB, and would welcome more social events.
The department continues to support the weekly ‘Happy Hour’ which is organised by the graduate students themselves (the department pays the annual license fee, provides facilities management support and some support with coordination if required). ‘Coffee club’ has been introduced: all staff and graduate students are invited to morning coffee on a regular basis. Graduate students have set up ‘welfare brunches’, inviting all students to attend.
3) A number of female students were either unaware of the welcome lunch for female graduate students and postdocs held at the start of Michaelmas term, or had been unable to attend it. They suggested it should be run later in the year, as there was a lot to absorb in the first few weeks in Oxford.
The welcome lunch in 2016 was moved to Thursday of week 6 of Michaelmas Term.
4) Students noted that supervisors could make a real difference, and encouraged more training for supervisors, including in how to deal with sensitive issues.
In March 2017 the University Counselling Service ran a session in the department on ‘Supporting students in distress’. Academic and support staff were invited to attend, and a number did so. In May the Service ran a session on ‘Managing Expectations’ for both students and academic staff as part of the ‘Fridays@4’ seminar series. The department will continue to explore ways of supporting supervisor training.
5) Students were critical of the level of pay of Teaching Assistants (TAs) in classes, and also the limitations of the role of TA. It was felt that TAs only marked work and didn’t teach, which was frustrating. It would be more fulfilling if TAs could teach more, and their teaching could be informed by what they had learned in the marking. A number noted that they would really enjoy teaching, but did not enjoy the TA role.
Pay for TAs has since been increased by 10%, and a new format of classes is being proposed for fourth-year/MSc teaching. This should give experienced TAs somewhat more autonomy, more opportunities for teaching/interaction with students, and a lower marking load.
6) Many had felt reluctant to approach faculty/postdocs to ask them to be their ‘mentor’ under the recently introduced mentoring scheme.
We have developed clearer guidance for mentors and mentees, and aim to make oversight of mentoring a formal responsibility of Research Groups: a faculty member and DPhil student within each Research Group will have responsibility for liaising to find suitable mentors for graduate students in the Group. The mentoring scheme will then be re-launched.
All students
Women in both groups noted that the need to be on a succession of fixed-term contracts and to move around geographically in the early years of an academic career could be particularly off-putting, as this militated against becoming ‘settled’ and having a family. There were concerns about the challenges which this poses to personal life, and about opportunities for progression from ‘postdoc’ to a more stable ‘academic’ career.
This is a significant issue in international academia. The department continues to offer ‘Hooke’ and ‘Titchmarsh’ fellowships from department funds (aiming for 6-8 per year in steady state), designed to offer greater opportunities for career progression: the researcher is not tied to a particular research project and is free to conduct their own research programme. The department continues to pursue further opportunities to offer such fellowships (via philanthropy, in partnership with colleges, and via fee income from a new taught programme). All such posts are advertised as being available on a part-time or job share basis.
Staff consultations
In 2016 Good Practice Committee ran an online survey of all staff, particularly exploring issues of ‘good practice’. Some of the issues raised were then explored in more detail in focus groups.
Feedback received was considered by Good Practice Committee, senior managers within the Department, and others as applicable. The following summarises some key actions which have been informed by the feedback.
Academic and research staff
1) The overwhelming view was that heavy workload is by far the most serious difficulty facing academic staff, becoming a critical problem for those with small children. Women appeared to be disproportionately affected.
Work is ongoing to undertake a comprehensive review of the workload (of academic staff) across the department as a whole, with the aim of reducing and streamlining the workload overall.
2) There was support for the idea that the department should have a workload model which recognises all duties undertaken, and should enable the department to think about people’s careers in the longer term.
Work is ongoing to develop a more comprehensive workload planning model, considering both quantitative and qualitative approaches. The aims are to better enable us to more formally recognise the full range of burdens on individuals, to support them in managing their career development, and to provide better formal recognition for external and other roles.
3) There was support for more clearly defining the role of Research Group Head, and introducing other supporting roles within Research Groups, also with defined responsibilities. These responsibilities might rotate, which could provide greater opportunities.
Department Committee has now agreed a statement on the responsibilities of Research Groups:
Research Group Responsibilities.pdf
4) It might be more helpful to have a career development review with someone relatively close in discipline – rather than necessarily with the Head of Department.
We have considered whether career development reviews for academic staff might be more effective in some cases if managed differently – for example compulsory five-yearly reviews might also be carried out by Associate Heads of Department; and/or staff could be encouraged to take up non-compulsory reviews more frequently if these could be carried out by Heads of Research Groups, or other senior staff in the relevant field. Proposals to pilot a scheme of 'Career Development Discussions' for academic staff are being considered by Good Practice Committee in Hilary term 2018.
Professional and support staff
1) Feedback indicated that most staff felt that they had good access to appropriate training to support them in undertaking their current role. However, some felt that training was not encouraged.
More training sessions have been run ‘in house’ (e.g. on supporting students in difficulties, on implicit bias); information on some training has been circulated in the new weekly bulletin. The department will continue to take opportunities to reinforce the message that training is supported/encouraged. The department will also consider mechanisms to support staff who wish to develop the skills and experience to enable them to move beyond their current role.
2) There was a near-unanimous view that the departmental system of annual Personal Development Review (PDR) needed reform, with feedback that PDR is insufficiently structured and provides insufficient guidance for participants.
New guidance has been issued, based on central University guidance. This will be reviewed again prior to PDR in summer 2018.
3) Feedback indicated that many staff did not see the process of re-grading of posts as being transparent/accessible.
The following statement has been added to the new PDR guidance. “There is no direct link between the scheme and salary, promotion, or discipline, for which there are separate University procedures. If, however, an annual review indicates that performance has been exceptional, or that the job has grown significantly in terms of the level of responsibility, information from the discussion may be used, with the consent of the individual, to inform the separate ‘Awards for Excellence’ or re-grading procedures.”
4) A number of people were unaware of the University’s mentoring scheme for support staff.
This has been re-publicised.
5) Some felt that managers could benefit from more training/support/guidance in general.
We plan to hold some ‘in-house’ training sessions for managers.
6) There was some feeling that rules on emergency leave, parental leave, attending training, etc. were unclear/inconsistent.
This feedback will be considered in detail by the Department’s HR team. | https://www.maths.ox.ac.uk/members/good-practice/consultations |
As part of an ongoing transformation of the Scientific Research and Experimental Development (SR&ED) Program, the CRA has updated the SR&ED eligibility information on our website.
The new Guidelines on the Eligibility of Work for SR&ED Tax Incentives, which have replaced the Eligibility of Work for SR&ED Investment Tax Credits Policy, provide clearer and simpler information about how SR&ED work is defined under the Income Tax Act. This will make it easier for businesses to assess whether their work is eligible for SR&ED tax incentives at the outset, before they apply. The new guidelines incorporate feedback we received through extensive consultations with stakeholders.
We are here to help
Whether this is your first time filing a claim and you are looking for guidance, or you’re returning and would like a consultation, we offer services and tools to help you determine whether your work qualifies for SR&ED tax incentives before you submit a claim.
Stay connected
To receive updates on what is new at the Canada Revenue Agency (CRA), you can: | https://www.fortnelsonchamber.com/news/details/cra-news-release-new-sr-ed-eligibility-guidelines-8-17-2021 |
Develop a multi-pronged approach to enhance student professional planning and development within undergraduate programs in the College of Biological Science (CBS). Here, professional planning refers to deliberate consideration of one’s educational goals and development of attributes and experiences that align with post-graduate aspirations.
Background
Over the years CBS has focused on providing our students with the disciplinary content and technical skills needed to be successful in their field of study. While the disciplinary content is key we have come to recognize the value and need to incorporate an emphasis on transferable skills, program planned self-reflection, and preparation for life after graduation.
CBS recognizes the value of professional planning to the students, the discipline and the degree. As such CBS has explicitly stated professional planning and preparation as a learning outcome of the B.Sc. and therefore for all majors offered by CBS.
B.Sc. Learning Outcome A.3. Professional and Ethical Behaviour
- Demonstrate personal and professional integrity by respectfully considering diverse points of view and the intellectual contributions of others, and by demonstrating a commitment to honesty and equity, and awareness of sustainability, in scientific practice and society at large.
- Collaborate effectively as part of a team by demonstrating mutual respect, leadership, and an ability to set goals and manage tasks and timelines.
- Plan for professional growth and personal development within and beyond the undergraduate program.
Our graduates have successfully transitioned into professional programs and graduate studies, as well as into careers in not-for-profit and for-profit industries, and are successful business owners and entrepreneurs. Even so, CBS recognizes it must be more explicit and purposeful in educating and preparing our students for a wide diversity of careers.
Strategy
In addition to the development of discipline-specific knowledge and skills, a strategy for improving student outcomes and satisfaction requires guided opportunities for educational and professional development as part of the undergraduate program.
Key elements of the CBS Professional Planning and Development Strategy include:
- Identify a set of transferable, professionally oriented skills and attributes, which are introduced and developed throughout the undergraduate program.
- Adopt a digital platform for students to reflect on and communicate their skills and accomplishments.
- Incorporate opportunities within CBS programs for students to reflect on their own attributes and accomplishments, plan their academic path, and align it with their post-graduate goals.
- Offer opportunities, within and outside of the curriculum, to assist in meeting post-graduate goals. This may include promoting awareness of existing resources and facilitating special events, workshops or tutorials.
- Ensure at least one experiential/leadership activity is available within all CBS majors.
- Adopt a system for communicating with and tracking student outcomes throughout the degree and after graduation.
- Develop and implement a system for soliciting feedback from students and their employers about their preparation for their chosen post-graduate path.
A number of opportunities and initiatives helped CBS define its career-ready strategy:
To help agencies improve their cloud services contracts, the General Services Administration's Secure Cloud Portfolio division has requested feedback from industry on agency attempts to enforce requirements via contract language. The request for information (RFI) asks for specific examples of both effective and ineffective contract language as well as suggestions on how to incorporate cloud services into different contract vehicles for direct solicitations, resellers and system integrators.
The information gathered in this RFI will be used to identify the examples of contract language that agencies should and should not use in their solicitations. These examples will be used to generate further or new guidance and education to agencies.
As a 3PAO, EmeSec has vast experience supporting cloud providers in navigating the accreditation process and continually meeting and maintaining the requirements for ongoing compliance. We support GSA’s efforts to create more inclusive and fair processes, especially for small and mid-sized businesses (SMBs). Small businesses often bear a greater burden to demonstrate compliance and often need to strain their resources to meet information security mandates while still participating in competitive opportunities.
Our advice to the FedRAMP office is to look for feedback and suggested proposals that would alleviate the burden on SMBs and offer more applicable teaming protocols or fair competitive process for contracts. | https://www.emesec.net/insight-posts/2017/12/11/zqp3h6d9h6e5hthhxmam7dopgb31yi |
All rights not expressly granted to the user of project files, templates and sounds are reserved.
All rights not expressly granted to the user are reserved.
This license is granted for a single user only (and is given on a worldwide basis).
Q: Can I incorporate sound elements of the templates/project files into my own tracks/compositions, if I completely change the melodies and chord progressions?
A: Yes, this is possible, but you need to change the melodies and chord progressions by more than 50% relative to the original template files.
You may not directly or indirectly license, sub-license, sell or resell the Item, or redistribute the work alone (even for free).
Get in touch via E-Mail for further questions / licensing offers.
If you're looking to obtain a Commercial License for a product, please get in touch via E-Mail. | https://www.productionmusiclive.com/pages/pml-licensing-agreement |
Last month, CET Jordan hosted the inaugural School of Record Assessment Visit as part of our partnership with the University of Minnesota. As School of Record, the Learning Abroad Center at UMN assesses one CET program per year. Assessments review all elements of a program: pre-departure materials, on-site orientation, academics, housing, student life, health & wellness, and program infrastructure. Assessors have full access to CET staff, faculty, and students over the course of the visit and submit a report of recommendations and commendations for CET to review, respond to, and incorporate into the program in the future.
CET Jordan has evolved over the course of the pandemic, adopting a new center and new models of local integration, and CET was eager to receive feedback on these elements in particular from the assessors as we continue to build back better. The final report from the assessment, including assessor recommendations and CET’s response to the recommendations, will be available in the spring, but below are a few of the reflections from the draft report:
- “CET Jordan does an outstanding job upholding rigorous academic standards and providing a rich, challenging, and context-appropriate curriculum, while simultaneously providing a plethora of support services to enable students to meet these ambitious standards.”
- “The development and implementation of the Neighbors program in response to COVID restrictions has been very successful and seems to have a positive impact on participants and their Jordanian Neighbors.”
- “The array of extra-curricular options is impressive and exposes students to the history, culture, and landscape of Jordan.”
Members of the CET Jordan assessment team included: | https://cetacademicprograms.com/cet-jordan-program-review/ |
I’ve always admired my instrumentalist friends’ rhythmic ability. They seem to have a built-in sense of maintaining the tempo through dicey passages, which I assume comes from their experience in ensembles. As young pianists, we tend to slow down or stop when we get stuck, without the immediate feedback that’s inherent in an ensemble (i.e., nobody else is slowing down and you’re no longer with the group), so it’s easy to be unaware that you’re not counting correctly. Yet rhythmic vitality is as important as playing the right notes!
Since it’s nice to have a change of pace in the spring and summer, it seems that a duet boot camp might be fun for the students and a chance to sneak in work on new skills: counting, collaboration and listening, sight-reading, and recording (so that they can create accompaniment tracks for themselves and others). By using music that’s a level or two below playing level, students can also reinforce technique and musical concepts learned over the past year.
I love the Norton Microjazz duets – they come with a CD accompaniment track recorded as an ensemble; a hit with the kids! Solo beginner pieces in a number of series (e.g., Waxman, Faber, etc.) offer an opportunity for slightly older students to test their composition and keyboard harmony skills by creating secondo parts and playing with younger students. It would be great to get students who are learning other instruments to play melodies with solo piano pieces, and help piano students understand which instruments have to be transposed and can’t play from piano score.
What are your favorite duet series and stories? | https://devonshirepiano.com/tag/tempo/ |
The current version of the best-selling textbook on concrete structural design and analysis, Structural Concrete: Theory and Design, is now available as a PDF. This book presents the most up-to-date knowledge along with a concise and clear explanation in an approachable manner. This sixth edition has recently been revised to incorporate the most recent version of the ACI 318-14 code, and it places an emphasis on having a conceptual understanding of the topic. It also helps engineering students build their knowledge base by presenting design methods alongside relevant standards and code. The reader will be able to grasp the real-world implementation of the industry’s best practices with the assistance of numerous examples and practice problems, in addition to explanations and insights on the substantial ACI revision. Examples utilizing SI units and US-SI conversion factors are covered in each chapter, and SI unit design tables are provided for your convenience throughout the book.
Concrete is favored as a building material in the majority of regions across the world due to its exceptional resistance to the elements and its solidity. During the casting process, rebar and steel beams are typically added in order to offer additional support for applications relating to civil engineering and structural engineering. It is becoming more usual to employ pre-cast concrete, which enables improved quality control, the use of unique admixtures, and the manufacture of creative shapes that would be too difficult to produce on site. This ebook offers comprehensive guidance on all areas of the design of reinforced concrete, including the ACI amendments that address these new techniques.
- Shear, axial loading, diagonal tension, and torsion are all important concepts to understand.
- Create retaining walls, footings, slender columns, staircases, and other architectural elements.
- Gain an understanding of the planning issues for reinforced beams, strut and tie systems, and more.
- Examine the qualities of reinforced concrete, including models for the processes of creep and shrinkage.
It is essential for students of engineering to acquire knowledge of the most recent standards and practices that are considered to be the industry’s best. The American Concrete Institute generally revises the structural concrete code once every three years. The most recent edition of Structural Concrete offers the most up-to-date information, together with clear explanations and specific recommendations. | https://www.yakibooki.com/download/structural-concrete-theory-and-design-6th-edition/ |
THE Arroyo administration’s much-touted “highest economic growth” is “among the most inequitable” in the region, according to a new report of the Asian Development Bank which also said government corruption continues to hamper development in the country.
In an 83-page study “Philippines: Critical Development Constraints,” the ADB downplayed Malacañang’s declarations of an economic take-off, saying that “while growth has picked up in recent years, with the economy in 2007 posting its highest growth of 7.3 percent in the last three decades, both public and private investment remain sluggish and their share in gross domestic product has continued to decline, raising the question of whether the current economic momentum can be sustained.”
“In per capita terms, the growth was even less favorable,” said the ADB, pointing out from 1961-2006, “per capita gross GDP grew 1.4 percent annually compared with 3.6 percent in Indonesia, 3.9 percent in Malaysia, and 4.5 percent in Thailand.”
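To see what these seemingly small annual differences compound to over the 45 years from 1961 to 2006 (a back-of-the-envelope calculation, not a figure from the report):

$$ 1.014^{45} \approx 1.9 \qquad \text{versus} \qquad 1.045^{45} \approx 7.2 $$

That is, Thai per capita incomes multiplied roughly seven-fold over the period, against a near-doubling in the Philippines.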
The low per capita GDP growth has resulted in a slow pace of poverty reduction and high income inequality.
The government yesterday reported that 26.9 percent of families in 2006 were below the official poverty threshold.
“In 2003, about 25 percent of Philippine families and 30 percent of the population were deemed poor and, in 2006, the Gini coefficient of per capita income – at slightly over 0.45 – was among the highest in Southeast Asia,” said the ADB.
The Gini coefficient measures inequality of income or wealth distribution.
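For readers unfamiliar with the measure, one standard way of writing it (not given in the ADB report) is as the mean absolute difference between all pairs of incomes, scaled by the mean:

$$ G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n}\lvert x_i - x_j\rvert}{2n^{2}\bar{x}} $$

where $x_i$ is the income of person $i$, $n$ the population size and $\bar{x}$ mean income. $G = 0$ denotes perfect equality and $G = 1$ maximal inequality, so a value just over 0.45 marks comparatively high inequality.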
The ADB study also said corruption and governance issues are among the biggest stumbling blocks to attaining long-term and equitable growth.
“Poor performance on key governance aspects, in particular, control of corruption and political stability, has eroded investor confidence,” the ADB said citing several international studies and surveys suggesting that “the Philippines’ ranking in the control of corruption and maintaining political stability has worsened.”
According to the ADB, “the Philippines has scored lowest among countries with similar per capita GDP levels on control of corruption and political stability since 1996, and on rule of law since 2002.”
STABILITY SLIPPING
The country has also “lost momentum in controlling corruption, and has allowed Vietnam and fairly soon, Indonesia, to pass it. In the case of political stability, the Philippines has slipped, particularly relative to the 1998 level,” the ADB added.
The ADB explained that political problems comparable to the 1980s, which caused a decline in foreign direct investments, have not disappeared “in sharp contrast to surges in Malaysia, Indonesia, and Thailand” that have cleaned up their governments and instituted reform measures.
The report said “instability was manifested in a number of political events in 2000, 2005-2006, and 2007 that sorely tested constitutional processes.”
“The perception of worsening corruption was found to partly explain the low investment rate in the Philippines. Poor governance was also found to translate into higher lending rates, reflective of premiums for worsening corruption, political instability, and internal conflict, acting as disincentives to private investment. A key reason for weak revenue generation – leakages in revenue collection – is rooted in persistent corruption and patronage problems,” said the report.
The report argues that governance concerns underline other critical constraints. For instance, corruption undermines tax collection and reduces resources for infrastructure development.
“Similarly, the political instability hinders investment and growth and reduces the tax base,” said the report.
TIGHT FISCAL SITUATION
The country’s fiscal situation also “remains tight despite the government making good progress to reduce deficits and aims to balance its budget in 2008.”
Much of the reduction in the fiscal deficit, the report said, has been driven by deep cuts in spending on social and economic services and the sale of government assets.
The ADB also noted “declining public and private sector investments in infrastructure” which has led to “inadequate and poor infrastructure and bottlenecks” that raised the cost of doing business in the country and eroded the competitiveness and attractiveness to both foreign and local investors.
“Per capita paved road length for the Philippines is roughly one-sixth that of Thailand and one-fourth of Malaysia,” said the report.
Poor infrastructure and weak investor confidence have led to weak flows of foreign direct investment (FDI), the report said pointing out that the Philippines only got FDIs worth $1.1 billion in 2001-2006, compared with $6.1 billion for Thailand and $3.9 billion for Malaysia.
It said the country’s lower FDI “partly explains a smaller and narrower industrial base compared to its neighbors whose share of manufacturing in GDP is 34.8 percent in Thailand and 30.6 percent in Malaysia.” The Philippines’ record is 23.5 percent.
IMPACT ON POVERTY
In a statement, ADB chief economist Ifzal Ali said “targeting and removal of the most critical constraints will lead to the highest returns for the country. It will spur investment, which in turn will lead to sustained and high growth and create more productive employment opportunities.”
“This would ensure that the fruits of development are shared by all,” Ali added.
The United Opposition said government figures showing an increase in the number of poor Filipinos are the best argument for President Arroyo to resign.
“Her misplaced economic policies and the massive corruption have led us to this situation,” said UNO president and Makati Mayor Jejomar Binay.
He said Arroyo has consistently justified her stay in power by citing the supposed gains in the economy under her term.
“Now that government figures show that she has failed to improve the lot of millions of Filipinos, and has in fact increased the number of poor Filipinos, it’s time for her to go,” he said.
The National Statistical Coordination Board said Tuesday that poverty incidence in the Philippines worsened to 32.9 percent in 2006 from 30 percent in 2003.
ONLY ARROYO ALLIES
Binay said the only ones benefiting are Arroyo cronies and business associates, and political allies “who make millions in kickbacks and juicy government contracts.”
Sen. Mar Roxas bewailed the rising incidence of poverty from 2003 to 2006 as reported by the NSCB.
He said this only shows government is busy covering up anomalies and neglecting its duty to provide relief for the public in the midst of rising prices of oil and other commodities.
The NSCB figures, he said, clearly showed a disconnect between the financial markets and the grassroots economy, and a widening gap between rich and poor. From 4 million poor families in 2003, the number went up to 4.7 million in 2006.
The National Economic and Development Authority on Wednesday said poverty worsened because of increasing prices of commodities and the insufficient income of the citizenry, with “external factors” like high oil prices playing a role. | https://tonyocruz.com/?p=779 |
Readers Question: Does the Rahn Curve support the empirical evidence? If not, why not? Can you prove that there is a relationship between the level of Government Spending and GDP growth?
The Rahn Curve suggests that there is an optimal level of government spending which maximises the rate of economic growth. Initially, higher government spending helps to improve economic performance. But, after exceeding a certain amount of government spending, government taxes and intervention diminishes economic performance and growth rates.
Diagram of Rahn Curve
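The diagram is not reproduced here. As a minimal illustrative sketch of the inverted-U shape the curve posits – growth rising with government spending up to an optimum and falling beyond it – the following assumes a simple quadratic form; the functional form and coefficients are hypothetical, not estimates from any study:

```python
# Illustrative Rahn curve: growth as an inverted-U function of
# government spending (% of GDP). Coefficients are made up purely
# to produce the textbook shape.
def growth_rate(spending_share: float) -> float:
    """Annual GDP growth (%) as a quadratic in the spending share (%)."""
    return -2.0 + 0.5 * spending_share - 0.01 * spending_share ** 2

# The quadratic peaks where its derivative 0.5 - 0.02*s is zero,
# i.e. at s = 25% of GDP under these hypothetical coefficients.
optimum = max(range(0, 101), key=growth_rate)
print(f"Illustrative optimum: {optimum}% of GDP, "
      f"growth {growth_rate(optimum):.2f}%")
```

Under these made-up numbers growth peaks at 25% of GDP; the point is the shape of the relationship, not the specific figures.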
Reasons moderate levels of government spending increase economic growth
- Governments can spend on infrastructure ignored by free-market, e.g. road, railways. This helps to reduce the cost for business and improve productivity.
- Governments can support education which helps to increase labour productivity and economic growth
- Governments can spend money to provide law and order and help improve social and political stability which is necessary for economic growth.
Why government spending can start to hold back rates of economic growth
- Higher spending requires higher tax rates. Higher tax rates can create disincentives to work and disincentives for entrepreneurs to take risks.
- Higher government spending may crowd out private sector spending. Private sector spending and investment is likely to be more efficient because of the profit incentive to be efficient, whereas government spending is more prone to inefficiency and misplaced spending due to poor information.
- Nationalisation of key industries can lead to greater inefficiency due to problems of government managing business.
- A generous welfare state can create disincentives to work.
- Government regulation of industry can create additional costs to business.
How reliable is the Rahn Curve?
- Be wary of ideological preferences. Proponents of the Rahn curve tend to use it as a tool to argue that, beyond a certain level, high levels of government spending hinder economic growth. For example, the Centre for Freedom and Prosperity points to empirical studies which suggest that the optimal level of government spending for economic growth is between 15 and 25% of GDP. That page also links to other reports and empirical studies which would be worth investigating for your paper.
- The Centre for Freedom and Prosperity has a clear ideological stance that they dislike government spending. It is not surprising they highlight studies which show results favourable to their belief in reducing the role of government.
Catch-up effects. When countries are at a certain stage of development, growth tends to be higher and government spending a smaller % of GDP. However, this does not necessarily prove the high growth is caused by low spending. A more convincing explanation would be that at certain stages of development it is easier to maintain high growth rates (e.g. China and India) because countries can easily catch up – by adopting technology from advanced economies. And it may be that these growth rates could have been even higher if the government had invested in infrastructure improvements.
Government spending can be of different types. To say that the optimal level of government spending is 20% is like plucking a figure out of the sky. It depends on what the government spends its money on. If the government is spending money on generous benefits for the unemployed, it is unlikely to be boosting growth rates. If the government is spending money on overcoming market failures such as providing education, training and infrastructure improvements, then these can be helpful in increasing growth rates. Some argue with ideological fervour that government spending is always ‘inefficient’. But this is lazy economics; some government spending can be inefficient, but there is no reason why it always has to be.
Cherry-picking of data. Trying to find a link between economic growth and government spending makes it tempting to ‘cherry-pick’ data. It is always easy to find particular examples of high growth with either high or low government spending.
Many factors affecting economic growth. Economic growth is determined by confidence, infrastructure, political stability, education, skills, the attitude of workers/entrepreneurs, technological development and many more. The point is levels of government spending is one minor factor out of very many. So it becomes very difficult to prove empirical links – there are too many factors involved.
Conclusion
I am rather dubious of the Rahn Curve; there are too many difficulties in deciding whether the level of government spending can influence the rate of economic growth. It would be more useful to examine whether particular government spending decisions have an impact on growth.
There is also another issue which can get lost – the fact that maximising growth rates is not necessarily the government’s highest priority. Issues of equity, fairness and concern for the environment are arguably more important than maximising rates of economic growth.
Real GDP per capita and government spending
Real GDP is different to economic growth. Economic growth is the rate of change of real GDP. But, it is worth having the perspective of living standards as this matter more to individuals than economic growth.
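In symbols, if $Y_t$ is real GDP in year $t$, the growth rate is

$$ g_t = \frac{Y_t - Y_{t-1}}{Y_{t-1}} \times 100\% $$

so a country can have a high level of real GDP per capita while growing slowly, and vice versa.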
- Government spending in the US is approx 35% of GDP, in UK approx 38% of GDP, in western Europe, some countries have more than 50%.
The countries with the lowest rates of real GDP tend to be the poorest.
Source: Our World in Data
Quality of life and government spending
Countries with the highest quality of life index tend to be those who have the highest levels of government spending as a % of GDP.
Developing countries with the lowest levels of quality of life tend to have lower levels of government spending as a % of GDP.
Quality of life index
Optimal Levels of Government Spending
Nevertheless, it is still worth considering the optimal levels of government spending.
- For example, increasing state benefits will at some point have a trade-off of lower economic growth.
- A current issue is the level of state pensions and the optimal age for retirement, reducing state pensions (making people work longer) will probably lead to lower government spending and higher rates of economic growth. | https://www.economicshelp.org/blog/447/economics/the-rahn-curve-economic-growth-and-level-of-spending/ |
Leading economic data released last week indicate that European economies were accelerating as 2016 drew to a close. The political uncertainties facing the region, including elections in France, Germany, and the Netherlands, along with the opening of Brexit negotiations, apparently has not dampened business and consumer optimism. Investors have been more hesitant because of the risks, but over the last month, the iShares MSCI Eurozone ETF surged to a 6% total return gain.
The good economic news was widespread. The Markit Eurozone Manufacturing Purchasing Managers’ Index registered 54.9 for December (readings above 50 indicate expansion), its highest level since April 2011, with manufacturing PMIs higher in all eurozone countries surveyed except Greece. The surge in production was accompanied by rising price pressures, Seeking Alpha reported.
The depreciation of the euro is believed to be the main factor behind these developments. Service sector growth in the eurozone was also strong in December, close to November’s 11-month high. Overall, the Composite PMI (manufacturing plus services) expanded at the quickest pace since May 2011. It is therefore not surprising that, according to the European Commission, eurozone economic confidence reached in December its highest level since March 2011.
Turning to the largest economy in the eurozone, Germany’s Manufacturing PMI registered strong growth in December, the highest in nearly three years, with improving business conditions. Germany’s service sector experienced a slight easing in business activity but remained robust. Germany’s Composite PMI reached a 5-month high.
France’s Manufacturing PMI reached a 67-month high in December; and with rising growth in the service sector over the last six consecutive months, the Composite PMI for France reached an 18-month high. French companies reported the strongest optimism since March 2012. In addition to the weaker euro, the nomination of market-friendly Francois Fillon as the conservative candidate in France’s presidential election likely contributed to this optimism.
Further south, Italy’s Manufacturing PMI registered its strongest growth in six months, which is impressive in view of the political uncertainties and banking sector difficulties in that country. Service sector activity also continued to rise but at a slower rate, which caused the Composite PMI to register slower growth in overall output compared with November’s 9-month high.
Finally, in Spain, manufacturing performance continued to strengthen in December, with the Manufacturing PMI advancing at its fastest pace since January 2016. The same was the case for new orders. Service sector growth has remained broadly at the strong pace reached in December. Spain’s Composite PMI attained a six-month high in December. Over the past year, Spain created jobs at the highest pace in the last ten years. | https://financialtribune.com/articles/world-economy/57285/eu-growth-gains-momentum |
Fitch Ratings – Frankfurt am Main – 08 Apr 2022: Fitch Ratings has affirmed Romania’s Long-Term Foreign-Currency Issuer Default Rating (IDR) at ‘BBB-’ with a Negative Outlook.
A full list of rating actions is at the end of this rating action commentary.
KEY RATING DRIVERS
Credit Fundamentals: Romania’s ‘BBB-‘ rating is underpinned by EU membership and EU capital flows that support investment and macro-stability, and GDP per capita, governance and human development indicators that are above ‘BBB’ category peers. These are balanced against larger twin budget and current-account deficits than peers, a weak record of fiscal consolidation and high budget rigidities, and a fairly high net external debtor position.
Negative Outlook: The Negative Outlook reflects continued uncertainty regarding the implementation of policies to address structural fiscal imbalances over the medium term and the impact of the Ukraine war and energy crisis on Romania’s economic, fiscal and external performance.
Heightened Short-Term Challenges: The Russian invasion of Ukraine represents a significant macro headwind, as it will heighten short-term risks to growth and inflation, and to a lesser extent, to public and external finances. Trade and export links with Russia—as well as Ukraine and Belarus—are very limited (exports to the three countries accounted for only 2.3% of the total in 2020), and unlike other countries in the region, Romania imports only a modest share of its gas from Russia (20%; the rest is domestically produced). However, steep increases in commodity prices, supply-side disruptions and weaker growth in Romania’s main trading partners (mainly the eurozone) will have significant spillovers, heightening short-term risks.
Public Investment Key Growth Driver: Fitch expects GDP growth to slow to 2.1% in 2022 (from 5.9% in 2021), primarily reflecting a slowdown in private consumption and exports. Although the government has put some measures in place to offset higher energy costs, they will likely be insufficient to prevent a loss of purchasing power. We expect public investment to provide some momentum in 2H22, in line with higher absorption of funds from the 2014-2020 Multiannual Financial Framework and the Recovery and Resilience Facility (RRF). In 2023 we expect investment dynamics to accelerate further, which combined with our assumption of normalisation of external trade and supply chains, will lift growth to 4.8%.
Inflation Higher for Longer: We forecast the harmonised index of consumer prices (HICP) will average 10% in 2022 (the highest rate since 2004), with inflation likely to reach double digits in 2Q22 and possibly 3Q22 (from 7.9% in February), reflecting significant pass-through from higher energy and commodity prices as well as second-round effects. The government has placed a cap on electricity and gas prices for households and some companies until April 2023, which should limit inflation pressures somewhat. Unlike other countries in the region, wage growth appears moderate (largely due to restraint on public wages), but pressures are likely to rise as the labour market continues to tighten and employees feel a squeeze on their living standards. Fitch expects inflation to soften to 5.5% in 2023, in large part reflecting base effects.
Central Bank’s Multiple Priorities: The National Bank of Romania (NBR) has tightened its main policy rate by 175bp since September 2021 (to 3% in April) and increased its interest rate corridor in an effort to tackle rising inflation. The authorities have also focused on exchange rate stability to limit inflation pass-through, with the currency maintaining broad stability in 1Q22 following interventions by the central bank. We expect the tightening cycle to continue but at a modest pace to prevent an even faster economic slowdown. The NBR reactivated its programme of government bond purchases in March to improve liquidity and limit volatility in domestic bond yields, a tool we expect to be used only sporadically. However, if volatility persists and macro-challenges accentuate, the NBR might find it more challenging to balance multiple policy priorities, raising the risks of a sharper adjustment on the growth or fiscal side.
Challenging Public Finance Outlook: The government overperformed its budget targets in 2021 (we estimate the accrual deficit at 7.5% of GDP versus 8% in the budget) thanks to strong revenue performance and CAPEX under-execution. This better-than-expected starting position, as well as the government’s commitment to adhere to wage and pension spending limits in 2022 (as was the case in 2021), will help the authorities manage increasing expenditure pressures stemming from rising macro-challenges. The energy cap will have a modest net cost to the budget (most of it will be financed by taxing profits of energy producers) and the authorities estimate costs for Ukrainian refugees to total at least EUR1 billion (0.4% of GDP). However, we believe there are likely to be additional demands for support measures, the scope of which will largely be dependent on access to funding. Overall, and despite expectations of solid revenue growth due to a high deflator, we expect the fiscal deficit to reach 7.1% of GDP this year, compared with the budget target of 6.3%.
The ruling coalition remains committed to medium-term fiscal consolidation and implementation of ambitious revenue measures to boost tax collection and expenditure reforms tied to the RRF. However, this will require difficult political compromises and the passage of key pension and wage bills by mid-2023, just before the busy 2024 electoral cycle begins. Romania has a very weak record of adopting structural fiscal reforms, often relying on under-execution of investment to meet deficit targets.
Broadly Stable Debt, Financing Pressures: Under our baseline scenario, strong nominal growth and a modest reduction in the primary balance will keep the public debt/GDP trajectory on a very gradual upward trend, rising from an estimated 48.9% in 2021 to 51.3% in 2023 (and compared with a ‘BBB’ median of 55%). The strong commitment to exchange rate stability somewhat moderates the potential risks from high foreign-currency exposure (50% of total debt). Financing needs will remain large in 2022 (at around 11% of GDP), requiring significant domestic and external issuances. This will heighten the risks around financing flexibility, in particular in the event of additional domestic or external shocks.
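The mechanics behind this follow from the standard debt-dynamics identity (a textbook relationship, not a formula taken from the Fitch commentary). With $d$ the debt/GDP ratio, $i$ the average nominal interest rate on the debt stock, $g$ nominal GDP growth and $pd$ the primary deficit as a share of GDP:

$$ \Delta d_t = \frac{i_t - g_t}{1 + g_t}\, d_{t-1} + pd_t $$

With nominal growth inflated by a high deflator, the first term turns negative and largely offsets the still-sizeable primary deficit, which is why the ratio drifts up only gradually.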
Large External Imbalances: Fitch expects Romania´s current account deficit (CAD) to average 6.8% of GDP in 2022-2023, down slightly from a 13-year high of 7.0% in 2021 and compared with the current ‘BBB’ median of 1%. Some of the improvement will be due to a weakening of domestic demand in 2022 (which lowers import demand), followed by our expectations of a recovery in manufacturing exports in 2023. Foreign direct investment picked up in 2021 but in conjunction with capital transfers only covered around 66% of the CAD last year. We expect this ratio to remain broadly constant in 2022-2023, even as EU flows accelerate. High public external debt issuances will keep the net external debt position at around 22% of GDP over the forecast period, compared with the ‘BBB’ median of 5%.
Political Stability: The grand coalition of the centre-left PSD and centre-right PNL has proven remarkably stable since taking power in December, despite various areas of policy disagreement and confrontational stance in the past. The coalition government has turned its focus to dealing with Ukrainians and the cost of living crisis while fully supporting the EU stance against Russia. There are few risks around short-term stability, in particular as parties want to focus on meeting RRF milestones to unlock generous funding. Nevertheless, public discontent could increase rapidly if the cost of living crisis accentuates, potentially risking more populist policies or sharpening internal divisions within the coalition.
ESG – Governance: Romania has an ESG Relevance Score (RS) of 5[+] for both Political Stability and Rights and for the Rule of Law, Institutional and Regulatory Quality and Control of Corruption. These scores reflect the high weight that the World Bank Governance Indicators (WBGI) have in our proprietary Sovereign Rating Model (SRM). Romania has a moderate WBGI ranking at 59.2 percentile, reflecting a recent record of peaceful political transitions, a moderate level of rights for participation in the political process, moderate institutional capacity, established rule of law and a moderate level of corruption.
RATING SENSITIVITIES
Factors that could, individually or collectively, lead to negative rating action/downgrade:
-Fiscal: Reduced confidence in the capacity to implement fiscal consolidation that undermines fiscal policy credibility, leads to a faster-than-projected increase in public debt, reduces financing flexibility or increases risks to macro-economic and external sector stability.
-External: A sustained deterioration in the balance of payments, for example, reflecting a sharper widening in the CAD and/or failure to attract non-debt financing flows.
-Macro: Weaker growth prospects, for example reflecting a more pronounced or longer period of economic slowdown that leads to increased fiscal pressures.
Factors that could, individually or collectively, lead to positive rating action/upgrade:
-Fiscal: Improved confidence that the government´s fiscal strategy will lead to a narrowing fiscal deficit and broad stabilisation of general government debt/GDP over the medium term.
-External: Evidence of increased economic and external resilience to tighter financing conditions and geopolitical risks.
SOVEREIGN RATING MODEL (SRM) AND QUALITATIVE OVERLAY (QO)
Fitch’s proprietary SRM assigns Romania a score equivalent to a rating of ‘BBB’ on the Long-Term Foreign-Currency (LT FC) IDR scale.
Fitch’s sovereign rating committee adjusted the output from the SRM to arrive at the final LT FC IDR by applying its QO, relative to SRM data and output, as follows:
– External Finances: -1 notch, to reflect Romania’s higher net external debtor and net investment liabilities positions than the ‘BBB’ median, as well as higher external vulnerability than implied by the SRM model, given adverse policy developments in recent years that have impacted external competitiveness and aggravated its exposure to shocks.
Fitch’s SRM is the agency’s proprietary multiple regression rating model that employs 18 variables based on three-year centred averages, including one year of forecasts, to produce a score equivalent to an LT FC IDR. Fitch’s QO is a forward-looking qualitative framework designed to allow for adjustment to the SRM output to assign the final rating, reflecting factors within our criteria that are not fully quantifiable and/or not fully reflected in the SRM.
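The SRM itself is proprietary, but the three-year centred averaging convention it refers to can be sketched simply; the 2023 value below is assumed for illustration, and the function name is ours, not Fitch’s:

```python
# Three-year centred average for reference year t: average the
# outturn for t-1, the estimate for t and the forecast for t+1.
fiscal_deficit = {2021: 7.5, 2022: 7.1, 2023: 6.5}  # % of GDP; 2023 assumed

def centred_average(series: dict, year: int) -> float:
    """Average of year-1, year and year+1 values (one year of forecast)."""
    window = [series[year - 1], series[year], series[year + 1]]
    return sum(window) / len(window)

print(round(centred_average(fiscal_deficit, 2022), 2))  # 7.03
```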
BEST/WORST CASE RATING SCENARIO
International scale credit ratings of Sovereigns, Public Finance and Infrastructure issuers have a best-case rating upgrade scenario (defined as the 99th percentile of rating transitions, measured in a positive direction) of three notches over a three-year rating horizon; and a worst-case rating downgrade scenario (defined as the 99th percentile of rating transitions, measured in a negative direction) of three notches over three years. The complete span of best- and worst-case scenario credit ratings for all rating categories range from ‘AAA’ to ‘D’. Best- and worst-case scenario credit ratings are based on historical performance. For more information about the methodology used to determine sector-specific best- and worst-case scenario credit ratings, visit https://www.fitchratings.com/site/re/10111579.
REFERENCES FOR SUBSTANTIALLY MATERIAL SOURCES CITED AS KEY DRIVERS OF RATING
The principal sources of information used in the analysis are described in the Applicable Criteria.
ESG CONSIDERATIONS
Romania has an ESG Relevance Score of ‘5[+]’ for Political Stability and Rights as WBGI has the highest weight in Fitch’s SRM and is therefore highly relevant to the rating and a key rating driver with a high weight. As Romania has a percentile rank above 50 for the respective governance Indicator, this has a positive impact on the credit profile.
Romania has an ESG Relevance Score of ‘5[+]’ for Rule of Law, Institutional & Regulatory Quality and Control of Corruption as WBGI has the highest weight in Fitch’s SRM and is therefore highly relevant to the rating and is a key rating driver with a high weight. As Romania has a percentile rank above 50 for the respective governance indicators, this has a positive impact on the credit profile.
Romania has an ESG Relevance Score of ‘4[+]’ for Human Rights and Political Freedoms as the Voice and Accountability pillar of the WBGI is relevant to the rating and a rating driver. As Romania has a percentile rank above 50 for the respective governance indicator, this has a positive impact on the credit profile.
Romania has an ESG Relevance Score of ‘4[+]’ for Creditor Rights as a willingness to service and repay debt is relevant to the rating and is a rating driver for Romania, as for all sovereigns. As Romania has a record of 20+ years without a restructuring of public debt, which is captured in our SRM variable, this has a positive impact on the credit profile.
Except for the matters discussed above, the highest level of ESG credit relevance, if present, is a score of ‘3’. This means ESG issues are credit-neutral or have only a minimal credit impact on the entity, either due to their nature or to the way in which they are being managed by the entity. For more information on Fitch’s ESG Relevance Scores, visit www.fitchratings.com/esg.
Exports from Latin America and the Caribbean hit their highest level in six years thanks to a 9.9 percent increase in 2018, albeit amid growing downside risks, according to a new report by the Inter-American Development Bank. The region exported $1.08 trillion last year. While this is the highest level since record exports in 2012, the rise fell short of the 12.2 percent growth rate for 2017. The region’s performance also lagged a worldwide trade increase of 11.6 percent for the Jan-Sept period (compared to the same period the previous year).
IDB bolsters Barbados’ macroeconomic emergency program
Tuesday, November 6, 2018 - 10:26
THE $100 MILLION LOAN IS INTENDED TO SUPPORT RESTORING BARBADOS’ ECONOMIC STABILITY, PROMOTE GROWTH, AND IMPLEMENT REFORMS Relying upon a $100 million loan from the Inter-American Development Bank (IDB), the Government of Barbados seeks to regain macroeconomic stability, implement fiscal adjustment measures that foster a sustainable fiscal balance in the short and medium term, and protect social spending programs for the most vulnerable Barbadians. | https://www.iadb.org/en/news?f%5B0%5D=filter%3A1126&f%5B1%5D=filter_news_by_topic%3A1135&f%5B2%5D=filter_news_by_country%3A1032&%3Bf%5B1%5D=filter%3A1126&%3Bf%5B2%5D=filter_news_by_country%3A1007&%3Bf%5B3%5D=filter_news_by_country%3A1005&%3Bf%5B4%5D=filter_news_by_topic%3A1124&%3Bf%5B5%5D=filter_news_by_topic%3A1100&%3Bf%5B6%5D=filter%3A1138 |
This contribution discusses the experience of the euro, and the extent to which its future is promising. The discussion begins with the Economic and Monetary Union (EMU) theoretical framework and policies, and adopted by the European Central Bank (ECB), and the extent to which fiscal policy, pursued by the EMU member countries, and monetary policy, pursued by the ECB, have been successful. The theoretical background of this approach is based on the New Consensus Macroeconomics (NCM), but it differs; and this is elaborated upon and discussed extensively. This discussion inevitably includes the post Global Financial Crisis (GFC), the Great Recession (GR) and the euro-crisis period in an attempt to examine the consistency of the theoretical background and the fiscal and monetary policies pursued. In terms of the future of the euro, the discussion just suggested enables this contribution to conclude that the extent to which the euro would survive requires further economic policies. Such policies, in addition to the current monetary policy, should be especially proper EMU fiscal policy, and other policies as discussed in this contribution. In effect this amounts to the suggestion that political integration is what is required to enable the euro to survive in the future.
Keywords: EMU, monetary policy, fiscal policy, political integration
JEL Classification: E52, E60, E62, O52
INTRODUCTION
The EMU and ECB launched the single currency (euro) in 1999.2 The euro has been operating since 1999 as a virtual currency and since 2002 when its introduction was technically accomplished. The euro-area countries have not experienced promising economic performance since the introduction of the euro. Inflation has not always met the 'close to 2 percent from below' ECB inflation target. More recently it has been a continuous undershooting of inflation (this is the annual core inflation, which excludes volatile prices of energy and unprocessed food and tobacco and at which the ECB looks in its policy decisions), which was 1.1 per cent in February 2018 and expected to remain below the ECB's target in the future. Also, economic growth has been sluggish, and unemployment has remained high. Unemployment in the euro area, and in 2010, was 10% of the labour force. The latest figures (The Economist 12 May, 2018) suggest that it is 8.5% (as in March, 2018). In the USA it was also 10% in 2010 (the top US unemployment rate of the 'Great Recession'; the equivalent during the 'Great Depression' was 25 percent), but the latest figure is 3.9% (as in April, 2018). It is also the case that euro area youth unemployment (people unemployed who are between 15 and 25 years old) is high at 17.3% (as in March, 2018).3 In the USA it is not so high, 8.5% (as in March, 2018).4 Unemployment has been significantly different among the EMU countries; some examples make the point. Unemployment in March 2018 was: in Spain it was 16.1%, in Italy 11.0%, in France 8.8%, and in January 2018 was 20.6% in Greece (The Economist, 12 May, 2018). By contrast, and in March 2018, it was 5.0% in Austria, 3.4% in Germany and 4.1% in Denmark (The Economist op. cit.).
The euro area framework has not been effective in addressing these significant differences. This has been particularly so with the peripheral countries, but not only, as shown above. This raises issues about the future longer-term euro-area growth potential. It is also the case that euro-area inflation is stubbornly below the ECB's 'close to 2% from below' target. Another serious problem is convergence among member countries, which has not materialised. This is so in view of the non-existence of a euro area fiscal policy; the latter is in the hands of national governments, as shown below, in the form of fiscal rules. These, however, have not been effective in promoting convergence. It is also the case that banking supervision, until recently, remained in the hands of the euro-area countries, which differed considerably in their financial fragility. However, more recently there have been signs of an economic growth upswing; the euro-area GDP growth rate of the 4th quarter of 2017 was 2.7% (The Economist 05 May, 2018) and expected to be 2.3% in 2018 (The Economist, 12 May, 2018).5 According to the IMF (Lagarde 2018), the euro-area GDP growth rate is expected to be 2.2% for 2018, and the global growth rate 3.9%. But at the 2018 World Economic Forum (WEF), delegates were cautious on the upswing, not just in the EMU but also in the global economy. This was especially so according to the IMF Managing Director, who stated: "use this time to find lasting solutions to the challenges facing the global economy" (Financial Times 26 February, 2018). Also, the ECB President acknowledged after the governing council meeting on 26 April 2018, that a "moderation" in the euro area's recovery and "a loss of momentum that is pretty broad-based across countries and all sectors" is evident (as reported in the Financial Times 27 April 2018).
We proceed in section 2 to discuss the EMU theoretical framework, along with the economic policies in the EMU; the emphasis being on the EMU fiscal and monetary policies. Section 3 discusses the extent to
2 When the euro was launched, the EMU members were eleven countries, namely Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, Netherlands, Portugal and Spain. Another eight countries joined the EMU subsequently: Greece, Cyprus, Estonia, Latvia, Lithuania, Malta, Slovakia and Slovenia.
3 Available at: http://www.tradingeconomics.com/euro-area/youth-unemployment-rate
4 Available at: http://www.tradingeconomics.com/united-states/youth-unemployment-rate
5 Examples in terms of growth rates, as expected in 2018, are: Austria 2.8%, France 2.0%, Germany 2.3%, Greece 1.6%, Italy 1.4% and Spain 2.8% (The Economist, 2018). Spain apparently has done better than the rest of the euro-area countries. This is due to the economic policies undertaken by the government there, most important of which have been structural changes, essentially falling labour costs.
which the EMU economic policies have been successful since the EMU's creation, along with the question of whether the euro can survive without political integration. We summarise and conclude in section 4.
EMU THEORETICAL FRAMEWORK AND POLICIES
The EMU approach, in terms of its theoretical dimension and economic policies, relies on the New Consensus Macroeconomics (NCM) model (see, for example, Arestis 2007). Its key elements are as follows: the market economy is essentially stable, and as such does not need macroeconomic policies, particularly discretionary fiscal policy. The only policy that can be effective is monetary policy. Monetary policy is the only instrument that can affect inflation in the long run, on the assumption that the inflation rate is the only macroeconomic variable monetary policy can affect, and on the further assumption that inflation is a monetary phenomenon. Fiscal policy is not viewed as a powerful macroeconomic instrument in view of the Ricardian Equivalence Theorem (Arestis 2012). Monetary policy has thus been upgraded and fiscal policy downgraded; fiscal policy can only serve to achieve a balanced budget. Monetary policy can achieve the set inflation target, and thereby macroeconomic stability emerges.
The adoption of price stability as the only objective of monetary policy, to be achieved by the ECB as an 'independent' central bank with only one instrument, namely manipulation of the ECB's rate of interest, clearly signals the adoption of the NCM agenda (Arestis and Sawyer 2006a, 2006c). The ECB and the national euro-area central banks (NCBs) are not allowed to seek or take instructions from any other institution or from any EMU member country. Monetary policy should be operated by experts (whether bankers, economists or others) in the form of an independent central bank; "this is an effective and credible monetary-policy making" (Yellen 2017). However, Angeriz et al. (2008) show that central bank independence has not produced the expected outcomes. Also, Angeriz and Arestis (2007, 2008) demonstrate that low inflation and price stability do not always lead to macroeconomic stability. The emergence of the GFC and the GR provides ample evidence for this proposition.
The main rationale for ECB independence, its proponents suggest, is that governments should not be trusted to deal with price stability in view of their concern with electoral success, thereby accepting higher inflation in pursuit of lower unemployment. In this sense central bank independence has a fundamental impact on inflationary expectations, which are a major factor in enabling the monetary authorities to achieve their inflation target. This approach of course raises the issue of whether the independence of the ECB has worked as intended. Angeriz et al. (2008) demonstrate that such independence has had only a marginal effect. Harcourt et al. (2018) examine the emergence of the ECB and demonstrate that the notion of central bank independence is based on a political rather than an economic argument, thereby leading to sub-optimal economic results. Harcourt et al. (op. cit.) also argue that the case for central bank independence, which rests on the argument that inflationary expectations have a significant impact on inflation, is questionable; there is no theoretical justification or empirical evidence on this issue (see, also, Forder 2004). Furthermore, in terms of recent experience and the euro crisis in particular, ECB independence seems to have been undermined. The Outright Monetary Transactions (OMT) programme, announced in September 2012 but never implemented, was opposed by some EMU member governments, especially Germany.6 Similar objections were raised to Quantitative Easing (QE). The latter, however, was eventually introduced in March 2015, after the European Court of Justice decision (see footnote 6).
6 Germany's central bank, the Bundesbank, opposed OMT on the grounds that it was close to monetary financing. The Bundesbank objection was referred to the German constitutional court, whose view on the matter was similar to the Bundesbank's. The ECB OMT scheme was referred to the European Court of Justice (ECJ) on 7 February 2014. On 14 January 2015, the ECJ released an Advocate General opinion, which suggested that the OMT was in line with EU law, with a final ruling to the same effect issued on 16 June 2015.
In terms of the ECB's policy dimension, it holds the view that inflation is best controlled through interest rate manipulation to achieve the 'close to 2 per cent from below' inflation target. However, the money supply is also taken on board: a reference value of 4.5 percent growth for the M3 money supply is in place. This, it is hoped, improves communication between the public and policy-makers and provides discipline, accountability, transparency and flexibility to monetary policy. Strictly speaking, then, the ECB model differs from the inflation targeting of the NCM. It contains two pillars: an economic analysis and a monetary analysis. The rationale of the 'two-pillar' approach is based on the theoretical premise that different time perspectives in the conduct of monetary policy require a different focus in each case. The short- to medium-term focus on price movements requires economic analysis; long-term price trends require monetary analysis.
The ECB's economic analysis is an assessment of price developments and the risks to price stability over the short to medium term. A range of indicators is taken into account, summarised in ECB (2004: 55-57). The ECB's monetary analysis relies heavily on monetary developments in terms of the long-run link between money and prices. Deviations from the 4.5 percent reference value would 'signal risks to price stability'. Monetary analysis is utilised by the ECB as a 'cross check' for consistency between it and the short-term perspective of economic analysis. In this approach there is the strong belief that there is a long-term link between money (M3) and inflation as a result of a stable demand for money. This focus, of course, reflects the notion that inflation is a monetary phenomenon to be tackled by both manipulating the rate of interest and watching movements in M3. Short-term volatility of inflation is allowed, but not in the long run, reflecting the view that monetary policy affects prices with a long lag.
In the long run there is no trade-off between inflation and unemployment, and the economy operates at the Non-Accelerating Inflation Rate of Unemployment (NAIRU) if accelerating inflation is to be avoided. In the long run, inflation is viewed as a monetary phenomenon, in that the pace of inflation is aligned with the rate of interest and the money stock. The essence of Say's Law holds, namely that the level of effective demand does not play an independent role in the (long-run) determination of the level of economic activity; it adjusts to underpin the supply-side determined level of economic activity, which itself corresponds to the NAIRU. The NAIRU is a supply-side phenomenon closely related to the workings of the labour market.
We proceed in the next two sub-sections to discuss the EMU fiscal policy, followed by the ECB monetary policy. The discussion in both sections follows their implementation since the inauguration of the EMU.
EMU Fiscal Policy
Fiscal policy in the EMU is based essentially on the Stability and Growth Pact (SGP). Underlying the SGP approach is the notion of sound public finances (European Commission 2000). The SGP is essentially based on the proposition that government budgetary positions should be close to balance or in small surplus over the course of the business cycle, with a deficit of no more than 0.2 percent of GDP on average; in any given year, however, the deficit should not exceed 3 percent of GDP. Also of great importance is the debt-to-GDP ratio, which should not exceed 60%. The EMU member countries sharing the euro are expected to submit annual stability programmes,7 which are monitored in terms of their implementation. It should be noted, though, that there is no reason why a maximum deficit of 3 percent of GDP should be relevant for all countries. It is also the case that the budget position is sensitive to the business cycle. There is, thus, no economic theory or empirical evidence to justify the SGP.
Two sets of changes emerged as a result of a number of countries breaking the SGP rules and of problems that emanated from the euro crisis. The first set was based on proposals endorsed by the European Council in March 2005. They were as follows: more budgetary consolidation in good times; more flexibility in reducing deficits in bad times; more focus on cutting the debt-to-GDP ratio; more room for manoeuvre for countries carrying out structural reforms; and countries with sound finances being allowed to run small deficits to invest. Although these changes contained some flexibility, they still did not address the main problems as identified above.
The second set of changes, agreed at the meeting of European leaders on 8/9 December 2011 in Brussels, was signed on 2 March 2012 by most member countries of the European Union (EU). The result was an inter-governmental treaty, not a change to the EU treaties. A revised version of the SGP was agreed: the Treaty on Stability, Coordination and Governance in the Economic and Monetary Union, now called the 'Fiscal Compact' (FC). Its main ingredients are the following: a firm commitment to 'balanced budgets' for the euro-area countries, defined as a structural deficit of no greater than 0.5% of GDP, which should be written into the national constitutions; automatic sanctions for any euro-area country whose deficit exceeds 3% of GDP; and a requirement for countries to submit their national budgets to the European Commission, which has the power to request that they be revised. The rule of the old SGP of a 60 percent debt-to-GDP ratio is also retained; any excess should be eliminated at an average rate of one-twentieth of the excess each year. The FC entered into force on 1 January 2013. In effect the FC retains the principles of the previous SGP version, with the addition that countries that break the deficit rules may be punished. Clearly, the problems discussed above in relation to the SGP remain the same for the FC as well.
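To make the arithmetic of these fiscal rules concrete, the short sketch below checks a hypothetical country's position against the FC's two headline numbers (the 0.5% structural deficit ceiling and the one-twentieth debt-reduction rule). The input figures are invented for illustration and do not come from the text.

```python
# Illustrative check of the Fiscal Compact's two numerical rules.
# All figures are hypothetical.

def fiscal_compact_check(structural_deficit_pct: float, debt_ratio_pct: float):
    """Return whether the balanced-budget rule is met and the required
    average annual cut (in percentage points) in the debt-to-GDP ratio."""
    balanced_budget_ok = structural_deficit_pct <= 0.5  # 0.5% of GDP ceiling
    excess = max(0.0, debt_ratio_pct - 60.0)            # excess over the 60% rule
    required_annual_cut = excess / 20.0                 # one-twentieth per year
    return balanced_budget_ok, required_annual_cut

ok, cut = fiscal_compact_check(structural_deficit_pct=1.2, debt_ratio_pct=90.0)
print(ok)   # False: breaches the 0.5% structural deficit ceiling
print(cut)  # 1.5: the debt ratio must fall by 1.5 pp per year on average
```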
Arestis (2015) concludes that fiscal policy is a strong tool of economic policy for curing unemployment, especially when coordinated with monetary and financial stability policies. Indeed, such coordination is the best way forward. The GR highlighted the importance not only of fiscal policy but also of financial stability, both of which had been seriously downgraded prior to the 'great recession'. Corsetti et al. (2016) suggest that monetary and fiscal policies 'together' are necessary to stabilise the level of economic activity and inflation, especially when the central bank's policy rates stay close to the lower bound for a lengthy period. This is so since "the multiplier effect of government spending on output at the lower bound can be sizable. For the multiplier to be sizable, it is essential that monetary policy accommodates the fiscal stimulus" (Corsetti et al., op. cit.: 8; see, also, BIS 2011). It is also suggested that "The necessary fiscal accommodation might be sizable, potentially falling outside the limits of the Stability and Growth Pact" (p. 15).
EMU Monetary Policy
The ECB and the national central banks of the countries making up the euro area comprise the European System of Central Banks (ESCB). The ECB is responsible for EMU monetary policy and is "independent from political influence" (ECB 2004: 12). The ESCB Treaty, Article 105 (1), states that "the primary objective of the ESCB shall be to maintain price stability" and that "without prejudice to the objective of price stability, the ESCB shall support the general economic policies in the Community with a view to contributing to the achievement of the objectives of the Community as laid down in Article 2". Achieving price stability, it is suggested, ensures macroeconomic stability. The GFC, however, has proved that this is not a valid proposition.
Another problem with the ECB concerns the function of lender of last resort. The ECB intervenes only in the secondary government bond market, a facility introduced only in September 2012 amid a great deal of opposition. This stance is justified by the argument that buying debt instruments directly from governments in the primary market is equivalent to monetary financing of the government budget deficit; an unacceptable occurrence in this view. It is important to note, though, that the lender of last resort,
especially in the government bond market, is an essential tool for maintaining financial stability. No wonder Lagarde (2018), in a speech on 26 March 2018, urged euro-area leaders to set up a 'rainy day fund' to help cushion member countries in economic downturns.
A number of changes introduced as a result of the GFC, the GR and the euro crisis are worth discussing. The most important are the following. The ECB pumped limited liquidity into commercial banks after the GFC; however, it raised its rate of interest twice before it started reducing it, from 4.25 percent in September 2008 to an all-time low of 0.25 percent in November 2013. In May 2009 the ECB increased its credit support to euro-area banks at very low interest rates through the introduction of the Long-Term Refinancing Operations (LTROs) scheme, which was designed to provide longer-term liquidity. From December 2011 to February 2012 the ECB provided €1 trillion to euro-area banks.
In December 2010 the decision was taken to establish the European Stability Mechanism (ESM), the euro area's permanent bailout fund and permanent firewall. The ESM was designed as the permanent crisis resolution mechanism for the countries of the euro area, with a maximum lending capacity of €500 billion. In mid-2013 the ESM replaced the then existing European Financial Stability Facility (EFSF) and European Financial Stabilisation Mechanism (EFSM), whose functions were to handle money transfers and programme monitoring for the approved loans to euro-area countries. The Single Supervisory Mechanism (SSM) was introduced on 16 April 2014 to oversee all systemic banks in the euro area. It comprises the ECB and the national supervisory authorities of the participating countries. Its main aims are to: (i) ensure the safety and soundness of the European banking system; (ii) increase financial integration and stability; and (iii) ensure consistent supervision. The Single Resolution Mechanism (SRM) was also introduced, with the aim of restructuring in an orderly way a bank that is failing or likely to fail. The SRM applies to banks covered by the SSM.
A number of decisions were taken at the 28/29 June 2012 European Union (EU) summit meeting: a banking licence for the ESM, giving it access to ECB funding and thus increasing its firepower; banking supervision by the ECB; and a 'growth pact', which would involve issuing project bonds to finance infrastructure. A banking union and a single euro-area bank deposit guarantee scheme were two longer-term decisions.8 The introduction of euro bonds and euro bills were further decisions. Most important at the time was the announcement by the President of the ECB in July 2012 that the ECB would do 'whatever it takes' to save the euro. That statement was considered a turning point in the euro crisis. It was reconfirmed by the ECB President after the ECB's rate-setting governing council meeting on Thursday 9 January 2014.
Further steps were introduced in June 2014 by the ECB to counter deflation. It reduced its benchmark interest rate from 0.25 percent to 0.15 percent (reduced further to 0.05 percent in September 2014 and to 0.00 percent in March 2016); it introduced a negative deposit rate, whereby the ECB would charge commercial banks 0.1 percent on their deposits with it (changed to 0.20 percent in September 2014 and to 0.4 percent in March 2016). In September 2014 the 'Targeted Long-Term Refinancing Operations' (TLTROs) were launched, whereby the ECB would support bank lending to the euro-area non-financial sector. Banks could initially borrow up to 7 percent of their loans to companies and individuals (exclusive of mortgages) in two tranches, in September and December 2014; additional operations were carried out in March, June, September and December 2015 and in March and June 2016. Banks could borrow for up to four years so long as they used the funds to lend to households and companies. It is the case that a total of
8 A project is in place to strengthen the euro-area banking system, which includes the plan for a banking union. However, political negotiations on progressing the banking union have entered a 'critical phase', which could leave the project unfinished for some time (as the Vice-President of the European Commission suggested, reported in the Financial Times, 11 April 2018).
€400 billion was injected into the banking system through five TLTRO auctions between September 2014 and September 2015. However, bank corporate lending grew over the same period by just €4 billion.
QE was eventually introduced in March 2015; as stated above, negative interest rates had already been introduced. In fact, the ECB at its meeting on 22 January 2015 decided to undertake QE; it went on to purchase euro-area bonds and other safe financial assets every month, starting in March 2015, and promised to continue until inflation was back at the ECB's target. The ECB decided to continue its QE in 2018 (the €60 billion monthly purchases were reduced to €30 billion in early 2018, with the ultra-low interest rates unchanged). It is expected probably to end the programme in December 2018, paving the way for a rise in interest rates in the first half of 2019. Whether the ECB's QE has been successful is an interesting question. There is the argument that EMU banks, insurance groups and pension funds need the relevant QE assets to meet their capital requirements, so banks and other relevant financial institutions may not be persuaded to buy riskier assets, such as equities, to boost the economy – and this is desperately needed in the EMU area. There is also the argument that QE programmes increase inequality: they increase the value of assets for the relevant holders, while harming savers in view of the record low interest rates. These arguments raise the issue of whether the ECB has been successful in its monetary policy, as discussed above. This aspect is further discussed in the section that follows.
A more recent ECB initiative concerns euro-area banks and the tackling of their non-performing loans (NPLs). The rule imposed in January 2018 requires banks to hold collateral against loans that become non-performing (once payment becomes 90 days overdue). This ECB action is entirely due to the fact that the health of euro-area banks is very worrying. Non-performing loans prevent banks from lending to more productive borrowers. The level of NPLs, in relation to total loans, is estimated to be 5.1% for the EU, compared to 1.3% in the US and 1.5% in Japan. The relevant consultative paper aroused strong criticism, including legal opinions from the European Parliament and the European Council. Both institutions stated that the ECB would overstep its mandate if the proposal were adopted in its current form. The ECB is in the process of reviewing the feedback received during the consultation period before the final text is issued.
HAVE THE EMU ECONOMIC POLICIES BEEN SUCCESSFUL?
The ECB managed to avert a complete collapse of the financial system and of the real EMU economies after the emergence of the GFC, the GR and the euro crisis. It is the case, though, that ECB policies have not really been successful beyond that, as argued in what follows. The decline in inflation rates in the 1990s and early 2000s was mainly due to globalisation (Angeriz and Arestis 2008). And since the relevant crises, very low and negative interest rates along with QE have not produced a sustainable recovery (Arestis 2018). It should be noted, though, that euro-area growth seems to be bouncing back. Growth was 1.7 percent in 2016 and 2.5 percent in 2017, with unemployment falling to 10.1 percent in June 2016 from 10.4 percent in February 2016; it fell further to 9.6% in December 2016 (Eurostat, December 2016) and, more recently, to 8.5% in March 2018 (The Economist, 12 May 2018). Still, this is a high unemployment rate. It is the case that fiscal policy in advanced countries, including the euro area, has shifted away from austerity over the past few years, which has been helpful on this score. Inflation is expected to be 1.5% in 2018 (The Economist, op. cit.); this is below the ECB's inflation target of 'close to 2% from below'.9 In addition, there is little sign of wage growth, which is an important element in achieving economic health; employees have lost bargaining power in view of weaker trade unions. A proper wage policy is thereby urgently required. There is also a series of disappointing business surveys, and an unexpected third successive monthly fall in industrial production in February 2018; euro-area GDP growth is also slowing unexpectedly (Financial
9 The core measure of inflation, which excludes the cost of food and alcohol, is viewed by policymakers as a better gauge of underlying price pressures.
Times, 14 and 18 April 2018). In other words, ECB policies have not helped sustain stability and growth in the euro area. It is not surprising that the ECB President has suggested that institutional reforms are necessary, especially in terms of more euro-area integration (Draghi 2016a, 2016b; see, also, IMF 2015).
Clearly our analysis demonstrates that there are serious problems with both EMU fiscal and monetary policies. They have not acted as the stabilising forces the policymakers expected. Orphanides (2017) suggests that both fiscal and monetary policies in the euro area have been very restrictive following the GFC, the GR and the euro crisis: fiscal policy's weak response is due to the SGP/FC, while monetary policy failed in view of the ECB's unwillingness to support all its member countries in a similar manner. More seriously, though, these problems are rooted in the absence of economic integration; as such, without a political union the EMU and the euro cannot have a good record of long-term survival.
A relevant recent proposal is the 'Five Presidents' report (European Commission 2015), the most important item of which is the creation of a banking union. The objectives of the banking union are to reduce financial risk and improve access to liquidity. Once the banking union is completed, a capital markets union is to be launched for all EMU members. However, in terms of fiscal policy the 'Five Presidents' report emphasises the importance of fiscal discipline, referring to 'responsible budgetary policies'. It is recommended that "Responsible national fiscal policies are therefore essential. They must perform a double function: guaranteeing that public debt is sustainable and ensuring that fiscal automatic stabilisers can operate to cushion country-specific economic shocks" (European Commission 2015: 15). More important, though, is the suggestion that "The Stability and Growth Pact remains the anchor for fiscal stability and confidence in the respect of our fiscal rules" (European Commission, op. cit.: 18).10
Although the focus of the 'Five Presidents' report is "on the need to promote real convergence, it is far from achieving economic or indeed political integration. As such, the proposed changes are rather cosmetic ones, although the extent to which banking union is achieved is a way forward" (Arestis 2016: 36). It is also the case that obsession with rules rather than with proper discretion does not help; especially so under the current arrangements, whereby the euro area's closest federal institution, the ECB, in the absence of political integration, is exposed to political pressures from each of the 19 EMU member countries. This dimension makes the EMU a fragile institution. Interestingly enough, Lagarde (2018) also calls for a 'modernised capital union', an improved banking union, and a firm move towards greater fiscal integration. All these, according to Lagarde (op. cit.), require improving "the euro area architecture" and building "a stronger economic union in the days ahead". It is also important to note the importance of introducing a deposit insurance scheme, whereby bank deposits would be protected. Such a scheme would be funded by all euro-area banks, and would reduce pro-cyclical runs on weak banks or on banks in fiscally weak countries.
SUMMARY AND CONCLUSIONS
The euro area has had economic problems since its inception, and these have become even more serious recently, especially since the recent crises. The design of the euro-area project and its subsequent amendments contained a number of faults, as argued in this contribution. Clearly, then, significant, fundamental and indeed urgent changes are desperately needed so that the euro area can become a proper economic and political union.
10 The European Commission proposes (available at: http://europa.eu/rapid/press-release_IP-17-5005_en.htm) a deeper euro-area integration by creating a finance ministry and euro-budget; also turning the ESM into a European Monetary Fund, which would be a kind of embryonic treasury for the euro area.
We have argued in this contribution that political integration is paramount (see, also, Arestis and Sawyer 2006a, 2006b, 2006c, 2012). The requirement for an effective political union is to have in place proper monetary and fiscal policies. Political integration is very important, for it provides both monetary and fiscal possibilities, enabling coordination of taxation and spending throughout the EMU, along with monetary and financial stability policies, as well as appropriate wage policies. Such a union would allow the EMU to spread risk across its area and eliminate uneven booms and busts in different regions. Under such arrangements the ECB's single interest rate would never be inappropriate for any one country; clearly this is not the case under current arrangements. Banking union is another relevant aspect to which attention should be paid.11 Especially so, as Berger et al. (2018) suggest, because in the absence of a fiscal union there is the serious current danger that sovereign debt cannot be restructured without threatening the local banking systems. In this sense the EMU is incomplete without banking union; but this would not be enough. Fiscal union is also required, as the most efficient insurance against economic risks (see, also, Berger et al. 2018). Although the euro area is experiencing some recovery, it still remains vulnerable to shocks and future financial crises. Proper economic and political integration is thereby desperately needed to avoid these risks. Without it the euro area continues to face serious risks that policymakers should not ignore. For if they ignore them, the euro might be doomed at the end of the day.
REFERENCES
Angeriz, A., Arestis, P. and McCombie, J. (2008): "Does Central Bank Independence Affect Inflation Persistence and Volatility?", CCEPP Working Paper, Cambridge Centre for Economic and Public Policy, Department of Land Economy, University of Cambridge.
Angeriz, A. and Arestis, P. (2007): "Monetary Policy in the UK", Cambridge Journal of Economics, 31(6), pp. 863-884.
Angeriz, A. and Arestis, P. (2008): "Assessing Inflation Targeting Through Intervention Analysis", Oxford Economic Papers, 60(2), pp. 293-317.
Arestis, P. (2007): "What is the New Consensus in Macroeconomics?", in P. Arestis (ed.), Is There a New Consensus in Macroeconomics?, Houndmills, Basingstoke: Palgrave Macmillan.
Arestis, P. (2012): "Fiscal Policy: A Strong Macroeconomic Role", Review of Keynesian Economics, 1(1), pp. 93-108.
Arestis, P. (2015): "Coordination of Fiscal with Monetary and Financial Stability Policies Can Better Cure Unemployment", Review of Keynesian Economics, 3(2), pp. 233-247.
Arestis, P. (2016): "Can the Report of the 'Five Presidents' Save the Euro?", European Journal of Economics and Economic Policies: Intervention (EJEEP), 13(1), pp. 28-38.
Arestis, P. (2018): "Monetary Policy since the Global Financial Crisis", in P. Arestis and M. Sawyer (eds.), Economic Policies since the Global Financial Crisis, Annual Edition of International Papers in Political Economy, Houndmills, Basingstoke: Palgrave Macmillan.
Arestis, P. and Sawyer, M. (2006a): "Alternatives for the Policy Framework of the Euro", in W. Mitchell, J. Muysken and T.V. Veen (eds.), Growth and Cohesion in the European Union: The Impact of Macroeconomic Policy, Cheltenham: Edward Elgar Publishing Limited.
Arestis, P. and Sawyer, M. (2006b): "Reflections on the Experience of the Euro: Lessons for the Americas" in M. Vernengo (ed.), Monetary Integration and Dollarization: No Panacea, Cheltenham: Edward Elgar Publishing Limited.
11 The 19 euro-area states intend to complete the banking union in 2018. A capital market union is also important and should be considered.
Arestis, P. and Sawyer, M. (2006c): "Macroeconomic Policy and the European Constitution", in P. Arestis and M. Sawyer (eds), Alternative Perspectives on Economic Policies in the European Union, Annual Edition of International Papers in Political Economy, Houndmills, Basingstoke: Palgrave Macmillan.
Arestis, P. and Sawyer, M.C. (2012): "Can the Euro Survive after the European Crisis?", in P. Arestis and M.C. Sawyer (eds.), The Euro Crisis, Annual Edition of International Papers in Political Economy, Houndmills, Basingstoke: Palgrave Macmillan.
Bank for International Settlements (BIS) (2011): "Fiscal Policy and its Implications for Monetary and Financial Stability", BIS Papers, No. 59, December, Basel, Switzerland: Bank for International Settlements. Available at: http://ssrn.com/abstract=2002654
Berger, H., Dell'Ariccia, G. and Obstfeld, M. (2018): "The Euro Area Needs a Fiscal Union", IMF Blog, 22 February.
Corsetti, G., Dedola, L., Jarociński, M., Maćkowiak, B. and Schmidt, S. (2016): "Macroeconomic Stabilization, Monetary-Fiscal Interactions, and Europe's Monetary Union", Discussion Paper No. 1988, December, Frankfurt, Germany: European Central Bank.
Draghi, M. (2016a): "On the Importance of Policy Alignment to Fulfil Our Economic Potential", 5th Annual Tommaso Padoa-Schioppa Lecture at the Brussels Economic Forum 2016, Brussels, 9 June. Available at: https://www.ecb.europa.eu/press/key/date/2016/html/sp160609.en.html
Draghi, M. (2016b): "Stability, Equity and Monetary Policy", 2nd DIW Europe Lecture, German Institute for Economic Research (DIW), Berlin, 25 October. Available at: https://www.ecb.europa.eu/press/key/date/2016/html/sp161025.en.html
European Central Bank (ECB) (2004): The Monetary Policy of the ECB, Frankfurt: European Central Bank, pp. 1-126.
European Commission (2000): Public Finances in EMU, European Commission: Brussels, Belgium.
European Commission (2015): Completing Europe's Economic and Monetary Union, Report by J-C. Juncker, in Close Cooperation with D. Tusk, J. Dijsselbloem, M. Draghi and M. Schulz, European Commission: Brussels, Belgium.
Forder, J. (2004): "The Theory of Credibility: Confusions, Limitations and Dangers", in P. Arestis and M. Sawyer (eds), Neo-Liberal Economic Policy, Cheltenham, UK: Edward Elgar.
Harcourt, G.C., Kriesler, P. and Halevi, J. (2018): "Central Bank Independence Revisited", UNSW Business School Research Paper No. 2018 ECON 01, University of New South Wales, Sydney, Australia.
IMF (2015): "Euro Area Policies: Selected Issues", IMF Country Report No. 15/205, July, Washington D.C.: International Monetary Fund.
Lagarde, C. (2018): "Speech at the Europe Lecture", German Institute for Economic Research (DIW), Berlin, Germany, 26 March.
Orphanides, A. (2017): "The Fiscal-Monetary Policy Mix in the Euro Area: Challenges at the Zero Lower Bound", MIT Sloan School Working Paper 5197-17, MIT Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts.
Yellen, J.L. (2017): "A Challenging Decade and a Question for the Future", The 2017 Herbert Stein Memorial Lecture, Washington DC, USA. | https://1library.co/document/ye96l94q-the-past-and-future-of-the-euro.html |
For the latest updates on the key economic responses from governments to address the economic impact of the COVID-19 pandemic, please consult the IMF's policy tracking platform Policy Responses to COVID-19.
The economy of Bhutan, one of the world's smallest and least developed economies, is based on agriculture and forestry; however, the main source of revenue is the sale of electricity to India. The country has sustained growth thanks to the development of the hydroelectric sector and the dynamism of the tourism sector. Growth was estimated to have risen to 5.3% in 2019 from 3.7% a year earlier, as higher power exports and household consumption provided support. This was also 0.4 percentage point higher than the IMF's previous growth estimate. A fall in tourism revenue in 2020 and 2021, owing to the coronavirus pandemic and social distancing measures, threatened financial stability and weighed on economic growth. The annual GDP growth rate in 2021 was negative, at -1.9%. Economic growth will continue to be supported by the hydropower sector, and in 2022 it is expected to reach 4.2% (IMF, 2022). Ties to the outside world - with the exception of India - will remain minimal.
The government has been working on the 12th five-year plan (2019-2023), which focuses on decentralisation from national to local governments, almost doubling the share of resources allocated to them and increasing their authority and functions. National self-sufficiency and inclusive socio-economic growth also remain among the pillars of the new five-year plan, meaning the government will pursue an expansive budgetary policy. The debt-to-GDP ratio spiked from 106.6% in 2019 to 120.7% in 2020 and 123.3% in 2021 (Statista, 2022); however, most of the debt is considered sustainable, as it is covered by financial arrangements with India under which the latter finances the construction of hydropower plants in Bhutan in exchange for power imports. Government debt is anticipated to decrease to 120.5% in 2022 and 117.3% in 2023. The current account deficit narrowed to USD 360 million in 2019 from USD 480 million a year earlier, as higher tourism revenues and exports partially offset strong imports. Bhutan modified its tourism policy in order to attract more Indian tourists, who are not subject to the daily tourism tax of USD 250; due to this modification, the number of Indian tourists increased substantially. The current account deficit reached USD 522.4 million in 2021 (Trading Economics, 2022). Inflation is closely linked to the Indian economy, as Bhutan's domestic currency, the ngultrum, is pegged to the Indian rupee. Inflation increased to 4.2% in 2020, against 2.8% in 2019, and then to 6.3% in 2021; it is expected to rise further to 6.9% in 2022 (World Economic Outlook, IMF, 2022) as the rupee loses value against major international currencies and food prices rise in the country. The fiscal deficit was forecast to widen to 4.9% by 2021 before stabilising in the medium term amid rising revenue collection. Bhutan is seeking to expand the base for the green tax by including tourist vehicles.
Bhutan remains a poor country, where living conditions are made difficult by hilly terrain and poor-quality infrastructure. However, GDP per capita doubled between 2004 and 2014, and the poverty rate fell to 9.9% in 2019. Unemployment remained low, though up sharply on 2020, at 2.4% in 2021. Bhutan is also the first country to use the Gross National Happiness (GNH) index to measure the well-being of its population, based not only on economic indicators but also on other factors summarised in the four GNH pillars: sustainable and equitable socio-economic development, environmental conservation, preservation and promotion of culture, and good governance.
|Main Indicators||2019||2020||2021 (e)||2022 (e)||2023 (e)|
|GDP (billions USD)||2.49||2.50||2.48||2.74||3.00|
|GDP (Constant Prices, Annual % Change)||4.3||-0.8||-1.9||4.2||5.7|
|GDP per Capita (USD)||3,371e||3,359||3,296||3,606||3,910|
|General Government Gross Debt (in % of GDP)||106.6||120.7||123.4||120.6||117.4|
|Inflation Rate (%)||2.8||4.2||6.3||6.9||5.2|
|Current Account (billions USD)||-0.53||-0.31||-0.22||-0.33||-0.29|
|Current Account (in % of GDP)||-21.1||-12.2||-8.8||-12.0||-9.8|
Source: IMF – World Economic Outlook Database - October 2021.
Note: (e) Estimated Data
|Monetary Indicators||2016||2017||2018||2019||2020|
|Bhutan Ngultrum (BTN) - Average Annual Exchange Rate For 1 EUR||71.48||73.56||80.69||79.10||84.64|
Source: World Bank - Latest available data.
|Breakdown of Economic Activity By Sector||Agriculture||Industry||Services|
|Employment By Sector (in % of Total Employment)||55.8||10.1||34.1|
|Value Added (in % of GDP)||15.8||36.1||43.4|
|Value Added (Annual % Change)||1.3||2.0||12.5|
Source: World Bank - Latest available data.
|Socio-Demographic Indicators||2021||2022 (e)||2023 (e)|
|Unemployment Rate (%)||0.0||0.0||0.0|
Source: IMF – World Economic Outlook Database - Latest available data
|2018||2019||2020|
|Labour Force||374,905||383,196||378,371|
Source: International Labour Organization, ILOSTAT database
|2017||2018||2019|
|Total activity rate||69.35%||69.71%||70.04%|
|Men activity rate||75.81%||76.25%||76.69%|
|Women activity rate||61.90%||62.13%||62.31%|
Source: International Labour Organization, ILOSTAT database
The Economic Freedom Index measures ten components of economic freedom, grouped into four broad categories or pillars: Rule of Law (property rights, freedom from corruption); Limited Government (fiscal freedom, government spending); Regulatory Efficiency (business freedom, labour freedom, monetary freedom); and Open Markets (trade freedom, investment freedom, financial freedom). Each of the freedoms within these four broad categories is individually scored on a scale of 0 to 100. A country's overall economic freedom score is a simple average of its scores on the 10 individual freedoms.
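As a minimal illustration of that scoring rule, the sketch below averages ten hypothetical component scores into an overall index; the numbers are made up for the example.

```python
# Hypothetical component scores (0-100) for the ten freedoms; the overall
# index is their simple average, as described above.
scores = {
    "property rights": 60, "freedom from corruption": 55,
    "fiscal freedom": 80, "government spending": 70,
    "business freedom": 65, "labour freedom": 75, "monetary freedom": 78,
    "trade freedom": 72, "investment freedom": 50, "financial freedom": 40,
}
overall = sum(scores.values()) / len(scores)
print(round(overall, 1))  # 64.5
```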
Source: Index of Economic Freedom, Heritage Foundation
The Indicator of Political Freedom provides an annual evaluation of the state of freedom in a country as experienced by individuals. The survey measures freedom according to two broad categories: political rights and civil liberties. The ratings process is based on a checklist of 10 political rights questions (on electoral process, political pluralism and participation, and functioning of government) and 15 civil liberties questions (on freedom of expression and belief, associational and organizational rights, rule of law, and personal autonomy and individual rights). Scores are awarded to each of these questions on a scale of 0 to 4, where 0 represents the smallest degree of rights or liberties present and 4 the greatest. The total score awarded on the political rights and civil liberties checklists determines the political rights and civil liberties ratings. Each rating of 1 through 7, with 1 representing the highest and 7 the lowest level of freedom, corresponds to a range of total scores.
Source: Freedom in the World Report, Freedom House
The World Press Freedom Index, published annually, measures violations of press freedom worldwide. It reflects the degree of freedom enjoyed by journalists, the media and digital citizens of each country, and the means used by states to respect and uphold this freedom. A score and a rank are assigned to each country. To compile the index, Reporters Without Borders (RWB) prepared a questionnaire incorporating the main criteria (44 in total) for assessing the press freedom situation in a given country. This questionnaire was sent to partner organisations, 150 RWB correspondents, journalists, researchers, jurists and human rights activists. It covers every kind of direct attack against journalists and digital citizens (murders, imprisonment, assault, threats, etc.) and against the media (censorship, confiscation, searches, harassment, etc.).
Source: World Press Freedom Index, Reporters Without Borders
THE government is targeting faster growth of 8 percent to 10 percent in the manufacturing sector over the next six years, driven by the robust performance of the economy, a top trade official said on Monday.
“We will try to hit the high level of about 8 percent to 10 percent as the target, because we really aim to strengthen further the manufacturing sector,” Trade Secretary Ramon Lopez told reporters on the sidelines of the Manufacturing Summit 2016 in Makati Shangri-La.
“As we grow that sector, it will grow the jobs that we need… So with a more robust manufacturing [sector]we will create more jobs, plus the entrepreneurship side will also create more jobs,” Lopez said.
The manufacturing industry grew by 6.9 percent in the third quarter of 2016, more than one percentage point higher than the 5.8 percent rate posted in the same period in 2015.
In the first quarter, manufacturing posted a growth rate of 8 percent, the highest in seven quarters, before slowing down to 6.2 percent in the second quarter.
Lopez noted that in the third quarter of 2016, the country’s gross domestic product (GDP) grew 7 percent, outpacing other Asian countries including China (which grew 6.7 percent) and beating the average consensus forecast of 6.8 percent.
“The recent performance has demonstrated remarkable economic resilience, owing to vigorous governance and economic reforms, as well as our continuous efforts to streamline processes and promote industrial and manufacturing resurgence,” he said.
More importantly, Lopez said the necessary ingredients for investment and employment growth are now present in the Philippines.
He said these include a growing domestic market with a population of over 100 million, an emerging middle class, political stability, strong macroeconomic foundation, rising consumer and business confidence, and a young, English-speaking, highly trainable workforce. | http://www.manilatimes.net/manufacturing-sets-6-yr-target-growth-8-10/298986/ |
• This factor affects the purchasing power of consumers and Verizon’s cost of capital.
Social Factors
• The culture and demographics of the environment affect customers’ needs as well as the potential market size.
Technological Factors
• This can lower barriers to entry, improve production efficiency and influence outsourcing decisions.
|Political|Economic|Social|Technological|
|Stability of the internal/external political environment|Economic growth|Population growth rate|Automation|
|Trading agreements|Interest rates|Age distribution|Technology incentives|
|Employment laws|Inflation rate|Career attitudes|Rate of technological change|
|Environmental regulations|Budget allocation| |Perception of technological change within the unit|
|Trade restrictions and tariffs|The level of inflation| | |
|Political stability|Employment level per capita| | |
SWOT Analysis
Strength – Is defined as a firm’s ability to use its resources and capabilities to develop a competitive advantage over competitors.
Weakness – Is defined as a strength that a firm lacks.
Opportunities – New opportunities for profit and growth can be found in the external environment. | https://www.educationindex.com/essay/5-Forces-Model-of-Verizon-PKASBJSCNZ |
Serbia Economic Outlook
June 7, 2022. The pace of economic growth moderated notably in the first quarter, and the breakdown of components calls for caution regarding future growth. Fixed investment growth moderated sharply in Q1 and, while inventory build-up stimulated growth significantly, it is likely that destocking in the later stages of the year will weigh on economic activity. On the other hand, household spending growth remained upbeat, seemingly undeterred by rising price pressures. Turning to the second quarter, inflation rose further in April and is unlikely to have peaked; producer price inflation also intensified in April. More positively, merchandise exports expanded robustly in the same month. Meanwhile, political pressure on Serbia is building. While most of Europe is turning away from Russian oil and gas, the government signed a new three-year gas supply deal with Russia in late May.
Serbia Economic Growth

Economic growth will cool sharply this year, but remain healthy nonetheless. Sanctions placed upon Russia will hurt tourism and trade, denting goods and services exports. Meanwhile, higher commodity prices will stoke inflationary forces and eat into consumers’ pockets. However, solid wage growth and a tighter labor market should provide some support. FocusEconomics panelists see GDP expanding 3.5% in 2022, which is down 0.2 percentage points from last month’s forecast, and 3.8% in 2023.
Serbia Facts
|Indicator||Value||Change||Date|
|Bond Yield||3.05||0.0 %||Dec 31|
|Exchange Rate||104.9||-0.31 %||Jan 01|
Serbia Economic News
- Serbia: Inflation rises in October to highest level since 2008 (November 14, 2022). Inflation came in at 15.0% in October, up from September’s 14.0%.
- Serbia: Central Bank continues its hiking cycle in November (November 10, 2022). The National Bank of Serbia (NBS) hiked the key policy rate by 50 basis points to 4.50% from 4.00% at its 10 November meeting.
- Serbia: Industrial output falls in September (October 31, 2022). Industrial production fell 0.3% year on year in September, contrasting with August’s 0.3% expansion.
- Serbia: Inflation comes in at highest level since April 2011 in September (October 11, 2022). Inflation rose to 14.0% in September, up from August’s 13.2%.
- Serbia: Central Bank continues its hiking cycle in October (October 6, 2022). The National Bank of Serbia (NBS) hiked the key policy rate by 50 basis points to 4.00% from 3.50% at its 6 October meeting.
- Session:
- Time: Monday, November 9, 2015 - 1:30pm-1:50pm
Correlation between CO Oxidation and Coke Oxidation over Cerium-Zirconium Mixed Oxides
Kehua Yin1, Shilpa Mahamulkar2, Hirokazu Shibata3, Andre Malek4, Christopher W. Jones2, Pradeep Agrawal2, Robert J. Davis1*
(1) University of Virginia, Charlottesville, VA 22904 (USA)
(2) Georgia Institute of Technology, Atlanta, GA 30332 (USA)
(3) Hydrocarbons R&D, The Dow Chemical Company, Dow Benelux B.V., P.O. Box 48, NL 4530 AA, Terneuzen (The Netherlands)
(4) Hydrocarbons R&D, The Dow Chemical Company, 1776 Building, Midland, MI 48674 (USA)
Coke formation reactions at high temperature are common in chemical processes, occurring for example on catalysts in hydrocarbon cracking and on reactor walls in steam cracking. While carbon rejection via coke may be desirable in some processes (e.g. FCC), it often leads to catalyst deactivation, poor heat transfer through reactor walls and even damage to the reactor. Thus, mitigation of coke by catalytic oxidation reactions can be an important remediation step in high-temperature processes. Cerium-zirconium mixed oxides have demonstrated higher oxidation activity than cerium oxide at high temperatures because of their increased thermal stability. However, kinetic studies of coke oxidation catalyzed by cerium-zirconium mixed oxides are still lacking. Hence, we investigated the reaction kinetics of coke oxidation catalyzed by cerium-zirconium mixed oxides and explored the relationship between mixed oxide composition and activity for oxidation of both coke and CO.
Model coke was prepared by flowing 40 cm³ min⁻¹ of ethylene (50%) in He through a quartz tube reactor at 1073 K and atmospheric total pressure. Soluble carbonaceous deposits in the quartz tube were removed with toluene and the remaining solid coke was collected for further study. Cerium oxide, zirconium oxide, and cerium-zirconium mixed oxides (Ce/Zr = 0.2, 0.5, 0.8) were prepared by precipitation or co-precipitation at pH = 10. Coke and the oxide catalysts were characterized by XRD, BET, SEM and Raman spectroscopy. Kinetics of coke oxidation reactions were determined in TGA (TA SDT Q-600) experiments at isothermal conditions with coke and the oxide catalysts in tight contact mode. Tight contact was achieved by grinding the coke with the catalysts in a mortar until there was no increase in activity with further grinding. Oxidation of CO was conducted in a fixed-bed reactor system equipped with an on-line gas chromatograph.
Coke oxidation rates over four cerium-based catalysts are shown in Figure 1, with the most active catalyst composition being Ce0.8Zr0.2O2. The CO oxidation rates are also correlated with coke oxidation rates in Figure 1, which likely indicates that the CO oxidation reaction can be used as a probe reaction for the screening of coke oxidation catalysts.
The apparent activation energy of both the non-catalyzed and the ceria-catalyzed coke oxidation reaction was determined by measuring the first-order rate constant as a function of temperature. The presence of catalyst decreased the observed activation energy (E_obs) for coke oxidation by 20-30 kJ mol⁻¹. In addition to lowering the activation energy, the presence of catalyst reduced the order in dioxygen from unity for the non-catalyzed reaction to 0.26 at high loading of Ce0.8Zr0.2O2.
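The standard way to obtain E_obs from such temperature-dependent rate constants is an Arrhenius fit of ln k against 1/T. The sketch below shows the calculation on invented data; the rate constants are placeholders, not the study's measurements.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical first-order rate constants measured at several temperatures.
T = np.array([700.0, 750.0, 800.0, 850.0])      # K
k = np.array([1.2e-4, 8.5e-4, 4.6e-3, 2.0e-2])  # s^-1

# Arrhenius: ln k = ln A - E_obs / (R * T); fit ln k against 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
E_obs = -slope * R / 1000.0  # kJ mol^-1
A = np.exp(intercept)        # pre-exponential factor, s^-1
print(f"E_obs ~ {E_obs:.0f} kJ mol^-1, A ~ {A:.2e} s^-1")
```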
Reaction kinetics will be interpreted in light of the results from characterization of the coke and oxide catalysts.
References

Cumming, K.A., and Bohdan W. Wojciechowski. "Hydrogen transfer, coke formation, and catalyst decay and their role in the chain mechanism of catalytic cracking." Catalysis Reviews: Science and Engineering 38 (1996): 101-157.

Cai, Haiyong, Andrzej Krzywicki, and Michael C. Oballa. "Coke formation in steam crackers for ethylene production." Chemical Engineering and Processing 41 (2002): 199-214.

Atribak, Idriss, Agustín Bueno-López, and Avelina García-García. "Thermally stable ceria-zirconia catalysts for soot oxidation by O2." Catalysis Communications 9 (2008): 250-255.

Neeft, John P.A., Olaf P. van Pruissen, Michiel Makkee, and Jacob A. Moulijn. "Catalysts for the oxidation of soot from diesel exhaust gases II. Contact between soot and catalyst under practical conditions." Applied Catalysis B: Environmental 12 (1997): 21-31.
Find out the effect of concentration on the rate of reaction between marble chips and acid.
Introduction
Chemistry Coursework Task

Plan and carry out an investigation to find out the effect of concentration on the rate of reaction between marble chips and acid. Do not take anything away with you from the lab that you write during the lesson!

Background Knowledge

Marble is the chemical calcium carbonate. The general equation for an acid reacting with a carbonate is:

Acid + carbonate → salt + water + carbon dioxide

I have decided to use hydrochloric acid, so this equation becomes:

Hydrochloric acid + calcium carbonate → calcium chloride + water + carbon dioxide

The symbol equation for this reaction is:

2HCl + CaCO₃ → CaCl₂ + H₂O + CO₂

When acid is added to the marble, fizzing will be observed because a gas is being made. If the gas is allowed to escape, the mass will decrease. The gas could be collected by using water displacement or a gas syringe. Provided there is enough acid, the marble chips will disappear, as the calcium chloride is soluble. This type of reaction is described as a neutralisation reaction. As the reaction proceeds the pH will increase (from 1 to 7). The state symbols in this equation were found by using the 'Rubber Handbook'.

Prediction

The input in this investigation is the concentration of hydrochloric acid. The outcome is the rate of reaction. The higher the concentration, the faster the rate of reaction, because there will be more collisions between acid particles and calcium carbonate particles. This is because there are more hydrochloric acid molecules in a higher concentration.
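A quick stoichiometric check of the quantities used later in the plan (0.5 g of chips and 20 cm³ of 5 M acid) can be made from the balanced equation above. The sketch below is illustrative only, using standard molar masses and assuming a molar gas volume of 24 dm³ at room temperature and pressure.

```python
# Expected gas loss from 0.5 g of CaCO3 reacting with excess HCl.
M_CACO3, M_CO2 = 100.09, 44.01  # molar masses, g/mol
V_MOLAR = 24000.0               # molar gas volume, cm3/mol at RTP

mol_caco3 = 0.5 / M_CACO3        # ~0.005 mol of marble
mol_hcl = 0.020 * 5.0            # 20 cm3 of 5 M acid = 0.10 mol
assert mol_hcl >= 2 * mol_caco3  # acid is in large excess

mol_co2 = mol_caco3              # CaCO3 and CO2 are in a 1:1 ratio
print(f"expected mass loss:  {mol_co2 * M_CO2:.2f} g")      # ~0.22 g
print(f"expected gas volume: {mol_co2 * V_MOLAR:.0f} cm3")  # ~120 cm3
```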
Middle
* For the appearance method I found that the reactions were taking too long and that I would not be able to complete the experiment in the time given. * I was able to get a full set of results, to the best accuracy, for the change-in-mass method, so I have come to the conclusion that this is the best method to use. I will now repeat the change-in-mass method, but this time I will use 10 cm³ of hydrochloric acid instead of 20. I found that the percentage error is less for 20 cm³, so I will use this volume for my experiment. I also repeated the experiment with 0.5 g of marble chips instead of 1 g. I found that the percentage error is less for 0.5 g, so I will use this mass for my experiment. N.B. for an experiment to be classified as accurate, the total errors must be less than 10%. The reliability is how small or large the range is. E.g. Exp.1 - 125, 126, 127, avg. 126; Exp.2 - 136, 126, 116, avg. 126. Experiment 1 is more reliable as it has a much smaller range. Plan: First I will measure 0.5 g of small marble pieces on a balance accurate to three decimal places, then I will place the chips into a 100 cm³ conical flask. I will measure 20 cm³ of 5 M hydrochloric acid with a 25 cm³ measuring cylinder, place the conical flask onto the balance accurate to three decimal places, then add the hydrochloric acid and record the starting mass.
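The reliability comparison described above (same mean, different spread) is easy to reproduce; the sketch below simply recomputes the means and ranges of the two example runs.

```python
# Two runs with the same mean but very different spreads; the smaller
# the range, the more reliable the run.
exp1 = [125, 126, 127]
exp2 = [136, 126, 116]

for name, data in [("Exp.1", exp1), ("Exp.2", exp2)]:
    mean = sum(data) / len(data)
    spread = max(data) - min(data)
    print(f"{name}: mean = {mean:.0f}, range = {spread}")
# Exp.1: mean = 126, range = 2   -> reliable
# Exp.2: mean = 126, range = 20  -> unreliable
```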
Conclusion
There is one point on the graph that does not fit the pattern. Whereas the differences between all the points before it gradually decrease, this point goes up; but the error bars for this point were fairly large, so it is an unreliable piece of data. I think the method was suitable and worked well, as overall I received a good set of data. I think my results have given me enough evidence to support my conclusion, but I think that the experiment should be repeated so that I can investigate my original prediction better than in this experiment by changing the surface area of the marble chips. I could also do further work by changing the chemical used, such as magnesium instead of calcium carbonate, to prove that the idea that the higher the concentration, the faster the rate of reaction works in the same way as in this experiment. To further improve this experiment I could conduct it in a place where such an accurate balance is less likely to be affected by foreign materials from the environment. The anomalies in the experiments for the 4 and 5 M concentrations may be caused by the different amounts of surface area available: some of the chips may have had slightly larger surface areas than in the other experiments, which meant that more acid molecules could collide with the marble molecules. With a smaller concentration this would make very little difference, but as the concentration rises to this level there are many more particles to take advantage of the extra surface area provided. It is very hard to keep the surface area the same with the given resources. Andrew Webster
http://www.markedbyteachers.com/gcse/science/find-out-the-effect-of-concentration-on-the-rate-of-reaction-between-marble-chips-and-acid.html
Abiotic attenuation, which involves chemical reactions between contaminants and a soil/sediment constituent, is an important sink for many contaminants in the environment. If engineered correctly, the process can also be an inexpensive, semi-passive approach to control plume migration in soil and groundwater. Under anoxic subsurface conditions, redox-labile chemicals are especially susceptible to reductive attenuation processes. Hence, sites where these contaminants are present have the greatest opportunities for successful application of enhanced abiotic attenuation. Many soil constituents have been identified that can either reduce or catalyze the reduction of these contaminants, including ferrous iron-containing minerals, ferrous iron complex, natural organic matter, and black carbon. However, the reactivity of both the reductants and contaminants can vary by many orders of magnitude, depending on the type and nature of the reactants and the geochemical conditions.
The objective of this project is to develop a methodology for predicting (1) the abiotic reduction rates of munitions compounds in a solid matrix of any given geochemical state, and (2) the longevity and enhancement frequency necessary to control plume migration.
Technical Approach
The linear free energy relationship (LFER) model can quantitatively predict abiotic reduction rate constants of nitro compounds. The approach measures the reduction potential distribution and electron exchange capacity of a soil using a chemical redox titration. However, all LFERs established to date for abiotic redox transformation are based on a single reductant; that is, they can predict contaminant degradation rates only if the identity and concentration of the reductant involved are known.
The project team has tested the model, with remarkable success, using a data set of measured rate constants for six nitrobenzenes that vary over five orders of magnitude in a system containing H2S and dissolved organic matter (H2S/DOM) at pH values from 5.5 to 8.6, whereas presently available LFERs fail. The new feature is that pH + pe, rather than pe alone, is used to quantify the redox potential. The quantity of reductant in the soil is determined by relating the change in pH + pe to the quantity of chemical reductant titrant added in the redox titration. This provides both the LFER parameter (pH + pe) and the soil reductant quantity over the entire range of soil reduction enhancement. The improved model can predict the increase in the reduction rate constant of energetic compounds at various levels of soil enhancement.
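To make the idea concrete, the sketch below shows what such an LFER prediction looks like in code. The slope and intercept are illustrative placeholders, not values from this project; a real calibration would regress measured log k against (pH + pe) for each titrated soil and contaminant.

```python
# Hypothetical LFER sketch: log10(k) = SLOPE * (pH + pe) + INTERCEPT.
# SLOPE and INTERCEPT are assumed, illustrative coefficients only.
SLOPE = -0.85      # assumed sensitivity of log k to the redox parameter
INTERCEPT = 6.2    # assumed offset for a given nitro compound

def predicted_rate_constant(pH: float, pe: float) -> float:
    """Pseudo-first-order reduction rate constant (1/h) from pH + pe."""
    return 10 ** (SLOPE * (pH + pe) + INTERCEPT)

for pe in (2.0, 0.0, -2.0):  # progressively more reducing conditions
    k = predicted_rate_constant(7.0, pe)
    print(f"pH + pe = {7.0 + pe:5.1f} -> k = {k:.3g} 1/h")
```

Lowering pH + pe (a more reduced soil) raises the predicted rate constant, which is the behavior the titration-based enhancement is designed to exploit.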
In this project, the team plans to measure the reaction rate constants of eight model compounds of diverse reactivity in 10+ soils titrated to different redox potentials in order to calibrate and validate the LFER model. They will evaluate the feasibility of enhancing soil reactivity using flow-through soil columns. The model will be used to predict the observed contaminant distributions throughout the column over time.
Benefits
A calibrated and validated predictive model will permit practitioners to develop designs for more cost-effective long-term remediation. The methodology can be directly incorporated in the groundwater models that are used to design and evaluate remediation strategies. The data requirements are not excessive, as the method only requires redox titrations of a number of soil samples. The model can be used to predict the quantity of reductant needed and also to evaluate the reduction rates of the reduction intermediates and of new munitions compounds. Results of this research will help confirm that enhanced abiotic attenuation can be a viable long-term remedial option. (Anticipated Project Completion - 2020)
Publications
Murillo-Gelvez, J., K.P. Hickey, D.M. Di Toro, H.E. Allen, R.F. Carbonaro, and P.C. Chiu. 2019. Experimental Validation of Hydrogen Atom Transfer Gibbs Free Energy as a Predictor of Nitroaromatic Reduction Rate Constants. Environmental Science & Technology, 53(10): 5816-5827.
Di Toro, D.M., K.P. Hickey, H.E. Allen, R.F. Carbonaro, and P.C. Chiu. 2020. Hydrogen Atom Transfer Reaction Free Energy as a Predictor of Abiotic Nitroaromatic Reduction Rate Constants: A Comprehensive Analysis. Environmental Toxicology & Chemistry, 39(9): 1678-1684.
Hickey, K.P., D.M. Di Toro, H.E. Allen, R.F. Carbonaro, and P.C. Chiu. 2020. A Unified Linear Free Energy Relationship for Abiotic Reduction Rate of Nitroaromatics and Hydroquinones Using Quantum Chemically Estimated Energies. Environmental Toxicology & Chemistry, 39(12): 2389-2395.
Cárdenas-Hernández, P.A., K.A. Anderson, J. Murillo-Gelvez, D.M. Di Toro, H.E. Allen, R.F. Carbonaro, and P.C. Chiu. 2020. Abiotic Reduction of 3-Nitro-1,2,4-triazol-5-one (NTO) and the Hematite–Fe(II) Redox Couple, Environmental Science & Technology, 54(19):12191-12201. | https://serdp-estcp.org/Program-Areas/Environmental-Restoration/Contaminated-Groundwater/Persistent-Contamination/ER-2617 |
Investigating factors affecting the rate of photosynthesis | Nuffield Foundation
In this experiment, rhubarb sticks, which contain oxalic acid, are used to reduce and consequently decolourise potassium manganate(VII) solution. The experiment can be used to show how the rate of reaction is affected by surface area or concentration and is available from the Nuffield Foundation [Additional Resource 1], which contains health and safety guidance, especially cautioning against the use of rhubarb leaves, which contain too much oxalic acid and are harmful. To investigate the effect of surface area, cut three 5 cm lengths of rhubarb. Leave one complete and divide the others into two and four pieces respectively. Place the pieces into a beaker containing 50 cm3 of acidified potassium manganate(VII) and start the timer. Once the purple colour disappears, stop the timer. This can be repeated for each set of rhubarb pieces, and more able students may be able to identify the number of pieces as the independent variable and the time taken as the dependent variable in order to plot a graph of the relationship. To investigate the effect of concentration, make an extract of rhubarb by boiling it in a beaker until it falls to pieces. Allow it to cool, then strain and filter the mixture, keeping the solution you have extracted. Then conduct a similar reaction to the first experiment, initially adding one drop of the extract to 50 cm3 of the potassium manganate(VII) solution and timing how long it takes to decolourise. Repeat for 2, 3, 4 and 5 drops, plotting a graph of the results. The concentration of the potassium manganate(VII) solution is not critical for these experiments; it can be made by dissolving a few crystals in 1 M sulfuric acid, giving a light purple colour. By carrying out these experiments students should be able to observe that as the surface area or concentration of the rhubarb increases, so does the rate of the reaction. Higher-ability students may observe (or be prompted) that putting in more drops of the rhubarb extract increases the total volume. You may then like to discuss the implications of this with the students. If the drop volume is small enough compared to the total volume it should not have a significant effect on the relationship observed.
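If students record the decolourisation times, a quick way to analyse them is to convert each time into a relative rate (1/time) before plotting. The numbers below are invented for illustration, not measured values.

```python
import numpy as np

# Illustrative decolourisation times (s) for 1-5 drops of rhubarb extract
# added to acidified potassium manganate(VII); not real measurements.
drops = np.array([1, 2, 3, 4, 5])
time_s = np.array([210.0, 110.0, 72.0, 55.0, 44.0])

rate = 1.0 / time_s                      # relative rate = 1 / time
slope, intercept = np.polyfit(drops, rate, 1)
print("relative rates:", np.round(rate, 4))
print(f"rate ≈ {slope:.4f} * drops + {intercept:.4f}")
```

A roughly linear rate-versus-drops plot supports the expected result that rate increases with concentration; as noted above, the small extra volume per drop is assumed to be negligible.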
Rate of reaction provides a link between the particle model students study in physics at the start of KS4 and how a chemical reaction takes place. Students enjoy practical chemistry, and rate practicals extend students' dexterity in manipulating laboratory equipment such as gas syringes. They are also adaptable for the less well-stocked department, as upturned measuring cylinders are equally effective and cheap to provide in class sets. Data generated in rate experiments are typically reliable enough to analyse mathematically, and cross-curricular links to GCSE mathematics, in particular the gradient at different points on a curve, lend themselves well to team teaching between faculties. Living in the "Rhubarb Triangle", I succumb to any opportunity to get this leafy vegetable (yes, it is considered a vegetable, not a fruit) into my lessons. The rate of reaction experiment in this lesson, using rhubarb, is one of my favourites. Rate of reaction is a key concept at KS4 and requires secure knowledge for students who progress onto A-level. It is also a topic that can be taught very practically and adapted for a range of abilities, and it is particularly suited to extending your gifted and talented students both chemically and mathematically. Students begin by investigating the factors that can affect the rate of a chemical reaction using their own bodies and use their discoveries to suggest ways of speeding up some basic practical reactions. Links with the chemical industry could be discussed in the context of controlling reactions that may be explosively fast, as well as speeding up those reactions that would otherwise cost too much because they are too slow. The topic lends itself well to demonstrations of impressive catalysis, such as the Genie in a Bottle, as well as student-led discovery learning in the sequence of practicals in the main part of the lesson. Most rate practicals can easily be adapted to generate data that can be plotted as a graph, extending students' mathematical understanding to include the changing gradient of a curve.
Learners often approach this topic with some reservations, as their interest in plant science is limited and as a result they find it difficult to engage with the content. Many of the concepts are not visible, so to generate interest in the topic, images, video clips and practical investigation should be used. Learners can conduct research and then plan investigations into the environmental effects on the rate of photosynthesis or the factors affecting enzyme activity, leading to a better understanding. | http://agilesolutionsgroup.com/nuffield-foundation-factors-affecting-photosynthesis.html
If you study general chemistry, chances are you already live with its difficulties. There is no doubt that a huge number of students around the world study the subject.
Though it is an interesting subject, it is certainly not free of problems. One of the topics students find hardest, without a doubt, is chemical kinetics.
It is one of the most important chapters of the course, but despite its importance it leaves no stone unturned in creating confusion for students.
That is one reason why seeking help is one of the most sensible things students can do.
Chemical kinetics:
Every substance can take part in a chemical reaction, and each substance gives its own individual result. It is natural to ask why these variations arise in the first place.
Chemical kinetics answers this question: it is the study of why reaction results and rates vary, and of the factors that play an important role in those variations.
There can be various examples of this. A student can easily face a question like: products, time and concentration data were collected and plotted as shown here:
| [A] (M) | t (s) |
| ------- | ----- |
| 0.700   | 0.0   |
| 0.520   | 30    |
| 0.482   | 60    |
| 0.360   | 90    |
All you have to find is the reaction rate, the rate constant, and the units of the rate constant.
Getting the correct answer is difficult if one fails to understand the chapter itself. To understand the chapter, students must first know which factors are involved.
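One hedged way to attack such a question is to test the simplest rate laws against the tabulated data and keep whichever fits best. The sketch below compares a zero-order fit ([A] linear in t) against a first-order fit (ln[A] linear in t); with data this noisy, neither fit is perfect, so the extracted k should be quoted with that caveat.

```python
import numpy as np

t = np.array([0.0, 30.0, 60.0, 90.0])          # s
A = np.array([0.700, 0.520, 0.482, 0.360])     # mol/L

def linfit(x, y):
    """Least-squares line; returns slope, intercept and r^2."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return slope, intercept, r2

k0, _, r2_0 = linfit(t, A)          # zero order:  [A] = [A]0 - k*t
k1, _, r2_1 = linfit(t, np.log(A))  # first order: ln[A] = ln[A]0 - k*t

print(f"zero order : k = {-k0:.2e} M/s  (r^2 = {r2_0:.3f})")
print(f"first order: k = {-k1:.2e} 1/s  (r^2 = {r2_1:.3f})")
```

The units follow from the winning order: M/s for zero order, 1/s for first order.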
The factors that affect the reaction rate:
The following is a list of factors that affect the reaction rate of a substance:
- Concentration:
The concentration of the substance matters a great deal when determining the reaction rate. The more concentrated a substance is, the more closely packed its molecules are; they collide more often and therefore react more.
- The state:
The physical state of the substance also matters, because the reaction depends on the area of contact between the reactants. For gases and immiscible liquids, vigorous shaking or mixing brings the molecules together so they can collide; for solids, breaking the material into smaller pieces exposes more surface area, so the reaction is faster.
- The catalysts:
These play an important role in determining the rate. A catalyst is a substance that changes the speed of a reaction without being consumed, usually making it faster (substances that slow a reaction are called inhibitors). That is why this factor matters.
- The temperature and the pressure:
These two factors also contribute to the reaction rate. Raising the temperature makes the molecules move faster, and raising the pressure (for gases) packs them closer together, so collisions become more frequent and the reaction speeds up.
If you ever come across questions that start with "products, time and concentration data were collected and plotted as shown here", you should immediately see that solving the problem becomes easy once the chapter is properly understood. | https://myhomeworkhelp.com/chemical-kinetics-how-the-factors-play-an-important-part-in-the-same/
Acetylcholinesterase (AChE) (3.1.1.7) is one of the best studied enzymes found in scientific literature, partly due to its physiological role in the neurotransmission process and also to the remarkably high efficiency displayed by AChE, which has a large turnover number. AChE is responsible for the cleavage of acetylcholine (ACh) within the neuromuscular junction and the synaptic cleft in the central nervous system of both vertebrates and invertebrates. It is highly conserved along evolution. Due to its importance in insects, it is a target for many substances used as pesticides. AChE inhibitors have pharmaceutical and commercial importance, and side effects should be an important issue to be considered.
Carbaryl is a carbamate organic compound and classical noncompetitive inhibitor of AChE, with widespread use as an insecticide. Acute intoxication with this pesticide class primarily affects agricultural workers, causing short- and long-term health damage. Clinical conditions include symptoms referred to altogether as cholinergic syndrome, embracing among others sialorrhea, diarrhea, bronchial hypersecretion and, in severe cases, mental disorder and even death.
Carbaryl intoxication may be confirmed by assaying AChE activity. Spectroscopic assays comprise the standard approach to determine parameters of AChE activity and are widely used in laboratories around the world. Ellman's method is the usual way to obtain the kinetic values of AChE indirectly, using an artificial substrate, acetylthiocholine (ATCh). As for several enzymes, the product and substrate of AChE are not suitable for direct evaluation by UV-VIS absorbance or fluorescence; the cleavage of ACh therefore cannot be followed by spectrophotometric techniques as the reaction proceeds. On the other hand, calorimetric assays by isothermal titration calorimetry (ITC), although not as easily performed, return considerably accurate values. The advantages of ITC are the use of enzyme and substrate only, without any additional compound to generate a measurable signal, and the possibility of analyzing the reaction of AChE with its natural substrate.
ITC measures the heat exchange in a physical-chemical process and is used typically for binding assays; however, this technique also allows the determination of enzyme kinetics. When a reaction occurs in the assay cell, there is liberation or absorption of heat by the system inside the cell. This causes a drop (exothermic reaction) or increase (endothermic reaction) of power supplied by the electric devices that maintain the temperature inside the cell. This generates a thermogram showing the heat flow (μcal/s) applied by the equipment.
When comparing the initial reaction rates (v0) with the initial substrate concentration, a rectangular hyperbola arises for most of the enzymes studied; these are called Michaelis-Menten (M-M) enzymes. In the literature, linear approaches to evaluate the kinetics of M-M enzymes are widely used. However, this linearization lacks precision, primarily because results obtained at low substrate concentrations overwhelmingly affect the fitted parameters.
The present study evaluates the kinetics of AChE by ITC, which allowed the comparison of its activity with the natural (ACh) and the artificial (ATCh) substrates. To achieve better precision, the integrated form of the Michaelis-Menten equation was used to determine the experimental kinetic values. Additionally, the effect of the pesticide carbaryl, a well-known inhibitor, was also assessed and compared with the conventional methods.
2. Experiment
2.1. Materials
The enzyme acetylcholinesterase, AChE (EC 3.1.1.7), extracted from the electric eel (Electrophorus electricus), lyophilized, was obtained from Sigma-Aldrich®. The substrates used were acetylcholine chloride (A6625) and acetylthiocholine chloride (A5626), both obtained from Sigma-Aldrich®.
2.2. Enzyme
The concentration of the enzyme stock solution was determined by spectrophotometric assay at 280 nm, using ε = 125,730 M−1∙cm−1, and the result, confirmed by a colorimetric assay using Ellman's method, was 3.0 mM (±0.50). The concentration of use, 30.0 pM, was the same for all experiments and was obtained by dilution from that stock. All assays were performed in Tris-HCl buffer 0.05 M with ionic strength 0.148 M by the addition of NaCl 0.05 M and MgCl2 0.01 M. The buffer with the enzyme also contained 0.1 mg/L of ultrapure bovine serum albumin (Sigma-Aldrich). The experiments were performed at pH 7.4 and 37˚C (310.15 K), except where otherwise stated.
2.3. Substrates and Inhibitor
The substrates ACh and ATCh were prepared and stocked at 0.1 M, and diluted for use in Tris-HCl buffer 0.05 M, pH 7.4 at 37˚C, with MgCl2 10 mM and NaCl 100 mM. Carbaryl analytical standard (brand Sevin®, 99.6%) was stored at a concentration of 10 M in a solution of methanol 12.38 M in deionized water. The final concentration of methanol was 0.1 mM in each sample; the same concentration was added to control solutions.
2.4. ITC Assays
Assays were performed in an isothermal titration microcalorimeter VP-ITC (MicroCal® GE). Periodic calibration of the device was conducted by the author and by the laboratory team using the procedure described in the equipment manual provided by GE; the default values are described in the literature. The enthalpy was measured by adding the substrates (4.0 mM ATCh and 10 mM ACh) with 15 successive injections of 8.0 μL into the sample cell in the presence of AChE, with an interval of 120 s between the injections. The reaction was buffered in Tris-HCl buffer 0.05 M, NaCl 0.05 M and MgCl2 0.01 M (pH 7.4) at a temperature of 310.15 K. Control experiments were conducted in the absence of AChE.
2.5. Enzymatic Assay on ITC
Experimental solutions were degassed with a vacuum pump (ThermoVac, MicroCal®) and allowed to reach thermal equilibrium for 5 min prior to each experimental run. The heating reference was 30 μcal/s, and the stirring speed was 215 rpm. To analyze the substrate hydrolysis, sequential injections of 10 mM ACh were made at intervals of 120 s, with volumes of 1, 2, 4, 6, and then 8 μL repeated until the end of the assay. The same protocol was used for the ATCh assay.
Todd and Gomez in 2001 developed a method for the direct measurement of the rate of product formation from dq/dt, the heat flow per unit time, which is proportional to the power variation supplied by the ITC device to the system analyzed, as follows in Equation (1):

dq/dt = ΔrH˚′ · V · d[P]/dt (1)
where [P] is the molar concentration of product generated within a certain volume (V). The power variation provided by the temperature controller maintains the temperature at the desired value (310.15 K). By rearranging Equation (1), the reaction rate can be determined from the change of power provided by the VP-ITC, as the rate of heat generated by the enzyme (dq/dt) is equivalent to the variation in instrumental thermal power divided by the reaction enthalpy (ΔrH˚′) and volume. Thus, in Equation (2):
rate = d[P]/dt = (1 / (ΔrH˚′ · V)) · (dq/dt) (2)
The variation of enthalpy can be obtained from the area defined by the power curve until it returns to the baseline, after the whole substrate (STotal) has been consumed; this area is equal to the total heat of reaction, as shown in Equation (3):

ΔrH˚′ = (1 / ([S]Total · V)) · ∫ (dq/dt) dt (3)
After these steps, the values of rate and substrate concentration could be used to determine the kinetic parameters kcat, the first-order rate constant, and Km, the Michaelis-Menten constant, by applying the Michaelis-Menten equation, Equation (4). It is also possible to determine the thermodynamic parameters of activation by the Eyring equation, Equation (5):

rate = kcat · [E] · [S] / (Km + [S]) (4)

kcat = (kB · T / h) · e^(−Δ‡H/RT) · e^(Δ‡S/R) (5)
where kB is the Boltzmann constant (1.3805 × 10−23 J/K), h the Planck’s constant (6.6256 × 10−34 J/s), Δ‡H and Δ‡S the enthalpy and entropy of activation, T the temperature and R the gas constant (8.3145 J/K mol) .
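The chain from thermogram to kinetic constants (Equations (1)-(4)) can be sketched in a few lines of Python. The thermogram below is simulated, and the cell volume, enthalpy and true constants are illustrative assumptions, not values from this work; with real data, `power` would be the measured baseline shift.

```python
import numpy as np
from scipy.optimize import curve_fit

V = 1.4e-3        # assumed cell volume (L)
dH = -25_000.0    # assumed apparent reaction enthalpy (J/mol), Eq. (3)
E = 30e-12        # enzyme concentration (M)

# Simulated thermogram: power (W) at each remaining substrate level (M)
S = np.linspace(1e-5, 4e-3, 40)
kcat_true, Km_true = 8000.0, 5e-4
power = dH * V * kcat_true * E * S / (Km_true + S)

rate = power / (dH * V)            # Eq. (2): v = (dq/dt) / (ΔH·V)

def mm(S, kcat, Km):               # Eq. (4)
    return kcat * E * S / (Km + S)

(kcat, Km), _ = curve_fit(mm, S, rate, p0=(1e3, 1e-3))
print(f"kcat = {kcat:.0f} 1/s, Km = {Km * 1e3:.2f} mM")
```

The fit recovers the constants used to generate the curve, which is the sanity check one would run before trusting the same pipeline on measured thermograms.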
2.6. Integrated Michaelis-Menten
Since its formulation, the Michaelis-Menten equation can be written as a superposition of linear and logarithmic functions obtained by integration. In an enzyme kinetic method, it is important to observe the concentrations of substrates and products throughout the reaction, from start to finish. The integrated M-M equation is a more appropriate approach for the enzymatic reaction, since kinetic data are usually expressed as product or substrate concentration as a function of time. The main advantages of the integrated Michaelis-Menten equation include the absence of any need to differentiate data in order to obtain initial velocities, and the ability to determine the kinetic values with high accuracy.
Nonetheless, the function is implicit, with several variables, and can be solved by numerical methods with the help of basic software programs, as in Equation (6):
t = (1 / (kcat · [Enzyme])) · (Pt + Km · ln(S0 / (S0 − Pt))) (6)
where t is the observed time, S0 is the initial substrate concentration and Pt is the total product formed up to time t by a given enzyme concentration, [Enzyme]. For a reaction with a competitive inhibitor, the inhibition constant, Ki, can be calculated knowing the inhibitor concentration, I, and the turnover number, kcat, by Equation (7):
t = (1 / (kcat · [Enzyme])) · (Pt + Km · (1 + I/Ki) · ln(S0 / (S0 − Pt))) (7)
2.7. Analysis
The calorimetry data were analyzed with Origin 9.0, and graphs were generated with Origin 9.0, GraphPad Prism 6.01 or Microsoft Office Excel 2013.
The experiments were performed in triplicate. For each result, a statistically weighted analysis was used, which took into account the errors associated with each variable. The F-test with Akaike's information criterion (AIC) was used to verify the similarity between regressions. Normality was tested using the Shapiro-Wilk test, which always gave P-values above 0.05, and Student's t-test was used to compare two groups. Linear regressions had a weight of 1/y² adjusted for each analysis. Data are expressed as mean (±S.E.M.).
3. Results
3.1. Ionization Enthalpy
The value of the ionization enthalpy ΔiH˚ was obtained from the literature. Thus, the apparent enthalpy of the reaction, ΔrH˚′, is the sum of the enthalpy of reaction and the enthalpy of ionization for a given number of moles. As a result, the value ΔiH˚ = −12.00 (±0.66) kJ/mol was used.
The hydrolysis of ACh by AChE gives ΔrH˚ = −29.76 (±2.19) kJ/mol (Figure 1(a)). Similarly, the hydrolysis of ATCh gives ΔrH˚ = −24.81 (±1.70) kJ/mol (Figure 1(b)), slightly lower in magnitude than the value observed with ACh.
3.2. Activation Energy
The linear analysis of the Eyring equation used 1/y² as the weighting, to decrease the sum of squared errors, since the y-axis values, ln(kcat/T), carry more experimental error than the x-axis values, 1/T. The results from calorimetric assays at different temperatures can be seen in Table 1, notably the pronounced similarity in Δ‡G˚ values exhibited by both substrates.

Figure 1. Enthalpy obtained from ITC for both substrates. In (a), the isothermal curve for ACh. The standard enthalpy was obtained using 4 mM ACh, in two different injection protocols in the same experiment. The system was allowed to reach thermal equilibrium at 37˚C. 120 s after the equilibration period, successive injections of 8 μL of 40.0 mM ACh were made every 10 min. The enthalpy was calculated by Equation (3); a baseline corresponding to the heat of dilution was subtracted from the data, and the result was corrected to the AChE concentration of 30.0 pM. In (b), the isothermal curve for ATCh: 8 μL injections of 10.0 mM ATCh were titrated into the cell containing 30.0 pM AChE in buffer.
The experiment was conducted at different temperatures; thus, it was possible to evaluate the increase in the kcat value with rising temperature. Student's t-test showed similar Δ‡G˚ results for both substrates (p = 0.85), owing to their structural resemblance. Nevertheless, the values for Δ‡H˚ and TΔ‡S˚ are significantly distinct from each other, indicating a differently driven reaction for acetylcholine (Figure 2(a)) compared with acetylthiocholine (Figure 2(b)).
3.3. Kinetic Parameters
The simultaneous nonlinear regression analysis (SNLR) fits the data concurrently to the M-M equation; the resulting inhibition kinetic parameters are shown in Figure 3, together with the kcat and Km values obtained. kcat can be defined as the number of molecules converted by an enzyme per unit time, i.e. a first-order constant, while Km is a ratio of the reverse and forward reaction rate constants. Enzymological analysis is typically examined from the point of view of 'catalytic efficiency', wherein the ratio kcat/Km is seen as a useful indicator of the relative processing power of an enzyme. In the presence of different concentrations of carbaryl, a classic competitive inhibitor, it is possible to calculate the value of Ki. Ki may be thought of as the amount of inhibitor required to slow the reaction; the smaller this value, the more effective the inhibitor. The results for ACh and ATCh are shown in the graphs of Figure 3. The kcat values were statistically equal across analyses by the F-test with Akaike's information criterion (AIC): for ACh the probability of equality across all experiments was >88.21%, and for ATCh all probabilities were >61.77%.
Table 1. Values of kcat and activation thermodynamic parameters obtained in ACh and ATCh catalysis kinetics at different temperatures.
Figure 2. Eyring linear analysis of catalytic constants obtained at different temperatures. Analysis was performed at five temperatures for ACh and four for ATCh, fitted to the Eyring equation, Equation (5), from which the thermodynamic activation parameters Δ‡H and Δ‡S can be obtained. Graph (A) shows the linear regression for ACh, with correlation coefficient r = −0.9926. Graph (B) shows the fit for ATCh, with r = −0.9747.
3.4. Kinetic Parameters by Integrated Michaelis-Menten Equation
The integrated form of the Michaelis-Menten equation simultaneously analyzes the various concentrations of product formed over time directly against the initial substrate concentration after each injection. This is an iterative nonlinear analysis fitted by the method of least squares, returning the kcat and Km values. For this, a table is made in a simple program, such as Microsoft Office Excel®, so that all values of t, S0, Pt, and [Enzyme] are aligned in columns side by side. Using the Solver tool within the program, the analysis can be performed using Equations (6) and (7). Results are shown in Table 2.
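The same Solver-style fit can be reproduced outside Excel. The sketch below is a minimal Python equivalent, not the authors' actual spreadsheet: it generates a synthetic progress curve from Equation (6) and recovers kcat and Km by least squares. For inhibited runs one would replace Km with Km·(1 + I/Ki), as in Equation (7), and fit Ki as well.

```python
import numpy as np
from scipy.optimize import least_squares

E, S0 = 30e-12, 4e-3                  # mol/L, illustrative values
kcat_true, Km_true = 8000.0, 5e-4     # constants used to simulate data
Pt = np.linspace(0.0, 0.95 * S0, 25)  # product formed (M)
t_obs = (Pt + Km_true * np.log(S0 / (S0 - Pt))) / (kcat_true * E)

def residuals(params):
    kcat, Km = params
    t_pred = (Pt + Km * np.log(S0 / (S0 - Pt))) / (kcat * E)  # Eq. (6)
    return t_pred - t_obs

fit = least_squares(residuals, x0=(1e3, 1e-3))
kcat, Km = fit.x
print(f"kcat = {kcat:.0f} 1/s, Km = {Km * 1e3:.2f} mM")
```

With real data, Pt at each time point would come from the cumulative heat, Pt = q(t)/(ΔrH˚′·V).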
4. Discussion
From the data obtained, the values of Δ‡G˚ for ACh and ATCh are very similar (p = 0.8426), which is plausible since the substrates are structurally similar and therefore require similar chemical steps to reach the transition state. However, the replacement of an oxygen by a sulfur in ATCh is reflected in the difference in the activation enthalpy between the substrates (p = 0.0001), with Δ‡H˚ for ACh almost twice that observed for ATCh. This may be due to greater enzyme specificity for the transition state of the natural substrate than for the modified one. Although the activation entropy, Δ‡S˚, increases in both cases, comments on this variation must be careful: even in a general base catalysis, as occurs in enzyme deacylation, the transition state has considerable spatial freedom, and the entropy change in the activated state depends not only on factors intrinsic to the reaction but also on the environment in which it occurs.
Figure 3. Nonlinear regression analysis of the M-M equation, Equation (4), for the substrates ACh and ATCh. Kinetic graphs at 310.15 K in the presence and absence of inhibitor, alongside their ITC graphs. In (a), SNLR of the M-M equation for ACh; (b), ITC assays for ACh; (c), SNLR of the M-M equation for ATCh; and (d), ITC assays for ATCh. Inhibitor concentrations are in μmol/L. Calorimetric assays were performed with successive injections spaced 150 s apart, containing increasing volumes (1, 2, 4, 6, and then 8 μL until the end) of 20 mM ACh or ATCh, after the equipment had reached thermal equilibrium at 37˚C. The change in thermal power was acquired from the baseline shift and transformed to rate using Equation (2). The rates were then used to obtain kinetic values through the M-M equation, Equation (4). The SNLR analysis was performed concomitantly, yielding the Ki value as well as Km and kcat.
Table 2. Results from mathematical analysis using the integrated Michaelis-Menten equation.
The activation enthalpy reported in the work of Cabib and Wilson (1956) was between 140 and 190 kJ/mol; however, they determined that the activation energy changes with temperature. As shown, the activation entropy is high, indicating considerable disorder between enzyme and substrate in the transition state, possibly a device used by AChE to increase the rate of reaction. The entropic factor is also prominent, which might at first glance suggest that the reaction is sluggish, which is not the case.
The reaction rate with ATCh is lower than with ACh in all analyses, possibly because the enzyme interacts more with the transition state of its natural substrate than with that of the synthetic substrate. As already mentioned, the catalytic efficiency of an enzyme is given by kcat/Km, which approaches the value of the second-order constant, k1, when the limiting step of the reaction is the collision frequency between substrate and enzyme. This can only be assumed when the kcat value is large enough not to be considered the limiting step; in the approximation where [S] is very small, the rate constant k−1 for dissociation of the enzyme-substrate complex back to free enzyme and substrate is negligible.
The values from the integrated M-M analysis are similar to those obtained by nonlinear regression for both substrates. However, since only one curve was evaluated in each assay for the integrated M-M analysis, the deviation arising from experimental fluctuation cannot be taken into account.
Analyzing all the results, a similarity can be verified between the Km values for both substrates in the absence of inhibitor, reflecting the enzyme's similar affinities for the two substrates. As Km is an apparent measure relating the rate of enzyme-substrate complex formation to the rate of substrate catalysis into product, a low Km value means that a lower substrate concentration is required to achieve the maximum kinetic rate. Similar Km values indicate similar affinities, but only part of the problem is solved by this interpretation.
The values obtained for Ki show that the inhibition constants of carbaryl are low and similar for both substrates. This may reflect how carbaryl acts as an inhibitor: it enters the active site of the enzyme, interacts with the aromatic amino acids of the anionic site, and blocks the entry of the substrate into the enzyme's gorge. Considering Ki as the amount of inhibitor required to slow the reaction, the lower the value, the more effective the inhibitor. Thus, since the specificity of AChE for ACh is high, higher inhibitor concentrations are needed to displace ACh from the active site; with ATCh the opposite occurs, and a lower concentration of inhibitor is already able to keep the active site blocked.
Remarkably, the values from ordinary nonlinear regression are very similar to those obtained using the integrated M-M analysis for both substrates. In recent enzymology, the integrated form of the classical Michaelis-Menten equation allows the catalytic constants to be reached directly from the concentration of product formed as a function of time. However, the solution found by direct integration of the equation is implicit, and the mathematical process to obtain it is more laborious. The method used in this work is supported elsewhere: the concentrations of product formed from near-zero time are measured as a function of time, without the need for differentiation to determine velocities. The information required is the same as for a more classical assay, but a single analysis can determine the catalytic constants and, by varying the inhibitor concentration, the inhibition constant as well.
5. Conclusion
Despite the difference in the structures of the two substrates, the constants change only slightly, because of their distinct interactions with the enzyme. This is also evident in the different Ki values: although very close, the values are higher for ACh, showing that more inhibitor is needed to displace the natural substrate from the enzyme's active site. An alternative analysis that can save time and return more precise values is the integrated form of the Michaelis-Menten equation; although the implicit form used in this work requires more robust computation, the values obtained are comparable with those from the classical, explicit form of the equation. Thus, ITC robustly illuminates the kinetic and thermodynamic parameters of an enzymatic reaction, even more so with a more precise mathematical method. Although the traditional spectrophotometric assay is easier to use, this work shows comparable kinetic values between ACh and ATCh; while not precisely identical, the approximation is acceptable provided the associated error is reported.
Acknowledgements
This study is supported by Brazilian Ministry of Health (n. 17217.9850001/12-025). The authors posthumously thank Professor Marcelo M. Santoro for his immeasurable help in this work.
Rotundo, R.L. (2003) Expression and Localization of Acetylcholinesterase at the Neuromuscular Junction. Journal of Neurocytology, 32, 743-766.
https://doi.org/10.1023/B:NEUR.0000020621.58197.d4
Wilson, I. and Harrison, M. (1961) Turnover Number of Acetylcholinesterase. Journal of Biological Chemistry, 236, 2292-2295.
Perrier, A.L., Massoulié, J. and Krejci, E. (2002) PRiMA: The Membrane Anchor of Acetylcholinesterase in the Brain. Neuron, 33, 275-285.
https://doi.org/10.1016/S0896-6273(01)00584-0
Dvir, H., Silman, I., Harel, M., Rosenberry, T.L. and Sussman, J.L. (2010) Acetylcholinesterase: From 3D Structure to Function. Chemico-Biological Interactions, 187, 10-22.
https://doi.org/10.1016/j.cbi.2010.01.042
Soreq, H. and Seidman, S. (2001) Acetylcholinesterase—New Roles for an Old Actor. Nature Reviews Neuroscience, 2, 294-302.
https://doi.org/10.1038/35067589
Pohanka, M., Hrabinova, M., Kuca, K. and Simonato, J.-P. (2011) Assessment of Acetylcholinesterase Activity Using Indoxylacetate and Comparison with the Standard Ellman’s Method. International Journal of Molecular Sciences, 12, 2631-2640.
https://doi.org/10.3390/ijms12042631
King, A.M. and Aaron, C.K. (2015) Organophosphate and Carbamate Poisoning. Emergency Medicine Clinics of North America, 33, 133-151.
https://doi.org/10.1016/j.emc.2014.09.010
Fantke, P., Friedrich, R. and Jolliet, O. (2012) Health Impact and Damage Cost Assessment of Pesticides in Europe. Environment International, 49, 9-17.
Bretaud, S., Toutant, J.P. and Saglio, P. (2000) Effects of Carbofuran, Diuron, and Nicosulfuron on Acetylcholinesterase Activity in Goldfish (Carassius auratus). Ecotoxicology and Environmental Safety, 47, 117-124.
https://doi.org/10.1006/eesa.2000.1954
Ellman, G.L., Courtney, K.D., Andres, V., Francisco, S. and Featherstone, R.M. (1961) A New and Rapid Colorimetric Determination of Acetylcholinesterase Activity. Biochemical Pharmacology, 7, 88-95.
https://doi.org/10.1016/0006-2952(61)90145-9
Riener, C.K., Kada, G. and Gruber, H.J. (2002) Quick Measurement of Protein Sulfhydryls with Ellman’s Reagent and with 4,4’-Dithiodipyridine. Analytical and Bioanalytical Chemistry, 373, 266-276.
https://doi.org/10.1007/s00216-002-1347-2
Freyer, M.W. and Lewis, E.A. (2008) Isothermal Titration Calorimetry: Experimental Design, Data Analysis, and Probing Macromolecule/Ligand Binding and Kinetic Interactions. Methods in Cell Biology, 84, 79-113.
https://doi.org/10.1016/S0091-679X(07)84004-0
Bianconi, M.L. (2007) Calorimetry of Enzyme-Catalyzed Reactions. Biophysical Chemistry, 126, 59-64.
https://doi.org/10.1016/j.bpc.2006.05.017
Leavitt, S. and Freire, E. (2001) Direct Measurement of Protein Binding Energetics by Isothermal Titration Calorimetry. Current Opinion in Structural Biology, 11, 560-566.
https://doi.org/10.1016/S0959-440X(00)00248-7
Johnson, K.A. and Goody, R.S. (2011) The Original Michaelis Constant: Translation of the 1913 Michaelis-Menten Paper. Biochemistry, 50, 8264-8269.
https://doi.org/10.1021/bi201284u
Leatherbarrow, R.J. (1990) Using Linear and Non-Linear Regression to Fit Biochemical Data. Trends in Biochemical Sciences, 15, 455-458.
https://doi.org/10.1016/0968-0004(90)90295-M
Wadso, I. and Goldberg, R.N. (2001) Standards in Isothermal Microcalorimetry (IUPAC Technical Report). Pure and Applied Chemistry, 73, 1625-1639.
https://doi.org/10.1351/pac200173101625
Todd, M.J. and Gomez, J. (2001) Enzyme Kinetics Determined Using Calorimetry: A General Assay for Enzyme Activity? Analytical Biochemistry, 296, 179-187.
https://doi.org/10.1006/abio.2001.5218
Lonhienne, T., Gerday, C. and Feller, G. (2000) Psychrophilic Enzymes: Revisiting the Thermodynamic Parameters of Activation May Explain Local Flexibility. Biochimica et Biophysica Acta, 1543, 1-10.
https://doi.org/10.1016/S0167-4838(00)00210-7
Goudar, C.T., Sonnad, J.R. and Duggleby, R.G. (1999) Parameter Estimation Using a Direct Solution of the Integrated Michaelis-Menten Equation. Biochimica et Biophysica Acta, 1429, 377-383.
https://doi.org/10.1016/S0167-4838(98)00247-7
Golicnik, M. (2013) The Integrated Michaelis-Menten Rate Equation: DéJà vu or vu jàdé? Journal of Enzyme Inhibition and Medicinal Chemistry, 28, 879-893.
https://doi.org/10.3109/14756366.2012.688039
Bezerra, R.M.F. and Dias, A.A. (2007) Utilization of Integrated Michaelis-Menten Equation to Determine Kinetic Constants. Biochemistry and Molecular Biology Education: A Bimonthly Publication of the International Union of Biochemistry and Molecular Biology, 35, 145-150.
https://doi.org/10.1002/bmb.32
Bezerra, R.M.F., Fraga, I. and Dias, A.A. (2013) Utilization of Integrated Michaelis-Menten Equations for Enzyme Inhibition Diagnosis and Determination of Kinetic Constants Using Solver Supplement of Microsoft Office Excel. Computer Methods and Programs in Biomedicine, 109, 26-31.
https://doi.org/10.1016/j.cmpb.2012.08.017
Das, Y., Brown, H.D. and Chattopadhyay, S.K. (1985) Enthalpy of Acetylcholine. Biophysical Chemistry, 23, 105-114.
https://doi.org/10.1016/0301-4622(85)80068-5
Fukada, H. and Takahashi, K. (1998) Enthalpy and Heat Capacity Changes for the Proton Dissociation of Various Buffer Components in 0.1 M Potassium Chloride. Proteins: Structure, Function and Genetics, 33, 159-166.
https://doi.org/10.1002/(SICI)1097-0134(19981101)33:2<159::AID-PROT2>3.0.CO;2-E
Goldberg, R.N. (1999) Thermodynamic Quantities for the Ionization Reactions of Buffers. Journal of Physical and Chemical Reference Data, 31, 231.
https://doi.org/10.1063/1.1416902
Schnell, S. (2000) Enzyme Kinetics at High Enzyme Concentration. Bulletin of Mathematical Biology, 62, 483-499.
https://doi.org/10.1006/bulm.1999.0163
Cornish-Bowden, A. and Cárdenas, M.L. (2010) Specificity of Non-Michaelis-Menten Enzymes: Necessary Information for Analyzing Metabolic Pathways. The Journal of Physical Chemistry B, 114, 16209-16213.
https://doi.org/10.1021/jp106968p
Forsberg, A. and Puu, G. (1984) Kinetics for the Inhibition of Acetylcholinesterase from the Electric Eel by Some Organophosphates and Carbamates. European Journal of Biochemistry, 140, 153-156.
https://doi.org/10.1111/j.1432-1033.1984.tb08079.x
Yung-Chi, C. and Prusoff, W.H. (1973) Relationship between the Inhibition Constant (KI) and the Concentration of Inhibitor Which Causes 50 Per Cent Inhibition (I50) of an Enzymatic Reaction. Biochemical Pharmacology, 22, 3099-3108.
https://doi.org/10.1016/0006-2952(73)90196-2
Walsh, S. and Diamond, D. (1995) Non-Linear Curve Fitting Using Microsoft Excel Solver. Talanta, 42, 561-572.
https://doi.org/10.1016/0039-9140(95)01446-I
Mulholland, A.J. (2016) Dispelling the Effects of a Sorceress in Enzyme Catalysis. Proceedings of the National Academy of Sciences, 113, 2328-2330.
https://doi.org/10.1073/pnas.1601276113
Wilson, I.B. and Cabib, E. (1956) Acetylcholinesterase-Enthalpies and Entropies of Activation. Journal of the American Chemical Society, 78, 202-207.
https://doi.org/10.1021/ja01582a056
Eisenthal, R., Danson, M.J. and Hough, D.W. (2007) Catalytic Efficiency and kcat/KM: A Useful Comparator? Trends in Biotechnology, 25, 247-249. | https://m.scirp.org/papers/74612
25.0 cm3 of this solution was titrated against 0.1 mol dm−3 HCl, and 24.5 cm3 of the acid were required. Calculate the value of x, given the equation: Na2CO3 + 2HCl → 2NaCl + CO2 + H2O. 6. 25 cm3 of a sample of vinegar (CH3COOH) was pipetted into a volumetric flask and the volume was made up to 250 cm3. This solution was placed in a burette, and 13.9 cm3 were required to neutralise 25 cm3 of 0.1 mol dm−3 NaOH. Calculate the molarity of the original vinegar solution and its concentration in g dm−3, given that it reacts with NaOH in a 1:1 ratio.
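Problem 6 reduces to a few lines of mole bookkeeping, sketched below; the 10-fold dilution factor (25 cm3 made up to 250 cm3) and M(CH3COOH) = 60 g/mol are the only inputs beyond the titration figures.

```python
# Problem 6: 13.9 cm3 of the diluted vinegar neutralised 25 cm3 of
# 0.1 mol/dm3 NaOH in a 1:1 ratio.
n_naoh = 0.1 * 25.0 / 1000.0             # mol NaOH = mol CH3COOH
c_diluted = n_naoh / (13.9 / 1000.0)     # mol/dm3 in the diluted sample
c_original = c_diluted * (250.0 / 25.0)  # undo the 10-fold dilution
mass_conc = c_original * 60.0            # g/dm3, M(CH3COOH) = 60 g/mol
print(f"{c_original:.2f} mol/dm3 = {mass_conc:.0f} g/dm3")
```

This gives roughly 1.8 mol/dm3, about 108 g/dm3, which is worth sanity-checking against the expected strength of the vinegar sample.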
Experiment 8: Raman Spectroscopy. Objective: To utilize Raman spectroscopy as an analytical chemistry tool to determine (i) the composition of an unknown chloroform/benzene mixture and (ii) the amount of ethanol in vodka. Pre-lab questions: 1) What is a calibration curve and how would you go about constructing one? A calibration curve shows the response of an analytical method to known quantities of an analyte. To construct a calibration curve, we first prepare known samples of the analyte covering the range of concentrations expected for the unknowns and measure the response of the analytical procedure to these standards to generate signal data. After the measurements are done, a linear graph of the signal data against analyte concentration is plotted.
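In code, a calibration curve is just a linear fit followed by an inversion. The standards below are invented for illustration; real values would be measured Raman peak areas for each ethanol standard.

```python
import numpy as np

# Illustrative standards: ethanol content vs Raman peak signal (a.u.).
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0])      # % ethanol
signal = np.array([0.02, 1.05, 1.98, 3.10, 4.02])   # hypothetical data

slope, intercept = np.polyfit(conc, signal, 1)       # fit the curve
unknown_signal = 3.55                                # vodka sample
unknown_conc = (unknown_signal - intercept) / slope  # invert the fit
print(f"signal ≈ {slope:.3f}*conc + {intercept:.3f}; "
      f"unknown ≈ {unknown_conc:.1f} % ethanol")
```

Quoting the unknown is only valid if its signal falls inside the calibrated range, which is why the standards should bracket the expected concentration.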
pKa of a Weak Acid. Introductory Chemistry 1120. 1. A results section for the pKa lab. See the example results section from the chemical equilibrium experiment and the details of a results section on page xi of your lab manual. This table shows the volumes from the titration, with the unknown acid in the buret:

| | Trial #1 | Trial #2 |
| --- | --- | --- |
| Initial buret reading | 20.5 mL | 21.8 mL |
| Final buret reading | 40.5 mL | 45.1 mL |
| Volume of acid added | 20.0 mL | 23.3 mL |

A second table shows the calculation of the pH at the half-equivalence point, the average pKa, and the average value for the unknown acid.
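Assuming the usual weak-acid analysis, the pKa is read at the half-equivalence point, where [HA] = [A−] and pH = pKa. The snippet below only computes the half-equivalence volumes from the table; the pH read from the titration curve at those volumes, averaged over the trials, would give the reported average pKa (if the handout locates the half-equivalence point differently, its instructions take precedence).

```python
# Half-equivalence volumes from the two trials in the table above.
v_eq = [20.0, 23.3]               # mL delivered at equivalence
v_half = [v / 2.0 for v in v_eq]  # mL at half-equivalence
print("half-equivalence volumes:", v_half)  # [10.0, 11.65]
# pH read from the curve at these volumes ~ pKa for each trial;
# averaging the trials gives the reported average pKa.
```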
Materials and Methods. In this experiment, there are three main parts and a variety of chemicals required to achieve the desired result. The first step was for the student to gather the necessary materials: vials of each type of transition metal ion (Fe3+, Co2+, Ni2+, Cu2+, and Zn2+), as well as solutions of 6 M HCl, 1.5 M HCl, 15 M NH4OH, 3 M NH4OH, 3 M HCl, dimethylglyoxime in ethanol (DMG), and the unknown sample. Materials needed included two pieces of chromatography paper, three 600 mL beakers, two spot plates, tongs, six capillary tubes, foil, and an ammonia fumigation chamber. This experiment had two main phases: identification of the transition metals and paper chromatography.
Pressure-Temperature Relationships in Gases. Abstract: The purpose of this experiment was to determine the pressure-temperature properties of a fixed volume of gas. This experiment developed our understanding of the relationship between the temperature of a gas and the pressure it exerts. Materials used in this experiment were a LabQuest, a Vernier gas pressure sensor, a temperature probe, ice, a hot plate, a 125 mL Erlenmeyer flask, a ring stand, and plastic tubing with two connectors. We measured pressure and temperature for ice (92.90 kPa and 273.7 K), room conditions (100.91 kPa and 294.9 K), boiling water (124.01 kPa and 368.1 K), and warm water (110.01 kPa and 322 K). Introduction: When a substance is in the gas state, its molecules are very spread out and are in constant motion.
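A quick consistency check on the four reported readings is to compute P/T for each; Gay-Lussac's law predicts a constant ratio at fixed volume, and the data come close.

```python
# P/T should be roughly constant at fixed volume (Gay-Lussac's law).
data = {"ice": (92.90, 273.7), "room": (100.91, 294.9),
        "warm": (110.01, 322.0), "boiling": (124.01, 368.1)}
for name, (P_kPa, T_K) in data.items():
    print(f"{name:8s} P/T = {P_kPa / T_K:.4f} kPa/K")
```

The ratios cluster near 0.34 kPa/K, supporting the direct proportionality between pressure and absolute temperature that the lab is designed to demonstrate.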
Trang Nguyen. Lab Report, Chemistry 162: Reaction Kinetics Lab. The purpose of this experiment: students aim to determine the rate law of the reaction by timing each trial of the different runs. By studying initial reaction rates at varied reactant concentrations, students will draw conclusions about the effect of concentration on the reaction rate. Next, the effect of a metal-ion catalyst on the reaction rate will be studied by adding dilute Cu(NO3)2 to the reaction. Finally, students will determine the effect of temperature on the reaction rate by carrying out the reaction at different temperatures, then calculate the activation energy. The procedure: Cabasco-Cebrian, T.; Loftus, C.; Schulz, J.; Villarba, M.; Wick, D. "Lab Manual for CHEM 162" Winter 2011, Department of Chemistry, Seattle Central Community College, pp.
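For the temperature part of such a lab, a two-temperature Arrhenius estimate is the usual shortcut. The rate constants below are invented placeholders; with measured k values at two bath temperatures, the same two lines give Ea.

```python
import numpy as np

# Arrhenius: ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
R = 8.314                # J/(mol*K)
T1, k1 = 298.15, 0.0045  # hypothetical room-temperature run
T2, k2 = 313.15, 0.0150  # hypothetical warm-bath run

Ea = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")
```

A catalyzed run (with Cu(NO3)2) repeated at the same two temperatures should give a visibly smaller Ea, which is one way to quantify the catalyst's effect.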
Acid-Base Titrations: a Quick Review of an Acid-Base Titration Calculation, by Anne Marie Helmenstine, Ph.D. An acid-base titration is a neutralization reaction performed in the lab in order to determine an unknown concentration of acid or base. The moles of acid will equal the moles of base at the equivalence point. Here's how to perform the calculation to find your unknown. For example, if you are titrating hydrochloric acid with sodium hydroxide: HCl + NaOH → NaCl + H2O. You can see from the equation that there is a 1:1 molar ratio between HCl and NaOH.
Purpose: The purpose of this lab is to prepare and purify a fuel, ethanol (C2H5OH). We will learn to do so by the processes of fermentation and distillation. Over the course of 2-3 weeks, we will collect and analyze the data for this lab. Hypothesis: If the alcohol ferments correctly, then 75% alcohol will be produced, with a volume of 150 mL out of the initial 200 mL solution. Procedure: Refer to Chemistry Lab Manual pp.
General Chemistry 1411: Laboratory Techniques and Measurements, May 16, 2013, Professor Frank Pishva. Objective/Purpose: The objective of this lab on laboratory techniques and measurements is for us, the students, to learn about unit systems and how they relate to measurements of mass, length, temperature, and volume. The purpose of this lab is also to help us learn how to combine units to determine density, perform conversions, and become familiar with common laboratory equipment and techniques. Hypothesis/Theory: This lab is pretty self-explanatory. The only theory that could possibly apply is in data table 9, understanding the dilution process: as the volume of dissolved-sugar solution transferred increased, the mass stayed approximately the same, due to the density of the solution decreasing as the sugar water became less concentrated.
Place the temperature probe through the hole in the cardboard lid and position the probe about 1 cm above the bottom of the calorimeter. 15. Obtain an exact mass of hot water (~50 mL). d. It should be approx. 45-60 °C above room temperature. 16. Record the temperature of the cold water and of the hot water immediately before mixing the two. 17. | https://www.antiessays.com/free-essays/Heating-Curves-And-Phase-Diagrams-536833.html
Unit 1 Introduction to Chemistry World of Chemistry Chapter 1 Chemistry: An Introduction.
TRANSCRIPT
- Slide 1
- Unit 1 Introduction to Chemistry World of Chemistry Chapter 1 Chemistry: An Introduction
- Slide 2
- Chapter Objectives: Identify the importance of studying chemistry. Identify chemists' role in the real world. Identify how to study chemistry. Define what chemistry is. Identify and demonstrate the steps of the scientific method.
- Slide 3
- Why is studying chemistry important? To help us: develop medicines, make fireworks, develop fertilizer, balance pH. Who uses chemistry? Chemists! Doctors, journalists, paint manufacturers, oceanographers, cosmetic companies. Where else, or who else, do you think uses chemistry?
- Slide 4
- What do chemists do? Chemists can focus on a range of areas, including materials science, biochemistry, astrochemistry, and soil chemistry. For more career options see: http://www.acs.org/content/acs/en/careers/
- Slide 5
- As a student, how do I benefit from this class? Answer questions you might already have about nature; develop problem-solving skills (recognize and analyze a problem, draw conclusions from data and evidence); continue to develop and apply algebraic skills; develop science literacy skills; do really interesting experiments and use cool lab equipment. How do I study for this class? Memorize vocab and definitions; take extra notes on ideas discussed in class; review notes frequently; learn the fundamentals and principal concepts; do practice problems!! Identify patterns; simplify, simplify, simplify.
- Slide 6
- What is chemistry? Chemistry: the study of the transformation of matter. Chemical vs. physical: how something reacts vs. how it physically looks. Chemical behavior is ruled by the invisible world of atoms and subatomic particles. Pencils are made of rubber, metal, wood, and graphite: what does this look like?
- Slide 7
- The microscopic world of pencils?
- Slide 8
- The Scientific Method. Scientific method: a series of steps used to analyze and solve a problem observed in nature. 1. Observation: identify a problem. 2. Hypothesis: propose an explanation (sometimes using an if/then statement). 3. Experiment: develop a strategy to test the hypothesis. 4. Analyze results. 5. Draw conclusions/report results.
- Slide 9
- What's necessary for an experiment? Independent variable: manipulated by the experimenter. Dependent variable: measured by the experimenter. Control variable: unchanged, constant. Not every experiment is required to have variables; they are usually needed for an if/then hypothesis.
- Slide 10
- Precision vs. Accuracy. Multiple trials are required to validate data (the more the better, as long as you're getting good data). Data should be both precise and accurate. Precision: consistency between trials. Accuracy: how close results are to theoretical or accepted values. http://extensionengine.com/accuracy-precision/#.U_pba7ywIbo
- Slide 11
- Analyzing data. To determine the relationship between variables, it is often good to represent data graphically: x-axis, independent variable; y-axis, dependent variable. The type of graph depends on the experiment (bar graph, line of best fit). Quantitative data: numerical measurements. Qualitative data: descriptions.
- Slide 12
- Good graph http://misterguch.brinkster.net/graph.html
- Slide 13
- Bad graph http://misterguch.brinkster.net/graph.html
- Slide 14
- Drawing conclusions. Conclusions may not support the hypothesis. A theory provides an explanation for observations (and can be disproven), e.g. the Big Bang theory for the origins of the universe. A law provides a general statement about observations; it does not explain why, but cannot be disproven, e.g. the law of gravity.
https://vdocuments.mx/unit-1-introduction-to-chemistry-world-of-chemistry-chapter-1-chemistry-an.html
The reaction calorimeter CPA202 (chemical process analyzer) determines thermal effects by measuring the true heat flow (THF), based on unique design principles. In particular, measurements can be performed without requiring any calibration procedures, and the obtained results are highly reliable and exhibit extremely stable baselines. The benefits in terms of experimental speed, data quality and long-term performance are obvious. Due to its broad dynamic range, the instrument can be employed for measurements ranging from small physical heat effects to energetic chemical reactions. The CPA allows experiments to be run seamlessly with reaction volumes between 10 and 180 mL. This volume flexibility simplifies the investigation of multi-step operations and is the basis for various applications employing precious or highly energetic compounds. Because calibrations are not required, changing conditions during a single experiment, such as changes in viscosity, liquid level or stirring speed, do not affect the results of the measurements.
Abstract
The age of plutonium is defined as the time since the last separation of the plutonium isotopes from their daughter nuclides. In this paper, a method for age determination based on analysis of 241Pu/241Am and 240Pu/236Pu using ICP-SFMS is described. Separation of Pu and Am was performed using a solid phase extraction procedure including UTEVA, TEVA, TRU and Ln-resins. The procedure provided separation factors adequate for this purpose. Age determinations were performed on two plutonium reference solutions from the Institute for Reference Materials and Measurements, IRMM081 (239Pu) and IRMM083 (240Pu), on sediment from the Marshall Islands (reference material IAEA367) and on soil from the Trinity test site (Trinitite). The measured ages based on the 241Am/241Pu ratio corresponded well with the time since the last parent-daughter separations of all the materials. The ages derived from the 236U/240Pu ratio were in agreement for the IRMM materials, but for IAEA367 the determination of 236U was interfered by tailing from 238U, and for Trinitite the determined age was biased due to formation of 236U in the detonation of the “Gadget”.
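The 241Am/241Pu chronometer can be inverted numerically from the in-growth equation. The half-lives below are taken from standard nuclide tables (241Pu ≈ 14.35 a, 241Am ≈ 432.6 a) and the measured ratio is a made-up example; note that this sketch works in atom ratios, so a measured activity ratio would first need conversion through the decay constants.

```python
import numpy as np
from scipy.optimize import brentq

l_pu = np.log(2) / 14.35   # 241Pu decay constant (1/a), assumed half-life
l_am = np.log(2) / 432.6   # 241Am decay constant (1/a), assumed half-life

def am_pu_atom_ratio(t):
    """241Am/241Pu atom ratio grown in t years after a clean separation."""
    return l_pu / (l_am - l_pu) * (1.0 - np.exp((l_pu - l_am) * t))

measured = 0.35            # example atom ratio, not a real measurement
age = brentq(lambda t: am_pu_atom_ratio(t) - measured, 1e-6, 200.0)
print(f"time since separation ≈ {age:.1f} a")
```

The same root-finding pattern applies to the 236U/240Pu pair, with the caveats the abstract raises about 238U tailing and bomb-produced 236U.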
Abstract
When chemical reactions are performed in semi-batch mode and the reaction rate is relatively low, the reactant added may accumulate. The resulting thermal accumulation is a major process-safety concern, as a fault in the cooling system may then lead to a runaway reaction. The feed rate in semi-batch processes is usually constant, but this paper discusses methods of optimizing the feed rate interactively, based on the measured heat flow and the calculated amount of compound that has actually reacted. The prerequisite for such procedures is to run the experiments in a reaction calorimeter in which the heat flows can be measured accurately and continuously. For this purpose a ChemiSens reaction calorimeter CPA202, which is calibration-free and gives stable, flat 'zero-line-type' baselines, was employed.
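The core heat-balance idea can be sketched as below. This is not the paper's actual control law: the names, numbers and the linear throttle are illustrative assumptions; the only grounded part is that moles reacted equal heat released divided by the reaction enthalpy.

```python
# Sketch: estimate reactant accumulation from the measured heat flow.
def accumulation(fed_mol, q_released_J, dH_J_per_mol):
    """Moles fed but not yet reacted, from the heat balance."""
    reacted = q_released_J / dH_J_per_mol
    return fed_mol - reacted

def next_feed_rate(base_rate, accum, accum_limit):
    """Throttle the feed linearly as accumulation approaches the limit."""
    factor = max(0.0, 1.0 - accum / accum_limit)
    return base_rate * factor

acc = accumulation(fed_mol=0.80, q_released_J=52_000, dH_J_per_mol=80_000)
print(f"accumulated: {acc:.2f} mol, feed rate -> "
      f"{next_feed_rate(10.0, acc, 0.30):.1f} mL/min")
```

An actual implementation would update this loop continuously from the calorimeter's heat-flow signal, which is why a calibration-free, stable-baseline instrument is the stated prerequisite.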
Abstract
A method has been developed for the determination of 63Ni in environmental samples. The samples are ashed and leached with aqua regia, whereafter hydroxides are precipitated with ammonia, leaving Ni in the aqueous phase. Nickel is extracted as the dimethylglyoxime complex with chloroform and back-extracted with HCl. Finally, Ni is electroplated onto a copper disc from an ammonium sulphate medium at high pH. The radiochemical yield is determined by atomic absorption measurements of stable Ni before and after electrodeposition. Nickel-63 on the discs is measured by beta spectrometry using solid-state ion-implanted detectors and by using a conventional windowless anti-coincidence-shielded GM gas-flow counter. Using a counting time of 3000 minutes, the minimum detection limits were 8 and 1 mBq, respectively. The method was applied to a series of macroalgae (Fucus vesiculosus) collected at different distances from a nuclear power plant. There was a correlation between the distance to the power plant and the 63Ni concentration in the algae. The relationships between 63Ni and 60Co, and between 63Ni and stable nickel, were also investigated. | https://akjournals.com/search?f_0=author&q_0=H.+Nilsson
(3) Solutions of aqueous sodium hydroxide and hydrochloric acid react to form water and aqueous sodium chloride. NaOH(s) → Na+(aq) + OH–(aq), ∆H1 = ? In this experiment, you will use a Styrofoam-cup calorimeter to measure the heat released by three reactions. One of the reactions is the same as the combination of the other two reactions. Therefore, according to Hess’s law, the heat of reaction of the one reaction should be equal to the sum of the heats of reaction for the other two.
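A minimal sketch of the calorimetry arithmetic this experiment relies on, assuming the solution has the heat capacity of water and that heat losses from the cup are negligible; the masses, temperature rises and mole amounts are made-up illustrations, not measured data:

```python
# Heat evolved in a Styrofoam-cup calorimeter: q = m * c * dT
def molar_enthalpy(mass_solution_g, delta_T_K, moles_reacted, c=4.18):
    q_J = mass_solution_g * c * delta_T_K      # heat absorbed by the solution
    return -(q_J / 1000.0) / moles_reacted     # kJ/mol; negative = exothermic

# Hess's law check: whichever reaction is the combination of the other two
# should have a heat of reaction equal to the sum of the other two.
dH_a = molar_enthalpy(100.0, 5.1, 0.050)       # illustrative run 1
dH_b = molar_enthalpy(100.0, 6.7, 0.050)       # illustrative run 2
print(dH_a, dH_b, dH_a + dH_b)                 # predicted dH for the combined reaction
```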
Objectives: The purpose of this lab is to observe the reaction of crystal violet and sodium hydroxide by looking at the relationship between the concentration of the crystal violet and the time elapsed. CV+ + OH– → CVOH. To quantitatively describe this reaction of crystal violet, the rate law is used. The rate law tells us that the rate is equal to a rate constant (k) multiplied by the concentration of crystal violet raised to the power of its reaction order ([CV+]^p) and the concentration of hydroxide raised to the power of its reaction order ([OH–]^q): Rate = k[CV+]^p[OH–]^q. To fully understand the rate law, the concentrations of the substances must be looked at first. Concentration is measured in molarity.
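A minimal sketch of how the order in crystal violet is typically extracted, assuming OH– is in large excess (pseudo-first-order conditions) and that absorbance tracks [CV+] via Beer's law; the time and absorbance readings are invented for illustration:

```python
import math

times  = [0, 30, 60, 90, 120]               # s, illustrative readings
absorb = [0.80, 0.62, 0.48, 0.37, 0.29]     # proportional to [CV+]

# If the reaction is first order in CV+ (p = 1), ln(absorbance) vs time is
# linear with slope -k_obs, where k_obs = k[OH-]^q.
x, y = times, [math.log(a) for a in absorb]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)

print(f"k_obs ≈ {-slope:.4f} s^-1")         # a good linear fit supports p = 1
```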
Lab: I Scream, We All Scream for …Colligative Properties!? Introduction: When a solute is added to water, the physical properties of freezing point and boiling point change. Water normally freezes at 0 °C and boils at 100 °C. As more solute is added, the freezing point drops (“freezing point depression”) and the boiling point increases (“boiling point elevation”). This property is useful in our lives.
We thought that a pH level of 3 would be the optimal pH level for the enzyme catecholase and its substrate catechol to react in. There was no prior knowledge of the effect of pH levels on catecholase and catechol. The results of the experiment did not confirm this hypothesis; instead, the hypothesis was rejected by the results of the experiment. The results showed that tube 1 had the slowest rate of reaction; in fact, it did not react at all. The results showed that tube 2 had the fastest rate of reaction; this tube was exposed to the neutral pH of 7.
The determined results were unsatisfactory compared with how the experiment was supposed to turn out.
Part B:
1. Ran compounds of vanillin and vanillyl alcohol on TLC plates.
2. Prepared 3 TLC plates, 10 cm × 3.3 cm.
3. Marked each plate with a line 1 cm from the bottom, then with intervals 1 cm apart on the line for compounds to be spotted.
5. Prepared 3 development chambers, each with a different solvent.
The salts will be dissolved in distilled water in small quantities until the reaction reaches … When ionic compounds dissolve in water, they either absorb energy from or release energy to the surroundings. If a chemical reaction absorbs heat from the surroundings, it is an endothermic reaction. If a solution releases heat to its surroundings, it is an exothermic reaction. The enthalpy of dissolution is the enthalpy change associated with the dissolution of a substance in a solvent at constant pressure. The change in enthalpy depends on the concentration of the salt solution, because different concentrations will produce different enthalpies.
4. Why are boiling chips useful when boiling liquid? Discussion: In this experiment my partner and I misunderstood how far to carry the significant figures. We took them two decimal places over when it really should have been only one decimal place over. Because we didn’t count the proper number of significant figures in the volume, we had too many when we went to calculate the density.
Repeat step nine.
11. Make a graph reflecting the timed data.
12. Use the changes in the freezing point, the recorded mass of the solute and solvent, and the freezing point constant for cyclohexane to determine the molecular weight of the unknown substances, both with and without the additions, by using the change in freezing point for the organic substance.
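A minimal sketch of the arithmetic in the last step, assuming a non-electrolyte solute (van 't Hoff factor i = 1) and a cyclohexane Kf of roughly 20 °C·kg/mol; the masses and temperature change are invented for illustration:

```python
# Molar mass from freezing point depression: dTf = Kf * molality
KF_CYCLOHEXANE = 20.0  # degC*kg/mol, approximate literature value

def molar_mass(mass_solute_g, mass_solvent_g, delta_Tf):
    molality = delta_Tf / KF_CYCLOHEXANE           # mol solute per kg solvent
    moles = molality * (mass_solvent_g / 1000.0)
    return mass_solute_g / moles                   # g/mol

# 0.50 g of unknown in 25.0 g cyclohexane lowering Tf by 2.0 degC -> 200 g/mol
print(molar_mass(0.50, 25.0, 2.0))
```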
On both, plot temperature on the y-axis and time on the x-axis. A. Record the freezing point of the pure water and the freezing point of the salt solution. B. How do these two freezing points compare?
Experiment 1 Freezing Point Depression of Electrolytes Colligative properties are properties of solutions that depend on the concentrations of the samples and, to a first approximation, do not depend on the chemical nature of the samples. A colligative property is the difference between a property of a solvent in a solution and the same property of the pure solvent: vapor pressure lowering, boiling point elevation, freezing point depression, and osmotic pressure. We are grateful for the freezing point depression of aqueous solutions of ethylene glycol or propylene glycol in the winter and are continually grateful to osmotic pressure for transport of water across membranes. Colligative properties have been used to determine the molecular weights of non-electrolytes. Colligative properties can be described reasonably well by a simple equation for solutions of non-electrolytes. | https://www.antiessays.com/free-essays/The-Depression-Of-Frozen-Point-328791.html |
Conducting science experiments with your children can be a fun way to spend time with them while enhancing their knowledge of basic chemical reactions. You do not need a fancy lab with specialised equipment. In fact, many chemical science experiments can be conducted in your own kitchen using common household items.
Precautions First
Certain precautions should be taken when conducting science experiments at home. It is a good idea to lay down sheets of newspaper on your table or countertop to protect from any spills. You may also want to have your children wear aprons to protect their clothing from spills. When working with certain liquids, wear gloves and safety glasses. Follow any precautions that may be listed on the labels of the liquids you are working with. Always follow directions exactly as they are written, as certain liquids may have reactions to other liquids and solids that could result in an injury.
Chemistry with Pennies
Find a couple of dirty pennies and explain to your children that the grime is from a chemical reaction that occurred when the copper pennies reacted with oxygen; the grime, or copper oxide, built up on the penny. Your children can recreate the reaction by sprinkling salt over the pennies, then pouring some vinegar on top of that. Have them rub the salt and vinegar liquid onto the pennies. Rinse only one of the pennies with water, then place both on a paper towel to dry. Have your children check on the pennies in about an hour. The rinsed penny will be shiny and new, whereas the other penny will have a new layer of blue-green copper oxide on it. The vinegar and salt speed up the chemical reaction that causes the growth of copper oxide.
Balloon Inflation
When liquid acetic acid, commonly referred to as vinegar, comes into contact with sodium bicarbonate, or baking soda, the resulting chemical reaction produces a gas which can be used to inflate a balloon. Using a funnel, have your child pour 1 teaspoon of baking soda into a deflated balloon. Then have him pour about 5 tablespoons of vinegar into an empty water bottle. Carefully have him place the balloon onto the top of the water bottle, making sure that the baking soda doesn't fall into the vinegar. Once the balloon is secure, have him lift the balloon, causing the baking soda to drop into the bottle. Be sure to hold the balloon onto the bottle top, as the chemical reaction causes the balloon to fill with air.
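A minimal sketch of the stoichiometry behind the inflating balloon, treating the CO2 as an ideal gas and assuming the vinegar is in excess so the baking soda is limiting; the teaspoon-to-gram conversion is a rough assumption:

```python
# NaHCO3 + CH3COOH -> CO2 + H2O + CH3COONa : 1 mol baking soda gives 1 mol CO2
M_NAHCO3 = 84.01    # g/mol
R = 0.08206         # L*atm/(mol*K)

def co2_volume_L(baking_soda_g, T_K=298.0, P_atm=1.0):
    n_co2 = baking_soda_g / M_NAHCO3    # assumes acetic acid is in excess
    return n_co2 * R * T_K / P_atm      # ideal gas law: V = nRT/P

print(co2_volume_L(4.6))   # ~1 level teaspoon (mass assumed) -> about 1.3 L of gas
```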
Cabbage Juice Chemistry
Chop a head of red cabbage, toss some into a blender that is half-filled with water and blend away. Strain the mixture, reserving the liquid, to teach your children about acids and bases. Pour some of the juice into three separate glass containers. Choose three household chemicals, such as vinegar, ammonia, detergent or salt, and add one to each glass of juice until a colour change is noted. The juice will turn green when mixed with something basic and will turn red when mixed with an acid. Use a pH level colour chart to get a more precise pH level count. | http://www.ehow.co.uk/info_8599853_liquid-chemical-experiments-kids.html |
Biology Important Question Bank 2019
A student conducted an experiment to investigate the rate of enzyme activity. Relate the understanding gained from performing this experiment to an application of biotechnology.
A student conducted an experiment to model the process of accommodation. Describe the experiment and how the first-hand data collected could contribute to an understanding of how accommodation works in human vision.
Analyse the impact of modern technologies in the fields of modern medicine and genetic engineering on human evolution.
Analyse the impact of the Human Genome Project on the development of technologies which benefit society.
Analyse the impact of the development of the electron microscope on the understanding of chloroplast structure and function.
Antidiuretic hormone (ADH) is a protein produced by cells in the hypothalamus. The AVP gene codes for the production of ADH. Outline the steps to show how a mutation in the AVP gene could result in changes in the ADH protein.
Assess how our understanding of the path of a sound wave through the ear has led to the development of technologies that assist hearing.
Biological theories are always provisional in nature and change in the light of new evidence. Give reasons.
Compare mechanisms in the human body for detection and perception of a range of frequencies in visual and auditory communication.
Construct a dichotomous key to classify primates into four groups: prosimians, new world monkeys, old world monkeys, apes.
Construct a flow chart to show how an animal with a diploid number of 32 chromosomes can be cloned and how the clone can be verified. Include reference to chromosome number in each step.
Construct a flow chart to summarise the main steps and products of the light-independent reactions of photosynthesis.
Contrast the distribution and function of cone cells and rod cells in the human eye.
Describe an experiment that could be used to test van Helmont’s observation that soil is not primarily responsible for a plant’s change in mass.
Describe how the lens of the eye changes its shape in order to focus on near and far objects.
Explain ONE benefit and ONE limitation of suppressing the immune system in organ transplant patients.
Explain an advantage and a disadvantage of EITHER the product OR process of a specific animal biotechnology.
Explain how TWO different methods used to treat drinking water reduce the risk of infection.
Explain how both genotype and phenotype influence the inheritance of genes and natural selection in this population.
Explain how polymorphisms have enabled humans to survive in their environment. Use examples in your answer.
Explain how selective breeding has resulted in a series of changes in an agricultural species.
Explain how the energy liberated in the light-dependent reaction is stored and used in cells.
Explain how the human larynx produces sounds of different pitch.
Explain the relationship between advances in scientific knowledge of cell chemistry and modern uses of biotechnology.
Explain the role of isolation in the process of evolution.
Explain the structure and behavior of chromosomes in the first division of meiosis. Include detailed reference to the model.
Explain, using TWO examples, the evolutionary significance of polymorphism.
Models of human evolution continue to change as a result of work done by individual scientists and advances in technology. Give reasons.
What is the difference between polymorphism and clinal gradation?
With the breakdown of proteins, animals produce ammonia, a nitrogenous waste product that must be removed. Direct removal of ammonia requires the excretion of large amounts of water. Explain how both terrestrial mammals and insects conserve water while excreting nitrogenous wastes.
‘Genes influence proteins and proteins influence genes.’ Evaluate this statement with reference to the structure and function of genes and proteins.
‘Over the past 400 years, the development of our knowledge of the chemical transformations occurring both inside and outside plants has led to our current understanding of photosynthesis.’ Evaluate this statement with reference to the experiments of TWO named scientists. | https://www.omtexclasses.com/2019/02/biology-important-question-bank-2019.html |
Area of Study 1: How can knowledge of elements explain the properties of matter?
Students explore the nature of chemical elements, their atomic structure and their place in the Periodic Table. The periodic table is studied as a unifying framework allowing the patterns and trends of elements and their reactivity to be explored. The nature of metals and their properties is investigated. Using their understanding of electronic structure, students study how ionic compounds are formed and explore their structure and properties. Students are introduced to fundamental quantitative concepts of chemistry including the mole concept, relative atomic mass, percentage abundance and empirical formula.
Area of Study 2: How can the versatility of non-metals be explained?
Students explore a wide range of substances and materials made from non-metals including molecular substances, covalent lattices, carbon nanomaterials, organic compounds and polymers. They investigate the relationship between electronic configurations and the resultant structures and properties of a range of molecular substances and covalent lattices. Students are introduced to a variety of organic compounds, grouping them into families. They investigate useful materials and relate their properties and uses to their structures. They apply quantitative concepts to molecular compounds.
Area of Study 3: Research Investigation
Students apply and extend their knowledge and skills developed in Area of Study 1 and/or 2 to investigate a selected question related to materials. They apply critical and creative thinking skills, science inquiry and communication skills to conduct and present the findings of their investigation.
ASSESSMENT
1. Coursework (investigations, class tests and practical work) (50%)
3. Examination (50%)
Area of Study 1: How do substances interact with water?
Students focus on the properties of water and the reactions that take place in water including acid-base, precipitation and redox reactions. They relate the properties of water to the water molecule’s structure, polarity and bonding. They explore the significance of water’s high specific heat capacity and latent heat of vaporisation for living things and water supplies. Students investigate issues associated with the solubility of substances in water. They compare acids with bases and learn to distinguish between acid strength and acid concentration. The pH scale is examined and students calculate the pH of strong acids and bases of known concentration.
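A minimal sketch of the pH calculations referred to here, valid for fully dissociated monoprotic acids and bases at 25 °C and at concentrations well above 10^-7 M; the function names and example concentrations are illustrative:

```python
import math

def ph_strong_acid(c_mol_L):
    """pH = -log10[H+] for a fully dissociated monoprotic acid."""
    return -math.log10(c_mol_L)

def ph_strong_base(c_mol_L, Kw=1.0e-14):
    """pH = 14 - pOH at 25 degC, from Kw = [H+][OH-]."""
    return -math.log10(Kw) + math.log10(c_mol_L)

print(ph_strong_acid(0.010))   # 0.010 M HCl  -> 2.0
print(ph_strong_base(0.010))   # 0.010 M NaOH -> 12.0
```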
Area of Study 2: How are substances in water measured and analysed?
Students focus on the use of analytical techniques to measure the solubility and concentrations of solutes in water, and to analyse water samples for various solutes including chemical contaminants. Students explore the relationship between solubility and temperature and learn to predict when a solute will dissolve or crystallise out of solution. They will apply the principles of stoichiometry to gravimetric and volumetric analyses of aqueous solutions and water samples. They will be introduced to a range of analytical techniques such as colorimetry, spectroscopy and chromatography.
Area of Study 3: Practical Investigation
Students use knowledge and skills developed in Area of Study 1 and/or Area of Study 2 to conduct an investigation through laboratory work and/or fieldwork. Students develop their own question and then plan and carry out an investigation in response to their question.
ASSESSMENT
1. Coursework (class tests and practical work) (40%)
2. Practical Investigation (10%)
3. Examination (50%)
Area of Study 1: What are the options for energy production?
Students focus on analysing and comparing a range of energy resources and technologies, including fossil fuels, biofuels, galvanic cells and fuel cells, with reference to the energy transformations and chemical reactions involved, energy efficiencies, environmental impacts and potential applications. Students explore theoretical aspects of, and also design and conduct practical investigations on, the use of the specific heat capacity of water and thermochemical equations to determine the enthalpy changes and quantities of reactants and products involved in the combustion reactions of a range of renewable and non-renewable fuels. Students explore theoretical aspects of, and also conduct practical investigations involving, redox reactions, including the design, construction and testing of galvanic cells, and account for differences between experimental findings and predictions made by using the electrochemical series. They compare the design features, operating principles and uses of galvanic cells and fuel cells, and summarise cell processes by writing balanced equations for half and overall cell processes.
Area of Study 2: How can the yield of a chemical product be optimised?
Students investigate how the rate of a reaction can be controlled so that it occurs at the optimum rate while avoiding unwanted side reactions and by-products. They explain reactions with reference to the collision theory including reference to Maxwell-Boltzmann distribution curves. The progression of exothermic and endothermic reactions, including the use of a catalyst, is represented using energy profile diagrams. Students explore homogeneous equilibrium systems and apply the equilibrium law to calculate equilibrium constants and concentrations of reactants and products. They investigate Le Chatelier’s principle and the effect of different changes on an equilibrium system and make predictions about the optimum conditions for the production of chemicals, taking into account rate and yield considerations. Students represent the establishment of equilibrium and the effect of changes to an equilibrium system using concentration-time graphs. Students investigate a range of electrolytic cells with reference to their basic design features and purpose, their operating principles and the energy transformations that occur. They examine the discharging and recharging processes in rechargeable cells, and apply Faraday’s laws to calculate quantities in electrochemistry and to determine cell efficiencies.
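A minimal sketch of the Faraday's-law calculation mentioned at the end of this area of study, assuming 100% current efficiency; the metal, current and time are chosen purely for illustration:

```python
# Faraday's laws: moles of product = Q / (z * F), with charge Q = I * t
F = 96485.0   # C/mol

def mass_deposited(current_A, time_s, molar_mass_g_mol, z):
    """Mass plated at the cathode, assuming all charge drives the deposition."""
    Q = current_A * time_s
    return (Q / (z * F)) * molar_mass_g_mol

# Copper (Cu2+ + 2e- -> Cu) at 1.5 A for one hour -> about 1.8 g
print(mass_deposited(1.5, 3600, 63.55, 2))
```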
Area of Study 1: How can the diversity of carbon compounds be explained and categorised?
Students examine the structural features of members of several homologous series of compounds, including some of the simpler structural isomers, and learn how they are represented and named. Students investigate trends in the physical and chemical properties of various organic families of compounds. They study typical reactions of organic families and some of their reaction pathways, and write balanced chemical equations for organic syntheses. Students learn to deduce or confirm the structure and identity of organic compounds by interpreting data from mass spectrometry, infrared spectroscopy and proton and carbon-13 nuclear magnetic resonance spectroscopy.
Area of Study 2: What is the chemistry of food?
Students explore the importance of food from a chemical perspective. Students study the major components of food with reference to their structures, properties and functions. They examine the hydrolysis reactions in which foods are broken down, the condensation reactions in which new biomolecules are formed and the role of enzymes, assisted by coenzymes, in the metabolism of food. Students study the role of glucose in cellular respiration and investigate the principles of calorimetry and its application in determining enthalpy changes for reactions in solution. They explore applications of food chemistry by considering the differences in structures of natural and artificial sweeteners, the chemical significance of the glycaemic index of foods, the rancidity of fats and oils, and the use of the term ‘essential’ to describe some amino acids and fatty acids in the diet.
Area of Study 3: Practical Investigation
A student-designed or adapted practical investigation related to energy and/or food is undertaken in either Unit 3 or Unit 4, or across both Units 3 and 4. The investigation relates to knowledge and skills developed across Unit 3 and/or Unit 4.
The investigation requires the student to identify an aim, develop a question, formulate a hypothesis and plan a course of action to answer the question that complies with safety and ethical requirements. The student then undertakes an experiment that involves the collection of primary qualitative and/or quantitative data, analyses and evaluates the data, identifies limitations of data and methods, links experimental results to science ideas, reaches a conclusion in response to the question and suggests further investigations which may be undertaken. Findings are communicated in a scientific poster format. A practical logbook must be maintained by the student for record, authentication and assessment purposes. | https://www.ggs.vic.edu.au/School/Academic/Curriculum-Guide-2022/Year-11-and-12-VCE/Chemistry |
IB chemistry internal assessment lab format: the following titles and subtitles should be used for your lab report and given in this order within your lab report; use the following sheet as a checklist when writing lab reports. Back when I did it, my teacher said that the chemistry IA definitely needs to involve two quantitative data points, as that allows for the best interpolation, extrapolation, and general analysis; topics like pH and UV beads, which only give qualitative data, are not good for chemistry IAs. Understanding of the electrochemical properties of graphene, especially the electron transfer kinetics of a redox reaction between the graphene surface and a molecule, in comparison to graphite or other carbon-based materials, is essential for its potential in energy conversion and storage to be realized.
In chemical kinetics, the distance traveled is the change in the concentration of one of the components of the reaction; the rate of a reaction is therefore the change in the concentration of one of the reactants (x) that occurs during a given period of time t. Read our complete IB chemistry syllabus here if you are wondering what exactly you have to learn for IB chemistry HL and SL. Experimental design and procedure usually lead to systematic errors in measurement, which cause a deviation in a particular direction. (Internal assessment, IA: 10 hours for SL and HL.) Laboratory manual, Chemistry 121, fifth edition, 2007, Dr. Steven Fawl. Exam #1 - Thursday, February 25th - kinetics; Exam #2 - Thursday, March 25th - equilibrium; Exam #3 - Thursday, May 13th - thermodynamics; if there are any discrepancies, inform the IA. The IBC web is a university entrance-level chemistry resource with notes on standard level and higher level topics, worked example exam questions, multiple choice test questions, animations, live help, etc.
Ib chemistry and ib biology on enzyme and kinetics experiment for ia ib chemistry and ib biology on enzyme and kinetics experiment for ia how to write a level-7 bio design ia in 2 hours. Ib correlations for chemistry the international baccalaureate program has a complete set of objectives for an ib chemistry class at the standard level (sl) and the higher level (hl) as outlined in the chemistry syllabus in the ib diploma programme guide, published for the first test in 2009, these objectives are arranged according to series of. They are able to design, carry-out, record, and analyze the results of chemical experiments laboratory in physical chemistry: 2-3: or thermochemistry, acid-base theory, oxidation-reduction reactions, basic chemical kinetics, and chemical equilibrium only one of chem 163, 167, 177, or 201 may count toward graduation.
Recommended for physical and biological science majors, chemical engineering majors, and all others intending to take 300-level chemistry courses: principles and quantitative relationships, stoichiometry, chemical equilibrium, acid-base chemistry, thermochemistry, rates and mechanisms of reactions, changes of state, solution behavior, atomic … A list of AP chemistry labs: LO 2.10: the student can design and/or interpret the results of a separation experiment (filtration, paper chromatography, column chromatography, or distillation) in terms of the relative strength of interaction among and between the components [SP 4.2, 5.2, 6.4; EK 2.B.1]. Chapter 12: chemical kinetics, APSI.
This book gives you all you need to know about the ia of the chemistry and it gives you examples to try and practice by mahmed_864319 in types school work this book gives you all you need to know about the ia of the chemistry and it gives you examples to try and practice. This chemistry ia is still relevant to the new 2016 syllabus for group 4 sciences due to the many similarities between the two marking criteria the “exploration” criterion is highly similar to the previous “design” criterion (topic, background information, method, and safety issues. Simulation models should be employed to design the molding process and optimize the process parameters characterization of the cure kinetics of the resin is one of the prerequisites for. Voltaic cells design ia ib chem ia bleach investigation 06 en ib diploma chemistry hl textbookpdf pearson baccalaureate ib chemistry higher level- p213 2 pearson baccalaureate ib chemistry higher level- p214 documents similar to ib chemistry ia: kinetics ib chemistry revision pdf uploaded by srushti ib chemistry ia.
Ib chemistry may not be quite as easy as this penguin makes it seems so to help you out, i have compiled the best free online ib chemistry study guides and notes into one helpful article. 10 ( 11 h) & 20 ( 12 h) : organic chemistry possible ways of teaching organic chemistry organic chemistry is one of the bigger topics in terms of time and content it takes up 11 of the 95 hours of core time and 23 of the 155 hours allotted to the core and ahl for higher level students. Executive summary the purpose of an executive summary is to summarize a reportexecutive summaries are written for executives who most likely do not have time to read the complete document therefore, the executive summary must cover the major points and be detailed enough to mirror the content yet concise enough for an executive to understand the substance without reading the entire report. Applying ib chemistry to a diet pill blog one of the reasons why life expectancy is starting to fall in certain parts of the world is the increase in obesitythe number of overweight and obese people worldwide has increased from 857 million in 1980 to 21 billion in 2013.
Full ib chemistry internal assessment criteria sheet: personal engagement (2/24) the evidence of personal engagement with the exploration is clear with significant independent thinking, initiative or creativity. This packet includes review questions and answers about kinetics for the ap exam for chemistry this is my attempt during class, my answers are circled and then ones that i got wrong are highlighted with the correct answers there are a few personal notes written, but the document is still legible questions include: 1 the rate of a chemical reaction is related to 2. C graham brittain page 2 of 12 11/14/2010 chemical reactions can only occur when the reactant molecules or ions collide with one another this idea is referred to as the collision theory of chemical kinetics: • in order for reactant “particles” (atoms, ions, or molecules) to react, they must. Wednesday, dec 6th topic 6 & 7 lab summativebegin ia brainstorming use links provided under the ia section of chemistry homepage and begin process of coming up with a topic.
2018. | http://wlessaynnxa.alisher.info/81825345-chemistry-design-ia-kinetics-2.html |
Language of instruction:
NL
Assumed knowledge on:
Chemistry at Dutch VWO level
Contents:
Note: This course cannot be combined in an individual programme with PCC-12803 General Chemistry for the Life Sciences or with PCC-12303 General Chemistry 1.
Many disciplines in the fields of life sciences, environmental sciences and technology build on concepts from physics and chemistry. The course General Chemistry for Agrotechnology intends to make you familiar with these general concepts. Among the concepts are energy exchange, Gibbs free energy as driving force, molecular interactions, various types of equilibrium, buffer systems, ATP-coupling and reaction kinetics. A special focus will be on these concepts in biological (living) systems.
Concepts are worked out both theoretically and experimentally in tutorials and practical classes within themes relevant for agrotechnology.
Learning outcomes:
After successful completion of this course students are expected to be able to:
- explain and identify various types of (chemical) bonds in (bio)chemical molecules;
- identify and examine driving forces (total entropy change, reaction Gibbs energy) behind (bio)chemical reactions and apply these to topics like the direction of spontaneity and equilibrium constants of chemical reactions;
- apply principles of reaction kinetics (reaction order, reaction rate, Arrhenius equation) to (bio)chemical reactions (a short Arrhenius sketch follows this list);
- identify properties of aqueous solutions of acids and bases and mixtures of these and apply these to topics like the pH of acid, basic or buffer solutions;
- execute experiments in the domain of general and physical chemistry following a given protocol and analyze the outcomes.
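A minimal sketch of the Arrhenius relationship named in the kinetics outcome above; the activation energy and pre-exponential factor are invented purely for illustration:

```python
import math

R = 8.314   # J/(mol*K)

def rate_constant(A, Ea_J_mol, T_K):
    """Arrhenius equation: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea_J_mol / (R * T_K))

k_298 = rate_constant(1e10, 50_000, 298.0)   # assumed Ea = 50 kJ/mol
k_308 = rate_constant(1e10, 50_000, 308.0)
print(k_308 / k_298)   # ~1.9: a 10 K rise roughly doubles this rate
```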
Activities:
- participation in lectures;
- participation in tutorials;
- participation in practicals;
- independent study.
Examination:
Written exam with some multiple-choice questions. All practical exercises need to be completed successfully (go/no go). Partial results are valid for six years.
Literature:
Crowe, J.; Bradshaw, T. (2014). Chemistry for the biosciences: the essential concepts. Third edition. Oxford University Press. 740p. ISBN:978-0-19-966288-3.
Practical manual and workbook. All are available at the WUR-shop. | https://ssc.wur.nl/Handbook/Course/PCC-14303 |
Catalysts that allow common synthetic reactions to take place in water instead of the usual organic solvents have long been sought after because of their potential to offset both the cost and the environmental impact of research and industry. The majority of catalysts used in modern synthetic chemistry fall victim to unfavorable physical and chemical interactions when exposed to water, which render them ineffective. Furthermore, the few compounds that are capable of catalyzing useful reactions in water often rely on chemical species whose actively catalytic forms are transient and poorly characterized, are inactivated by the presence of certain common functional groups, or show poor selectivity for their target reaction.
Our recent work has shown that two gallium(III)-based complexes, [Ga(phen)2Cl2]Cl (A) and [Ga(bispicen)Cl2]Cl (B), are capable of catalyzing the epoxidation of alkenes by peracetic acid in both water and acetonitrile, showing exceptional selectivity for the epoxide in both environments with no observed side products. Further investigation of aqueous activity in buffered solutions showed that both catalysts are equally effective under highly acidic and basic conditions, but nearly completely inactive in the near-neutral pH range. Functional group tolerance experiments conducted in acetonitrile suggest that alcohols, ketones, and organochlorides are not affected by the presence of the catalysts, but that amines and aldehydes might participate in unwanted side reactions.
Tuning reaction conditions in order to maximize product formation as simply as possible is a key aspect of catalysis research. As such, possible future directions for this research include searching for alternate terminal oxidants that allow the chemistry to be performed at neutral pH and a more thorough exploration of these catalysts’ interactions with aldehydes and amines.
Figure 1. Yields of cyclohexene oxide generated in aqueous solutions at various constant pH values (A) Reaction conditions: [Ga(phen)2Cl2]Cl = 0.75 mM, [alkene] = 75.0 mM, [peracetic acid] = 151 mM. (B) [Ga(bispicen)Cl2]Cl = 0.85 mM, [alkene] = 85 mM, [peracetic acid] = 169 mM. Both series of experiments were performed under air at 25 °C.
Statement of Research Advisor
Fraser has contributed to the discovery of small molecule catalysts for hydrocarbon oxidation and the development of redox-responsive contrast agents for magnetic resonance imaging. He performed all of the catalytic reactions and data analysis and synthesized these two compounds when necessary. | http://our.auburn.edu/aujus/expanding-the-scope-of-gallium-catalyzed-olefin-epoxidation/ |
Monopolistic competition refers to a situation where the product to be sold is differentiated and there are many sellers operating to sell it. The competition is not perfect and is between firms making similar products (close, but not perfect, substitutes).
Characteristics
- There are many sellers and no seller is big enough to influence the market price.
- Each seller has an independent price-output policy.
- Product is heterogeneous due to differentiation. Product of each firm is a close substitute of the product of other firm.
- Patent rights, advertising, quality differentiation, etc. are used as the main instruments of product differentiation.
- There are no restrictions on the entry and exit of firms.
- Each individual firm enjoys some monopoly power due to product differentiation and hence, the demand curve is more elastic than that of the monopoly firm. | https://mbanotesworld.com/six-characterisitics-of-a-monopolistic-competition/ |
The basic Bertrand model of competition involves firms setting the price for homogeneous products. It is assumed that each firm can supply as much product as is demanded at any given price. Under these assumptions, it can be shown that the Bertrand (Nash) equilibrium is that price is equal to marginal cost (i.e. the same outcome as under perfect competition). The logic for this result is simple. Suppose there are just two firms, A and B. Suppose Firm A prices above marginal cost. Then Firm B’s profit maximising price is to price just below Firm A but above marginal cost and capture all the market demand. But if Firm B does this, then Firm A can do better by pricing just below Firm B. This process of undercutting continues until price is at marginal cost. It would not make sense for either firm to price below marginal cost. The Bertrand (Nash) equilibrium is thus that price equals marginal cost.
This leads to the so-called Bertrand paradox: two firms are enough to generate the same outcome as under perfect competition. The “paradox” is that we normally assume that a duopoly will not be competitive and will price above marginal cost.
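A minimal sketch of that undercutting logic, assuming a smallest feasible price tick and that the cheaper firm captures the whole market; the marginal cost and starting prices are arbitrary:

```python
MC = 10.0     # marginal cost, assumed
TICK = 0.01   # smallest feasible price cut

def best_response(rival_price):
    """Undercut the rival when profitable; never price below cost."""
    return max(MC, rival_price - TICK)

p_a, p_b = 20.0, 19.0          # arbitrary starting prices above cost
while True:
    new_a = best_response(p_b)
    new_b = best_response(new_a)
    if (new_a, new_b) == (p_a, p_b):
        break                  # neither firm wants to move: Nash equilibrium
    p_a, p_b = new_a, new_b

print(p_a, p_b)                # both end at marginal cost, as the text argues
```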
There are three ways to solve this apparent paradox. These involve relaxing one of the following assumptions:
– Products are homogeneous
– Each firm can supply the whole market
– Firms act in a short-run, non-cooperative fashion.
The most common way to change the model in order to get more intuitively plausible results is to assume that the firms sell differentiated products. Now firms do not lose all of their sales even if their competitors price slightly below them. This unravels the logic that leads to price equal to marginal cost in the standard (homogeneous products) Bertrand model. To see this, suppose that both Firm A and B did set their prices at marginal cost. Firm A would have an incentive to raise its price very slightly above marginal cost as it would not lose all its sales: those customers who prefer Firm A’s product to Firm B’s product would continue to buy it. This would increase Firm A’s profits as it would make a positive profit on those sales that it continues to make, rather than the zero profit when price is equal to marginal cost. But Firm B faces the same incentives. The result is that both will raise their prices above marginal cost until the point at which a further price rise is not profitable given the price set by the other firm. This is a Nash equilibrium. The extent to which prices are above costs is driven by the extent of differentiation between products i.e. prices increase more as products become more differentiated. To the extent that more firms in the market implies less differentiation between products, this means that prices will tend to be lower as more firms enter the market. This accords with our intuition for how prices should fall as the number of suppliers increases.
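As a minimal sketch of this logic, take linear demand with differentiated products, q_i = a − b·p_i + d·p_j with d < b so the own-price effect dominates; all parameter values below are invented, and iterating best responses converges to the Nash equilibrium:

```python
a, b, d, c = 100.0, 2.0, 1.0, 10.0    # demand intercept/slopes and marginal cost

def best_response(p_rival):
    # maximise (p - c) * (a - b*p + d*p_rival) over p
    return (a + b * c + d * p_rival) / (2 * b)

p1 = p2 = c
for _ in range(100):   # the best-response map is a contraction, so this converges
    p1, p2 = best_response(p2), best_response(p1)

print(p1, p2)          # symmetric equilibrium (a + b*c)/(2b - d) = 40 > cost of 10
```

The sketch only shows that mutual undercutting stops above marginal cost once products are differentiated; comparative statics in the degree of differentiation are best done with demand derived from consumer preferences.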
The Bertrand differentiated products model is a workhorse model for merger analysis. When two firms merge in a Bertrand differentiated products setting, the competition that previously existed between them is lost. The merged entity has an incentive to raise prices because sales that were previously lost by one of the firms but captured by the other firm are now recaptured within the merged entity. The stronger the competition between the two firms was pre-merger, the more the merged entity has an incentive to raise prices. A price increase by the merged entity incentivises the other firms in the market to also raise their prices, although these “second order effects” are typically much smaller than the merging parties’ price rise. This price rise is driven by the logic of the Nash equilibrium concept: if all the firms were previously doing the best they could given what all the other firms were doing, this will no longer be true if the merged entity raises its prices.
The Bertrand differentiated products model is applicable to markets where firms set prices for differentiated products and then customers choose whether or not to buy. It is therefore particularly relevant for retail markets. The UK Competition and Markets Authority review of the proposed Sainsbury’s/Asda merger in 2019 is a good example of this. Standard empirical measures of the effect of mergers on pricing incentives, such as upward pricing pressure indices and indicative price rise indices, are typically based on the Bertrand differentiated products model.
The model is not relevant where prices are set separately for each buyer (i.e. markets involving substantial price discrimination) or where firms choose quantities rather than prices (i.e. Cournot competition).
Another way to escape the Bertrand paradox is to remove the assumption that each firm can supply the whole market. Where there are several firms in a market, this is unlikely to be a plausible assumption, at least not in the short run. The Bertrand-Edgeworth model assumes that no firm can supply the whole market. This allows prices to rise above marginal cost as the logic of the standard model (undercutting to win the whole market) no longer holds. This model illustrates another important aspect of the Nash equilibrium concept: it relates to strategies, rather than just to prices. In the Bertrand-Edgeworth model firms do not set a single price, but instead set different prices for each period based on a mixed strategy equilibrium (e.g. set a price of 10 with probability 0.2; a price of 11 with probability 0.18; etc.). This is because there is no single Nash equilibrium price in the Bertrand-Edgeworth model, but there is a Nash equilibrium in mixed strategies. The European Commission’s decision on the Inoxum/Outokumpu merger is a good example of the application of the Bertrand-Edgeworth model to a case.
The third way to escape the Bertrand paradox is to drop the assumption that firms think only about the outcome of period. The Bertrand paradox arises because the model is a one-period model. But suppose firms compete over many periods. Is it plausible that in each period they will set price equal to marginal cost and earn zero profits? Maybe. But where there are only a few firms it is also plausible that they will “soften” their pricing in the hope that other firms will as well.
Economists have traditionally assumed that collusion between firms would tend to be unstable as collusion was not a Nash equilibrium. The argument was that each firm has an incentive to cheat on the collusion (i.e. lower prices to win more demand) because, given what the other firms were doing, cheating is profit maximising. However, this argument is hard to square with the observed facts that cartels do exist and do manage to raise prices significantly. The problem with the standard argument is that it is very short termist. When firms compete over time, they aim to maximise profits over time, not just in one period. Fudenberg and Maskin (1986) showed that any collusive output can be sustained as a Nash equilibrium of a pricing game when discount rates are high enough and when punishment mechanisms are credible. | https://www.concurrences.com/en/dictionary/bertrand-nash-equilibrium |
This paper provides a framework to understand how market size affects firms' investments in product differentiation in a model of monopolistic competition. The theory proposes that consumers' love of variety makes them more sensitive to product differentiation efforts by firms, which leads to more differentiated products in larger markets. The framework also predicts an inverted U-shaped effect of trade liberalization on product differentiation, with trade liberalization leading to more differentiated products when starting from autarky but then leading to less differentiated products as the countries approach free trade.
Shon M. Ferguson, 2015. "Endogenous Product Differentiation, Market Size and Prices," Review of International Economics, Wiley Blackwell, vol. 23(1), pages 45-61, February.
Ferguson, Shon, 2011. "Endogenous Product Differentiation, Market Size and Prices," Working Paper Series 878, Research Institute of Industrial Economics.
Ferguson, Shon, 2010. "Endogenous Product Differentiation, Market Size and Prices," Research Papers in Economics 2010:26, Stockholm University, Department of Economics.
| https://ideas.repec.org/a/bla/reviec/v23y2015i1p45-61.html
- What is a cartel? Are cartels legal in the United States?
- When does a monopoly exist? What is the pricing rule for a monopoly?
- Illustrate the dead weight loss of a monopoly.
- What is characterized by a natural monopoly?
- Why is breaking up a natural monopoly a bad idea?
- When the government regulates monopolies, where is price usually set?
- Illustrate X-inefficiency. Why is this concept important?
- What does dynamic efficiency measure? What is the implication of dynamic efficiency for monopolies?
- What are the defining characteristics of monopolistic competition?
- Explain the three key differences between oligopoly and monopolistic competition.
- Define the four-firm concentration ratio. Why are concentration ratios so important in studying market structure?
- Discuss the concepts of strategic interaction and mutual interdependence.
- What can we say about the difference between monopolistic competition and perfect competition?
- Discuss the relationship between product differentiation and nonprice competition.
- Identify four sources of product differentiation.
- From the economist’s point of view, product differentiation in general and advertising in particular have what two goals?
- For what two reasons is monopolistic competition sometimes called noncollusive oligopoly?
- What will profits be for monopolistic competition in the short and long run?
- Illustrate the short and long run implications of monopolistic competition for market performance.
- Some economists argue that monopolistic competition leads to both excessive advertising and needless brand proliferation. Why? | https://blog.wuyuansheng.com/2018/06/28/microeconomics-6-monopoly-and-monopolistic-competition/
Q 1 Define the term Business Cycle and also explain the phases of business or trade cycle in brief?
Ans: The business cycle is the periodic but irregular up-and-down movement in economic activity, measured by fluctuations in real GDP and other macroeconomic variables. Diagram of Business Cycle (or Trade Cycle):
The business cycle starts from a trough (lower point) and passes through a recovery phase followed by a period of expansion (upper turning point) and prosperity. After the peak point is reached there is a declining phase of recession followed by a depression. Again the business cycle continues similarly with ups and downs.
Explanation of Four Phases of Business Cycle
1. Prosperity Phase: Expansion or Boom or Upswing of the economy. When there is an expansion of output, income, employment, prices and profits, there is also a rise in the standard of living. This period is termed the Prosperity Phase. The features of prosperity are: high level of output and trade, high level of effective demand, high level of income and employment, rising interest rates, inflation, large expansion of bank credit, and overall business optimism.
2. Recession Phase: from prosperity to recession (upper turning point).
The turning point from prosperity to depression is termed the Recession Phase.
During a recession period, the economic activities slow down. When demand starts falling, the overproduction and future investment plans are also given up. There is a steady decline in the output, income, employment, prices and profits. The businessmen lose confidence and become pessimistic (Negative). It reduces investment. The banks and the people try to get greater liquidity, so credit also contracts. Expansion of business stops, stock market falls. Orders are cancelled and people start losing their jobs. The increase in unemployment causes a sharp decline in income and aggregate demand. Generally, recession lasts for a short period.
3. Depression Phase: Contraction or Downswing of the economy. When there is a continuous decrease in output, income, employment, prices and profits, there is a fall in the standard of living and depression sets in.
The features of depression are: fall in volume of output and trade, fall in income and rise in unemployment, decline in consumption and demand, fall in interest rates, deflation, contraction of bank credit, and overall business pessimism. In depression, there is under-utilization of resources and a fall in GNP (Gross National Product). Aggregate economic activity is at its lowest, causing a decline in prices and profits until the economy reaches its Trough (low point).
4. Recovery Phase : from depression to prosperity (lower turning Point).
The turning point from depression to expansion is termed the Recovery or Revival Phase. During the period of revival or recovery, there are expansions and rises in economic activities. When demand starts rising, production increases and this causes an increase in investment. There is a steady rise in output, income, employment, prices and profits. The businessmen gain confidence and become optimistic (positive). This increases investment. The stimulation of investment brings about the revival or recovery of the economy. Thus we see that, during the expansionary or prosperity phase, there is inflation and during the contraction or depression phase, there is deflation.
Q2. Monopoly is the situation where there exists a single control over the market producing a commodity having no substitutes, with no possibility for anyone to enter the industry to compete. In that situation, the monopolist will not charge a uniform price for all the customers in the market. Explain this situation and the pricing policy followed in it.
Ans: A market structure characterized by a single seller, selling a unique product in the market. In a monopoly market, the seller faces no competition, as he is the sole seller of goods with no close substitute.In a monopoly market, factors like government license, ownership of resources, copyright and patent and high starting cost make an entity a single seller of goods. All these factors restrict the entry of other sellers in the market. Monopolies also possess some information that is not known to other sellers.
Characteristics of monopoly: Only one single seller in the market, There is no competition, There are many buyers in the market, The firm enjoys abnormal profits, The seller controls the prices in that particular product or service and is the price maker, Consumers don’t have perfect information, There are barriers to entry. These barriers many be natural or artificial, The product does not have close substitutes.
Advantages of monopoly
Monopoly avoids duplication and hence wastage of resources.
Because monopolies earn large profits, these can be used for research and development and to maintain their status as a monopoly.
Monopolies may use price discrimination, which benefits the economically weaker sections of society. Monopolies can afford to invest in the latest technology and machinery in order to be efficient and to avoid competition.
Disadvantages of monopoly
Poor level of service, No consumer sovereignty, Consumers may be charged high prices for low quality of goods and services, Lack of competition may lead to low quality and out dated goods and services.
Price Discrimination: the ability to charge different prices to different individuals.
Need for price discrimination: to increase output and profit, to serve the differing buying patterns of individuals, and to increase economic welfare.
E.g.: air tickets, movie tickets, discount coupons, etc.
There are multiple types of price discrimination:
- First-degree price discrimination is an attempt by the seller to leave the price unannounced in advance and charge each customer the highest price they would be willing to pay for the purchase.
- A business may benefit by offering different prices to those who purchase in larger volumes because either they can increase their profit with the increased volume sales or their costs per unit decrease when items are purchased in volume. Businesses can create alternative pricing methods that distinguish high-volume buyers from low-volume buyers. This is second-degree price discrimination.
- Third-degree price discrimination is differential pricing to different groups of customers. One justification for this practice is that producing goods and services for sale to one identifiable group of customers is less than the cost of sales to another group of customers. For example, a publisher of music or books may be able to sell a music album or a book in electronic form for less cost than a physical form like a compact disc or printed text.
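A minimal sketch of third-degree price discrimination using the textbook inverse-elasticity (Lerner) rule, assuming constant-elasticity demand within each identifiable group; the marginal cost and elasticities are invented:

```python
def optimal_price(marginal_cost, elasticity):
    """Profit-maximising price when demand is q = k * p**(-elasticity)."""
    assert elasticity > 1, "demand must be elastic at the optimum"
    return marginal_cost * elasticity / (elasticity - 1)

c = 5.0                          # marginal cost, assumed
print(optimal_price(c, 4.0))     # price-sensitive group: ~6.67
print(optimal_price(c, 1.5))     # price-insensitive group: 15.0
```

The less elastic group is charged more, which is the mechanism behind segment-based pricing such as student discounts and differentiated airline fares.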
Q3. Fiscal policy is a package of economic measures of the government regarding public expenditure, public revenue, and public debt or borrowings. It is very important since it refers to the budgetary policy of the government. Explain fiscal policy and its instruments in detail.
Ans: Fiscal policy is the means by which a government adjusts its spending levels and tax rates to monitor and influence a nation’s economy. It is the sister strategy to monetary policy through which a central bank influences a nation’s money supply.
The instruments of fiscal policy are the automatic stabilizer and discretionary fiscal policy:
i) Automatic Stabilizer: The tax structure and expenditure are programmed in such a way that there is an increase in expenditure and a decrease in tax in recession, and a decrease in expenditure and an increase in tax revenue in a period of inflation. This refers to a built-in response to economic conditions without any deliberate action on the part of government. It is called a built-in stabilizer because it works to correct and thus restore economic stability. It works in the following manner. Tax revenue: tax revenue increases when income increases, as those who were not paying tax move into the higher income tax brackets. When there is a depression, incomes decrease, many people fall into the no-income-tax bracket, and the tax revenue decreases.
ii) Discretionary Fiscal Policy: Under this, to stabilize the economy, deliberate
attempts are made by the government in taxation and expenditure. It entails definite and
conscious actions.
Instruments of Fiscal Policy: some important instruments are:
1. Taxation: Taxation is always a very important source of revenue for both developed and developing countries. Tax comes under two headings: tax on individuals (direct tax) and tax on commodities (indirect tax or commodity tax).
a) Direct tax includes income tax, corporate tax, and taxes on property and wealth. Indirect tax is a tax on consumption; it includes sales tax, excise duty and custom duties. The direct tax structure can be divided into three bases (a small numerical sketch follows the list):
- Progressive tax: the higher the level of income, the greater the tax burden. As income increases, the tax contribution also increases: low-income groups pay low tax, whereas high-income groups pay higher tax.
- Regressive tax: as income increases, the contribution through tax decreases, so low-income people pay proportionally more and high-income people pay proportionally less. It is theoretically possible, though no government implements such a tax structure, because it leads to an unequal distribution of income.
- Proportional tax: the tax imposed is irrespective of the income earned; every income group, high or low, pays the same rate of tax.
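The following Python sketch illustrates the three bases numerically. The rates and the bracket threshold are invented purely for demonstration.

```python
# Toy tax schedules for the three direct-tax bases described above.
# All rates and brackets are illustrative assumptions.

def progressive_tax(income: float) -> float:
    """Two-bracket slab: income above 10,000 is taxed at a higher rate."""
    if income <= 10_000:
        return income * 0.10
    return 10_000 * 0.10 + (income - 10_000) * 0.30

def regressive_tax(income: float) -> float:
    """Average rate falls as income rises (rarely used in practice)."""
    rate = 0.30 if income <= 10_000 else 0.10
    return income * rate

def proportional_tax(income: float) -> float:
    """Flat rate for every income group."""
    return income * 0.20

for income in (5_000, 50_000):
    print(income, progressive_tax(income),
          regressive_tax(income), proportional_tax(income))
```

For an income of 5,000 the progressive schedule takes 10% while the regressive one takes 30%; at 50,000 the proportions reverse, which is exactly the contrast the three definitions describe.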
b) Indirect tax, or consumption tax: a tax imposed on every unit of a product sold.
Q4 Explain the various methods of forecasting demand?
Ans: Economic forecasting is the process of making predictions about the economy. Forecasts can be carried out at a high level of aggregation—for example for GDP, inflation, unemployment or the fiscal deficit—or at a more disaggregated level, for specific sectors of the economy or even specific firms.
Methods of forecasting demand:
Assumptions
For many goods, the length of the product cycle is shrinking. Not only does this make it more difficult to build a historical database, it accentuates the need to forecast correctly. Computer technology makes it possible to adjust pricing instantly and to modify sales promotions on the run. Without accurate historical information to measure the impact of price changes, the business owner may be forced to experiment. Sales performance of other goods with similar product attributes may serve as proxies for a current product with no track record.
Trend Analysis
If you have historical data — or if you can create it from related products — trend analysis is the first step in demand forecasting. Plotting sales over time will reveal the presence of a sales trend if one exists. If there are aberrations — “hiccups” in the trend — you can look for explanations, which could include price, weather or demographic changes. If you are proficient with spreadsheet programs, you can chart data points and insert a trend line over the data. A more sophisticated approach is least squares regression analysis, which can also be done with standard spreadsheet software; a minimal sketch follows.
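As a hedged illustration of the least-squares approach, the short Python function below fits a straight-line trend to monthly sales and extrapolates one period ahead. The sales figures are made-up example data.

```python
# Ordinary least squares fit of sales = a + b*t for trend analysis.
# The six monthly sales figures below are invented example data.

def least_squares_trend(sales):
    """Return intercept a and slope b of the fitted trend line."""
    n = len(sales)
    t = range(n)
    mean_t = sum(t) / n
    mean_s = sum(sales) / n
    b = (sum((ti - mean_t) * (si - mean_s) for ti, si in zip(t, sales))
         / sum((ti - mean_t) ** 2 for ti in t))
    a = mean_s - b * mean_t
    return a, b

sales = [120, 132, 129, 141, 150, 158]   # six months of unit sales
a, b = least_squares_trend(sales)
forecast_next = a + b * len(sales)        # extrapolate to month 7
print(round(a, 1), round(b, 1), round(forecast_next, 1))
```

The slope b is the average month-on-month growth in units; large residuals around the fitted line correspond to the “hiccups” mentioned above.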
Qualitative Forecasting
A more subjective approach uses expert opinions to predict demand. Especially useful when there is a lack of historical data, relying on the collective opinion of experts makes sense. Begin with an analysis of the marketplace, reviewing the economic conditions. Obtain as much information about competitors’ performance as you can. Then gather opinions from a variety of sources within your business. Include the owner, sales manager, accountant, attorney and any others whose opinion you value. If you wish, you can get outside opinions as well. Qualitative forecasting is based on the consensus view of your panel as you digest and aggregate their opinions.
Forecasting with Economic Indicators
Depending on the products you sell and the customers who buy them, basing your demand forecast on one or more economic indicators may be an effective method. This style of demand forecasting works better with industrial buyers than with retail. First, find the indicators that relate to your business. For example, small businesses in construction-related work can look to housing starts, building permits, loan applications and interest rates for solid indicators of the future. Businesses in agriculture can find clues to the future from farm income, interest rates and weather forecasts. The Departments of Commerce and Agriculture release statistics on an ongoing basis. Agricultural Extension Services and other state agencies provide complementary data.
Q5 Define monopolistic competition and explain its characteristics?
Ans: Monopolistic Competition: a market structure in which several or many sellers each produce similar, but slightly differentiated, products. Each producer can set its price and quantity without affecting the marketplace as a whole.
Monopolistically competitive markets exhibit the following characteristics:
- Each firm makes independent decisions about price and output, based on its product, its market, and its costs of production.
- Knowledge is widely spread between participants, but it is unlikely to be perfect. For example, diners can review all the menus available from restaurants in a town, before they make their choice. Once inside the restaurant, they can view the menu again, before ordering. However, they cannot fully appreciate the restaurant or the meal until after they have dined.
- The entrepreneur has a more significant role than in firms that are perfectly competitive because of the increased risks associated with decision making.
- There is freedom to enter or leave the market, as there are no major barriers to entry or exit.
- A central feature of monopolistic competition is that products are differentiated. There are four main types of differentiation:
- Physical product differentiation, where firms use size, design, colour, shape, performance, and features to make their products different. For example, consumer electronics can easily be physically differentiated.
- Marketing differentiation, where firms try to differentiate their product by distinctive packaging and other promotional techniques. For example, breakfast cereals can easily be differentiated through packaging.
- Human capital differentiation, where the firm creates differences through the skill of its employees, the level of training received, distinctive uniforms, and so on.
- Differentiation through distribution, including distribution via mail order or through internet shopping, such as Amazon.com, which differentiates itself from traditional bookstores by selling online.
- Firms are price makers and are faced with a downward-sloping demand curve. Because each firm makes a unique product, it can charge a higher or lower price than its rivals. The firm can set its own price and does not have to ‘take’ it from the industry as a whole, though the industry price may serve as a guideline or become a constraint. This also means that the demand curve will slope downwards.
- Firms operating under monopolistic competition usually have to engage in advertising. Firms are often in fierce competition with other (local) firms offering a similar product or service, and may need to advertise on a local basis, to let customers know their differences. Common methods of advertising for these firms are through local press and radio, local cinema, posters, leaflets and special promotions.
- Monopolistically competitive firms are assumed to be profit maximisers, because firms tend to be small, with entrepreneurs actively involved in managing the business.
- There is usually a large number of independent firms competing in the market.
Q6 When should a firm in perfectly competitive market shut down its operation?
Ans Definition of ‘Perfect Competition’
A market structure in which the following five criteria are met:
1) All firms sell an identical product;
2) All firms are price takers – they cannot control the market price of their product;
3) All firms have a relatively small market share;
4) Buyers have complete information about the product being sold and the prices charged by each firm; and
5) The industry is characterized by freedom of entry and exit.
Perfect competition is sometimes referred to as “pure competition”.
The reason for firm shut down in perfect competition
A perfectly competitive firm is presumed to shut down production and produce no output in the short run if price is less than average variable cost. This is one of three short-run production alternatives facing a firm. The other two are profit maximization (if price exceeds average total cost) and loss minimization (if price is greater than average variable cost but less than average total cost).
A perfectly competitive firm guided by the pursuit of profit is inclined to produce no output if the quantity that equates marginal revenue and marginal cost in the short run incurs an economic loss greater than total fixed cost. The key to this loss-minimization production decision is a comparison of the loss incurred from producing with the loss incurred from not producing. If price is less than average variable cost, then the firm incurs a smaller loss by not producing than by producing.
One of Three Alternatives: shutting down is one of three short-run production alternatives facing a perfectly competitive firm. The other two are profit maximization and loss minimization.
With profit maximization, price exceeds average total cost at the quantity that equates marginal revenue and marginal cost. In this case, the firm generates an economic profit.
With loss minimization, price is greater than average variable cost but is less than average total cost at the quantity that equates marginal revenue and marginal cost. In this case, the firm incurs a smaller loss by producing some output than by not producing any output.
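The three alternatives reduce to a simple comparison of price against average total and average variable cost. Here is a hedged Python sketch of that decision rule; it assumes the three figures are already evaluated at the output where marginal revenue equals marginal cost.

```python
# Short-run production decision for a perfectly competitive firm.
# price, atc and avc are assumed to be measured at the MR = MC output.

def short_run_decision(price: float, atc: float, avc: float) -> str:
    if price > atc:
        return "produce: economic profit (profit maximization)"
    if price > avc:
        return "produce: loss smaller than total fixed cost (loss minimization)"
    return "shut down: price does not cover average variable cost"

print(short_run_decision(price=12.0, atc=10.0, avc=7.0))  # profit
print(short_run_decision(price=9.0,  atc=10.0, avc=7.0))  # loss minimization
print(short_run_decision(price=6.0,  atc=10.0, avc=7.0))  # shut down
```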
| https://www.ukessays.com/essays/business/phases-business-cycle-4856.php |
Gutierrez-Hita, Carlos and Martinez-Sanchez, Francisco (2013): Environmental Policy to Foster a Green Differentiated Energy Market.
Abstract
Many products are made by technological processes that cause environmental damage. Current environmental concerns are affecting firms' technological processes as a result of government intervention in markets but also due to environmental awareness on the part of consumers. This paper assumes a spatial competition model where two firms sell a homogeneous product with input differentiation: the product is made by green and polluting inputs. In a two-stage game firms first decide what technology bundle to use (the ratio of green and polluting inputs) and then Bertrand competition takes place. First, it is shown that in the absence of government intervention both firms prefer to produce by using a bundle of green and polluting technologies which is not welfare maximizing. Second, the option of subsidizing green technology and the existence of a publicly-owned firm are analyzed. Overall, both policies yield a more environmentally-friendly technology bundle, except when costs of green energy technologies are high enough. Moreover, environmental social welfare is enhanced.
Item Type: MPRA Paper
Original Title: Environmental Policy to Foster a Green Differentiated Energy Market
Language: English
Keywords: Differentiated inputs; Environmental policy; Green market; Mixed duopoly; Subsidy
JEL Subjects: D11 (Consumer Economics: Theory); D43 (Oligopoly and Other Forms of Market Imperfection); L11 (Production, Pricing, and Market Structure; Size Distribution of Firms)
Item ID: 47263
Depositing User: Dr. Francisco Martinez-Sanchez
Date Deposited: 29 May 2013 14:08
Last Modified: 06 Oct 2019 01:41
| https://mpra.ub.uni-muenchen.de/47263/ |
1. Showrooming and Webrooming: Information Externalities between Traditional and Online Sellers, Marketing Science, accepted, December 9, 2017.
2. Behavior-Based Pricing, Production Efficiency and Quality Differentiation, Management Science, July 2017.
3. Lowering Customer Evaluation Costs, Product Differentiation, and Price Competition, Marketing Science, Jan-Feb 2016.
4. Customer Recognition in Experience versus Inspection Good Markets, Management Science, January 2016.
5. Finance Sourcing in a Supply Chain, with A. Seidmann, Decision Support Systems, February 2014.
6. Equilibrium Financing in a Distribution Channel with Capital Constraint, with X. Chen and G. Cai, Production and Operations Management, Nov-Dec 2012.
7. Seller Honesty and Product Line Pricing, Quantitative Marketing and Economics, Oct-Dec 2011.
8. Social Learning and Dynamic Pricing of Durable Goods, Marketing Science, Sep-Oct, 2011.
9. Product Line Competition and Price Promotions, with Z. J. Zhang, Quantitative Marketing and Economics, July-September 2011.
10. Exogenous Learning, Seller-Induced Learning, and Marketing of Durable Goods, Management Science, October 2011.
11. Pricing Experience Goods: The Effects of Customer Recognition and Commitment, Journal of Economics and Management Strategy, 20, 2, 2011.
12. Putting One-to-One Marketing to Work: Personalization, Customization and Choice, with N. Arora, X. Dreze, A. Ghose, J. Hess, R. Iyengar, Y. Joshi, V. Kumar, N. Lurie, S. Neslin, S. Sajeesh, M. Su, N. Syam, J. Thomas, and Z. J. Zhang, Marketing Letters, December 2008.
13. Finitely Loyal Customers, Switchers and Equilibrium Price Promotion, with Z. Wen, Journal of Economics and Management Strategy, Fall 2008.
14. Product Differentiation under Imperfect Information: When does Offering a Lower Quality Pay? Quantitative Marketing and Economics, March 2007.
15. Network Externalities and Market Segmentation in a Monopoly, Economics Letters, April 2007.
16. On the Profitability of Firms in a Differentiated Industry, Marketing Science, May-June 2006.
17. Product Customization and Price Competition on the Internet, with R. Dewan and A. Seidmann, Management Science, Aug. 2003.
18. Adoption of Internet-based Product Customization and Pricing Strategies, with R. Dewan and A. Seidmann, Journal of Management Information Systems, Fall 2000. | http://hwc666.com/faculty/professor_team/detail/156/JINGBing.html |
Many examples of monopolistic competition exist, such as food shops, coffee stores and pizza businesses. In monopolistic competition, products are non-homogeneous. Monopolistically competitive firms act like monopolies in the short run, but the differentiation of products erodes with greater competition: demand decreases and average total cost increases, resulting in zero economic profit in the long run.
Grab recently came into the spotlight with the Malaysian Competition Commission. But here is how it really works in real life. The market monopoly is more than just a game: a combination of two “games”. As we explained earlier, a monopoly is when a company dominates a particular market. The company is said to have great control over the market, which indirectly gives it considerable power of control.
MARKET STRUCTURES BY: Eghosa Okungbowa
Perfect competition is a market structure in which a large number of firms all produce the same product; all firms in a perfectly competitive market sell the same product for the same price. Monopoly and oligopoly are economic market conditions: monopoly is defined by the dominance of just one seller in the market, while oligopoly is an economic situation where a number of sellers populate the market.
In perfect competition, participants are considered to be more or less equal. However, the criteria for creating a perfectly competitive market are very strict and are not often met in real-life economies or market situations. When these criteria are not met, the market is called imperfectly competitive. Imperfect competition is very common. | http://abney.vipvipslot.xyz/original/Monopoly-competition-examples-in-real-life.html |
The top management function is usually conducted by the CEO of the company in coordination with the COO or President, Vice Presidents, etc.

What are the responsibilities of top management? Top management responsibilities, especially those of the CEO, involve getting things accomplished through and with others in order to meet the corporate objectives. The CEO in particular must successfully handle two responsibilities crucial to the effective strategic management of the company:
- Provide executive leadership and strategic vision.
- Manage the strategic planning process.

Executive leadership is the directing of activities toward the accomplishment of corporate objectives. Strategic vision is a description of what the company is capable of becoming; it is often communicated in the mission statement. People in an organisation want to have a sense of mission, but only top management is in the position to specify and communicate this strategic vision to the general workforce, and top management's enthusiasm (or lack of it) about that vision tends to spread through the organisation.

The importance of executive leadership is illustrated by John Welch, Jr, the successful Chairman and CEO of General Electric Company (GE). According to Welch, good business leaders create a vision, articulate the vision, passionately own the vision and relentlessly drive it to completion.

Three key characteristics of CEOs:
- The CEO articulates a strategic vision.
- The CEO presents a role for others to identify with and to follow, e.g. dress, attitude and values.
- The CEO not only communicates high performance standards, but also shows confidence in the followers' abilities to meet these standards.

Vision Statements
A strategic vision is a road map of a company's future: the direction it is headed, the business position it intends to stake out, and the capabilities it plans to develop (Thompson and Strickland, 1998).
As Thompson (1997) observes, whilst mission statements have become increasingly popular for organisations, vision statements are less prevalent. Nonetheless, the lack of a published statement does not necessarily indicate a lack of vision. Where they exist, they reflect the company's vision of some future state which, ideally, the organisation will achieve.

Mission Statements
A good mission statement describes an organisation's purpose, customers, products or services, markets, philosophy, and basic technology. McGinnis (1981) suggests that a good mission statement should:
- Define what the organisation is and what the organisation aspires to be.
- Be limited enough to exclude some ventures and broad enough to allow for creative growth.
- Distinguish a particular organisation from all others.
- Serve as a framework for evaluating both current and prospective activities.
- Be stated in terms clear enough to be widely understood throughout the organisation.

The Importance of a Mission Statement
- To ensure unanimity of purpose within the organisation.
- To provide a basis, or standard, for allocating organisational resources.
- To establish a general tone or organisational climate.
- To serve as a focal point for individuals to identify with the organisation's purpose and direction, and to deter those who cannot from participating further in the organisation's activities.
- To facilitate the translation of objectives into a work structure involving the assignment of tasks to responsible elements within the organisation.
- To specify organisational purposes and the translation of these purposes into objectives in such a way that cost, time, and performance parameters can be assessed and controlled.

The Nature of a Business Mission
- A declaration of attitude.
- A resolution of divergent views.
- A customer orientation.
- A declaration of social responsibility.

Components of a Mission Statement
- Customers: who are the enterprise's customers?
- Products or services: what are the firm's major products or services?
- Markets: where does the firm compete?
- Technology: what is the firm's basic technology?
- Concern for survival, growth, and profitability: what is the firm's commitment towards economic objectives?
- Philosophy: what are the basic beliefs, values, and philosophical aspirations of the firm?
- Self-concept: what are the firm's major strengths and competitive advantages?
- Concern for public image: what is the firm's public image?
- Concern for employees: what is the firm's attitude toward employees?

ANALYSING THE EXTERNAL ENVIRONMENT
The external environment can be analysed under political, economic, sociocultural, technological and legal factors:
- Sociocultural factors: population demographics, income distribution, social mobility, attitudes to work and leisure, consumerism.
- Legal factors: competition law, employment law, health and safety.
- Economic factors: business cycles, GNP trends, interest rates, money supply, inflation, unemployment, disposable income.
- Political factors: government stability, taxation policy, foreign trade policy, social welfare policy.
- Technological factors: government spending on research, government and industry focus on technological effort, new discoveries/developments, speed of technology transfer.

Industry and Competitive Analysis
Purpose and contributions of industry and competitive analysis:
- Identifying and selecting the company's competitive arena by defining its industry and served markets.
- Identifying business opportunities.
- Producing a benchmark for evaluating the company.
- Shortening the company's response time to competitors' moves.
- Restricting or preempting competitors' moves.
- Encouraging organisational development through helping the company to gain a competitive advantage, promoting learning from the competition, and aiding in the development of the strategy and its successful implementation.

The process of industry and competitive analysis:
- Defining and choosing the boundaries of the industry and the company's served market.
- Understanding the structure of the industry.
- Analysing the forces of competition.
- Determining key success factors.
- Conducting strategic group analysis.
- Performing competitive intelligence.
Analysing the Forces of Competition
The nature and degree of competition in any industry depend on five basic forces: the threat of new entrants, the bargaining power of buyers, the bargaining power of suppliers, the threat of substitute products or services, and rivalry among existing firms.

The Threat of New Entrants. Entry barriers include economies of scale, product differentiation, capital requirements, cost disadvantages independent of size, access to distribution channels, and regulatory policies.

Bargaining Power of Buyers. Buyer power is determined by:
- The concentration and size of buyers.
- The relative volume of the buyer's purchases in the market, and the relative importance to the buyer of the purchase in terms of both cost and quality.
- The degree of product standardisation.
- The costs, practicability and opportunity for buyers to switch suppliers.
- The extent to which buyers are well informed about suppliers' products, prices, and costs.
- The degree to which buyers are price sensitive.
- The degree of threat of backward integration by buyers.

Bargaining Power of Suppliers. The power of suppliers is affected by:
- Concentration amongst suppliers.
- The degree to which suppliers are able effectively to differentiate their product or service.
- The extent to which the buyer is important to the supplier.
- The availability (or otherwise) of close substitutes as satisfactory inputs to the buyer's requirements.
- The potential for, or threat of, forward integration by suppliers.

Threat of Substitute Products. The threat posed by substitutes depends on:
- Whether attractively priced substitutes are available.
- How satisfactory the substitutes are in terms of quality, performance, and other relevant attributes.
- The ease with which buyers can switch to substitutes.

Rivalry Among Existing Competitors. The intensity of rivalry depends on:
- The number and diversity of competitors, and the degree of balance (or equality) between their relative market strengths.
- The rate of growth of the industry.
- The degree to which product differentiation is effective.
- Whether fixed costs are high or the product is perishable, creating a strong temptation to cut prices.
- The degree to which capacity is increased in large increments.
- The extent to which competitors are aware of the strategies of their rivals.
- Exit barriers, and the costs of leaving the industry.

Together, the threat of new entrants, the threat of substitutes, competitive rivalry, the bargaining power of suppliers and the bargaining power of buyers make up the forces of competition.

The Nature of Competition
- Price competition, which may reduce industry margins and profits or drive some businesses out of the market.
- Non-price competition in mature markets, based on brand and product differentiation, promotion, new product development, etc.
- Locking in customers or channels by the use of discounts, credit and preferential financial arrangements, etc.
- Mergers and takeovers of competitors or newcomers so as to consolidate and protect market position.
- Direct government regulation and intervention.
Industry Key Success Factors (KSFs)
Relevant questions for analysis: On what basis do buyers of the industry's product choose between the brands of the sellers? Given the nature of competitive rivalry, what resources and competitive capabilities does a company need to have to be successful? What shortcomings are almost certain to put a company at a significant competitive disadvantage?
Common types of key success factors are technology-related, manufacturing-related, distribution-related, marketing-related, skills-and-capability-related, and other KSFs:
- Technology-related KSFs: expertise in a particular technology or in scientific research; proven ability to improve production processes.
- Manufacturing-related KSFs: ability to achieve scale economies and/or capture learning-curve effects; quality-control know-how; high utilization of fixed assets; access to attractive supplies of skilled labour; high labour productivity; low-cost product design and engineering; ability to manufacture or assemble products that are customized to buyer specifications.
- Distribution-related KSFs: a strong network of wholesale distributors or dealers; ability to secure favourable display space on retailers' shelves.
- Marketing-related KSFs: breadth of product line and product selection; a well-known and well-respected brand name; fast, accurate technical assistance; courteous, personalized customer service; accurate filling of buyer orders; customer guarantees and warranties.
- Skills-and-capability-related KSFs: a talented workforce; national or global distribution capabilities; product innovation capabilities; design expertise; short delivery-time capabilities; supply chain management capabilities.
- Other types of KSFs: overall low cost; convenient location; ability to provide fast, convenient, after-the-sale repairs and services; a strong balance sheet and access to financial capital; patent protection.

Competitive Analysis
Guidelines for effective competitive analysis:
- Identify key competitors, even if they have different organisational types than your company.
- Identify substitutes, both domestic and foreign, whether traditional or nontraditional.
- Use both formal and informal means of collecting information about your competition. Informal sources of data, while costly, are sometimes more revealing and informative than official information.
- Develop knowledge of the national and organisational cultures. This knowledge is important in gaining access to vital information and in accurately interpreting data about the competition.
- Give special attention to the network (group) of companies to which the competitors may belong. Not only do such networks determine access to markets and resources, they also ensure coordination of strategic moves.
- As with domestic competitive analysis, pay attention to the competitors' unique attributes. You should delve deeply into their operations, culture, and organisation.

INTERNAL ENVIRONMENTAL ANALYSIS
The functional areas covered by an internal environmental analysis are finance and accounting, production and operations, marketing, research and development, human resources, and organisational structure.
Finance and Accounting
The financial condition of an organisation is usually considered the most critical measure of its competitive position and overall attractiveness to investors. As David (1989) points out, an organisation's liquidity, leverage, working capital, profitability, asset utilisation, cash flow, and equity can eliminate some strategies as feasible alternatives. Finance and accounting comprise three decisions: the investment decision, the financing decision, and the dividend decision.

Production and Operations
The basic functions of production management are process, capacity, inventory, work force, and quality:
- Process: process decisions concern the design of the physical production system. Specific process decisions include choice of technology, facility layout, process flow analysis, facility location, line balancing, process control, and transportation.
- Capacity: capacity decisions concern the determination of optimal output levels for the organisation, not too much and not too little. Specific decisions include forecasting, facilities planning, aggregate planning, scheduling, capacity planning, and queuing analysis.
- Inventory: inventory decisions involve managing the level of raw materials, work in process, and finished goods. Specific decisions include what to order, when to order, how much to order, and material handling.
- Work force: work force decisions are concerned with managing the skilled, unskilled, clerical, and managerial employees. Specific decisions include job design, work measurement, job enrichment, work standards, and motivation techniques.
- Quality: quality decisions are aimed at assuring that high-quality goods and services are produced. Specific decisions include quality control, sampling, testing, quality assurance, and cost control.

Marketing
Marketing can be defined as the process of defining, anticipating, creating, and fulfilling customers' needs and wants for products and services. Closely allied with an organisation's production and operations capability is its marketing capability: its ability to produce the right product or service and deliver it at the right place at the right time.

Research and Development
Many organisations today conduct no research and development, and yet many other organisations depend on successful R&D activities for survival. Organisations pursuing a product development strategy especially need to have a strong R&D orientation. Byars et al's (1996) submission is that every organisation, whether it has a formal research and development function or not, must be concerned about its ability to develop new products and services.

Human Resources
As Byars et al rightly point out, all the activities of an organisation are significantly influenced by the quality and quantity of its human resources.

Organisational Structure
All organisations produce and market their products through an organisational structure. This structure can either help or hinder an organisation in achieving its objectives.

SWOT Analysis
According to Thompson and Strickland (1998), sizing up an organisation's resource strengths and weaknesses and its external opportunities and threats, commonly known as SWOT analysis, provides a good overview of whether an organisation's business position is fundamentally healthy or unhealthy.
Potential Resource Weaknesses and Competitive Deficiencies No clear strategic direction Obsolete facilities A weak balance sheet; burdened with too much debt Higher overall unit costs relative to key competitors Falling behind in R&D Weak brand image or reputation Potential Company
Opportunities Serving additional customer groups into new geographic markets or product segments Expanding the companys product line to meet a broader range of customer needs Transferring the company skills or technological know-how to new products or businesses Integrating forward or backward Potential External Threats to a Companys Well-Being Likely entry of potent new
competitors Loss of sales to substitute products Slowdowns in market growth Costly new regulatory requirements Growing bargaining power of customers or suppliers Adverse demographic changes The five primary activities (sometimes called line functions) are
inbound logistics, operations, outbound logistics, marketing and sales, and service. They represent activities of physically creating the product or service, and marketing and transferring it to the buyer, together with after-sale service. Inbound Logistics
They are activities, costs, and assets associated with obtaining fuel, energy, raw materials, parts components, merchandise, and consumable items from vendors; receiving, storing and disseminating inputs from suppliers;
inspection; and inventory Operations Activities, costs, and assets associated with converting inputs into final product form (production, assembly, packaging, equipment
maintenance, facilities, operations, quality assurance, and environmental protection). Outbound Logistics Activities, costs, and assets dealing with physically distributing the product to buyers (finished
goods warehousing, order processing, order picking and packing, shipping, delivery vehicle operations). Marketing and Sales Activities, costs and assets related to sales force
efforts, advertising and promotion, market research and planning and dealer/distributor Service Activities, costs, and assets associated with providing assistance to buyers, such as installation, spare parts
delivery, maintenance and repair, technical assistance, buyer inquiries and complaints They are linked to four support activities procurement, technology development, human resource management, and
general administration. They assist the firm as a whole by providing infrastructure or inputs that allow the primary activities to take place on an ongoing basis. General Administration Activities costs, and assets relating
to general management, accounting and finance, legal and regulatory affairs, safety and security, management information systems, and other overhead functions. Human Resources Management Activities costs, and assets associated
with the recruitment, hiring, training, development and compensation of all types of personnel; labor relations activities; developments of knowledge-based skills. Research, Technology, and Systems Development Activities, costs, and assets relating to product R & D, process R & D, process
design improvement, equipment design, computer software development, telecommunications systems, computer-assisted design and engineering, new data-base capabilities, and development of computerized support systems. Procurement
Activities, costs and assets associated with purchasing and providing raw materials, supplies, services and outsourcing necessary to support the firm and its activities. Sometimes this
activity is assigned as part of a firms inbound logistic The value chain includes a profit margin since a mark-up above the cost of providing a firms valueadding activities is normally part of the price paid by the buyer creating value exceeds cost so as to generate
IDENTIFYING STRATEGIC ALTERNATIVES

Stable Growth Strategies
The organisation is satisfied with its past performance and decides to continue to pursue the same or similar objectives. Each year the level of achievement expected is increased by approximately the same percentage, and the organisation continues to serve its customers with essentially the same products and services.

Growth Strategies
Growth firms do not necessarily grow faster than the economy as a whole, but they do grow faster than the markets in which their products are sold. They tend to have larger-than-average profit margins. They attempt to postpone or even eliminate the danger of price competition in their industry. Instead of adapting to changes in the outside world, they tend to adapt the outside world to themselves by creating something, or a demand for something, that did not exist before.

Ansoff's matrix (product-market growth matrix) maps present and new products against present and new markets:
- Present products, present markets: market penetration.
- New products, present markets: product development.
- Present products, new markets: market development.
- New products, new markets: diversification.
When is each appropriate?

Market Penetration
- Current markets are not saturated with your particular product or service.
- The usage rate of present customers could be significantly increased.
- The market shares of major competitors have been declining while total industry sales have been increasing.
Market Development
- New channels of distribution are available that are reliable, inexpensive, and of good quality.
- An organisation is very successful at what it does.
- New untapped or unsaturated markets exist.
- An organisation has the needed capital and human resources to manage expanded operations.
- An organisation has excess production capacity.
- An organisation's basic industry is rapidly becoming global in scope.
is characterized by rapid technological developments. Major competitors offer better quality products at comparable prices. An organisation competes in a high-growth industry. Diversification Strategies in Action Concentric Diversificatio n
Conglomerat e They are appropriate under the following conditions: Concentric Diversification An organisation competes in a no-growth or a slow-growth industry. Adding new, but related, products would significantly enhance the sales of current products. New, but related, products could be
offered at high competitive prices. New, but related, products have seasonal sales levels that counterbalance an organisation's existing peaks and valleys. An organisation's products are currently in the decline stage of their life cycles. An organisation has a strong management Conglomerate Diversification An organisation's basic industry is
experiencing declining annual sales and profits. An organisation has the capital and managerial talent needed to compete successfully in a new industry. There exists financial synergy between the acquired and acquiring firm; note that a key difference between concentric and conglomerate diversification is that the former should
be based on some commonality in markets, products, or technology; whereas the latter should be based more on profit considerations. Market/customer base Product/Service Existing Modified/improved
New but related New and unrelated Existing New Market penetration Market development Product development Concentric diversification Horizontal diversification Conglomerate diversification
Defensive Strategies: turnaround and divestment.
Combination Strategies: pursuing two or more strategies simultaneously or in combination.

Generic Strategies
There are three main generic strategies:
- Striving for overall low-cost leadership in the industry.
- Striving to create and market unique products for varied customer groups through differentiation.
- Striving to have special appeal to one or more groups of consumers or individual buyers, focusing on their cost or differentiation concerns.
The generic strategies map competitive scope (broad or narrow target) against the source of competitive advantage (lower cost or differentiation):
- Broad target, lower cost: Cost Leadership.
- Broad target, differentiation: Differentiation.
- Narrow target, lower cost: Cost Focus.
- Narrow target, differentiation: Differentiation Focus.
Gaining competitive advantage therefore rests on cost leadership, differentiation, or focus.
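As a tiny illustration, the grid above can be expressed as a lookup table; the labels are exactly the four cells just listed, nothing more.

```python
# Porter's generic-strategy grid as a (scope, advantage) lookup.

STRATEGY = {
    ("broad", "lower cost"): "Cost Leadership",
    ("broad", "differentiation"): "Differentiation",
    ("narrow", "lower cost"): "Cost Focus",
    ("narrow", "differentiation"): "Differentiation Focus",
}

print(STRATEGY[("narrow", "lower cost")])  # Cost Focus
```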
Cost Leadership
Cost leadership emphasises producing standardised products at very low per-unit cost for many consumers who are price-sensitive. Avenues for achieving cost advantage in a low-cost provider strategy include controlling cost drivers (economies or diseconomies of scale, learning-curve effects, capacity utilization effects) and revamping the value chain (simplifying product design, relocating resources, making greater use of the internet).
A low-cost provider strategy works best when:
- Price competition is fierce.
- Products are homogeneous or similar/identical.
- Product differentiation is difficult.
- Most buyers use the product in the same ways.
- Buyers incur low switching costs.
- Predatory pricing is feasible.
- Buyers are large and have significant power to bargain down prices.
Differentiation
Differentiation refers to producing products and services considered unique industry-wide and directed at consumers who are relatively price-insensitive.
A differentiation strategy works best when:
- There are many ways to differentiate the product or service.
- Buyers perceive the differences as having value.
- Buyer needs and uses are diverse.
- Few rival firms are following a similar differentiation approach.
Pitfalls of a differentiation strategy:
- The differentiation may not enhance perceived value and yet may be costly.
- Over-differentiating, so that quality or service levels exceed buyers' needs.
- Trying to charge too high a price premium.
- Tinkering with differentiation.

Focus
Focus has to do with producing products and services that fill the needs of small groups of consumers.
International Strategic Choices
In choosing an entry mode, a firm should consider the following issues: the firm's marketing objectives, the firm's size, mode availability, method quality, risks, human resource requirements, market information feedback, and learning-curve requirements.
The main entry modes are exporting, licensing, franchising, joint ventures, and foreign direct investment.

Exporting
Exporting represents an initial stage in a company's international participation. It is the easiest, cheapest and most commonly used mode of entry.

Licensing and Other Contractual Agreements
In addition to exporting, the company can use licensing as a means of entering foreign markets. Thus, for a fee, the company transfers one or more of its intangible assets (such as a trade secret, patent, or trademark) to foreign firms. Several factors encourage companies to engage in licensing agreements:
- Licensing diffuses the technology and establishes it as the industry's dominant standard.
- Licensing encourages others to use the company's technology so it can control the market or preempt rival technologies. For example, with the use of licensing agreements, Microsoft's Windows became a dominant standard.
- Royalties generated from licensing are a major source of revenue on products already considered mature in the domestic markets.
- International licensing keeps the company's technology or trademark in use.
- The company may use cross-licensing to obtain information about other technologies, products, or processes. For instance, a company may allow another company to use its technology in return for access to that company's new technology.
- For users, licensing can speed up access to vital technological innovation and can help fill gaps in their own capabilities.
- Sometimes the R&D activities of a company generate products or technologies that fall outside the company's mission. In this case, the company uses licensing to make use of its technology without assuming the associated risks.

Franchising
This approach is in fact becoming a popular mode of entering foreign markets. In franchising, a company authorises other companies to do business in a specific manner. Soft drink and fast-food companies have used franchising to expand internationally.

Joint Ventures
Several factors encourage the use of international joint ventures, including:
- The company's major industry is experiencing technological volatility.
- Significant entry barriers exist in the target foreign market.
- There is a need for major economies of scale.
- The company has expertise in international operations.

Foreign Direct Investment
Establishing and running a production facility in an overseas market demonstrates the fullest commitment to that market. Production capacity can be built from scratch or, alternatively, an existing firm can be acquired.
STRATEGY ANALYSIS AND SELECTION

Boston Consulting Group's Growth-Share Matrix
The matrix plots market growth (high or low) against relative market share (high or low), giving four cells: Stars (high growth, high share), Question Marks (high growth, low share), Cash Cows (low growth, high share), and Dogs (low growth, low share).
The following steps are generally followed in using the growth-share matrix in strategy evaluation and selection:
- Divide the company into its business units. Many organisations perform this step when they establish strategic business units (SBUs). On the matrix, a circle is used to depict each individual business unit.
- Determine each business unit's relative size within the total organisation. Relative size can be measured in terms of assets employed in the business unit as a percentage of total assets, or in terms of sales of the business unit as a percentage of total sales. On the matrix, the area inside the circle represents the relative size of the business unit.
- Determine the market growth rate for each business unit.
- Determine the relative market share of each business unit.
- Develop a graphical picture of the company's overall portfolio of businesses.
- Select a strategy for each business unit based on its position in the company's overall portfolio of businesses.
A Cash Cow is a leading SBU (high market share) in a mature or declining industry (low growth). A Dog is an SBU with low market share and low market growth. A Question Mark is an SBU with low market share and a high market growth rate. A Star is a leading SBU (high market share) in a high-growth industry.
BCG uses market share and market growth to suggest the strategic choice for individual business units. The four major strategic choices identified are: increase market share, hold market share, harvest, and divest. Broad guidance for each quadrant is as follows (a classification sketch appears after the list):
- Stars (high growth, high share): highly profitable; allocate enough resources to maintain market share.
- Cash cows (low growth, high share): major sources of profit; invest enough to maintain market share and use surplus profit to finance stars.
- Question marks (high growth, low share): require spending which is disproportionate to their growth potential; candidates for divestment.
- Dogs (low growth, low share): unprofitable; abandon quickly and reallocate resources elsewhere.
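The quadrant logic above lends itself to a small classifier. The 10% growth and 1.0 relative-share cut-offs below are common textbook conventions, assumed here purely for illustration.

```python
# Growth-share (BCG) quadrant classifier with assumed cut-offs.

def bcg_quadrant(market_growth: float, relative_share: float) -> str:
    high_growth = market_growth >= 0.10   # assumed 10% growth threshold
    high_share = relative_share >= 1.0    # assumed parity threshold
    if high_growth and high_share:
        return "Star: invest to maintain market share"
    if not high_growth and high_share:
        return "Cash cow: maintain share, use surplus to fund stars"
    if high_growth and not high_share:
        return "Question mark: build selectively or divest"
    return "Dog: abandon quickly, reallocate resources"

print(bcg_quadrant(market_growth=0.15, relative_share=1.4))  # Star
print(bcg_quadrant(market_growth=0.03, relative_share=0.4))  # Dog
```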
Planning Grid
The planning grid uses two dimensions to evaluate business units: business unit strength and industry attractiveness.
- Business unit strength is influenced by factors such as management quality, market share held, profitability, competitive position, image, and the employees of the business unit.
- Industry attractiveness is judged on a number of factors: size of market, market growth rate, industry profitability, technological advances, competitive structure, and similar considerations.
Growth (Growth/Defense & Hold) Borderline (Shrink/Harvest/Rebuild) No Growth (Divest/Exit/Turnaround) Medium Low Competitive Strategy Formulation Porter contends that every firm competing
in an industry has a competitive strategy, whether it is explicit or implicit. He further contends that competitive strategy formulation involves the consideration of four key factors: Company strengths and weaknesses Industry opportunities and threats Personal values of the key managers, and
Broader societal Process for Formulating a Competitive Strategy A. What is the Business Doing Now? Identification What is the implicit or explicit current strategy? Implied Assumptions What assumptions about
the companys relative position, strengths, and weaknesses, competitors, and industry trends must be made for the current strategy to make sense? B. What is happening in the Environment? Industry Analysis
What are the key factors for competitive success and the important industry opportunities and threats? Competitor Analysis What are the capabilities and limitations of existing and potential competitors, and their probable future moves? Societal Analysis What important governmental, social, and political factors will present opportunities or threats? Strengths and Weaknesses Given an analysis of industry and competitors, what are the companys strengths and weaknesses relative to
present and future competitors? C. What should the Business Be Doing? Tests of Assumptions and Strategy How do the assumptions embodied in the current strategy compare with the analysis in B above? Strategic Alternatives What are the feasible strategic alternatives given the analysis above? (Is the current strategy one of these?)
Strategic Choice Which alternative best relates the Life Cycle Approach The life cycle approach to strategy evaluation and selection classifies business units in an organisation by industry maturity and by
competitive position The approach postulates that industries can be grouped into the following stages of maturity. Embryoni c Growth Mature Ageing Embryonic characterised by rapid growth, rapid changes in technology,
pursuit of new customers, and fragmented and changing shares of market. Growth characterised by rapid growth; but customers, market share, and technology are better known and entry into the industry is more difficult. Mature characterised by stability in known customers, technology, and market shares. The industry can, however, still be competitive.
Ageing characterised by falling demand, declining number of competitors, and, in Profit Impact of Market Strategy (PIMS) Model The Strategic Planning Institute (SPI) develops and manages the Profit Impact of Market Strategy (PIMS)
database. Portfolio Analysis which examine an entire portfolio of businesses, identify widespread problems or opportunities, and propose resource allocations to specific businesses. Customer Profiling a process for identifying quality improvement opportunities for winning
in the marketplace. Special Studies on the PIMS Database designed to shed light on specific problems facing a particular business. Analyses of Troubled Businesses which facilitate the design of turnaround strategies. Strategic Planning Process a process to
Qualitative Factors in the Strategy Evaluation and Selection Process Managerial attitudes toward risk. Environment of the organisation. Organisational culture and power relationships.
Competitive actions and reactions. Influence of previous Developing and Communicating Concise Policies Policies are designed to guide the behaviour of managers in relation to the
pursuit and achievement of strategies and objectives. They can guide either thoughts or actions or both by indicating what is Policies can be either advisory,
leaving decision makers with some flexibility, or mandatory, whereby managers have no discretion. Koontz and ODonnell (1968) suggest that mandatory policies should be regarded as rules rather than policies. They argue that mandatory
policies tend to stop managers and other employees thinking about the most efficient and effective ways to carry out tasks and searching for improvements. Policies should guide rather than
They further argue that advisory policies should normally be preferred because it is frequently essential to allow managers some flexibility to respond and adapt to changes in both the organisation and the environment. Moreover, mandatory
policies are unlikely to motivate The Purpose of Policies Policies establish indirect control over independent action Policies promote uniform handling of similar
activities Policies ensure quicker decisions Policies institutionalise Policies reduce uncertainty in repetitive and day-to-day decision making Policies counteract resistant to
or rejection of chosen strategies by organisation members Policies counteract resistant to or rejection of chosen strategies by organisation members Policies counteract resistant to or rejection of chosen strategies by organisation members Formal, written policies have at least seven advantages: They required managers to think through the policys
meaning, content, and intended use. They reduced misunderstanding. They make equitable and consistent treatment of They ensured unalterable transmission of policies.
They communicate the authorisation or sanction of policies more clearly. They supply a convenient and authoritative reference. They systematically enhanced indirect control and organisation-wide
coordination of the key Choosing an Effective Organisational Structure Organisational structure is a firm's formal role configuration, procedures, governance and control mechanisms, and authority and decision-making
processes. The initial growth strategy of such firms was Volume Expansion, which created a need for an administrative office to manage the increased volume. The next
growth strategy was Geographic Expansion, which required multiple field units, still performing the same function but in different locations.
Vertical Integration was usually the next growth strategy. Firms remained within the same industry but performed additional functions.
The final growth strategy was Product Diversification. Firms entered other industries in which they
Four significant conclusions can be made A single-product firm or single dominant business firm should employ a functional structure A firm in several lines of business that are somehow related should employ a multidivisional structure A firm in several unrelated lines of business should be organised into strategic business units
Early achievement of a strategystructure fit can be a competitive advantage Centralisation and Decentralisation Centralisation and decentralisation relate to the degree to which the authority, power and responsibility for decision making is
devolved through the organisation Centralisation and Decentralisation The size of the organisation Geographical locations, together with the: homogeneity/heterogeneity of the products and services. Technology of the
tasks involved Inter-dependences The relative importance and stability of the external environment, and the possible need to react quickly. Generally, how fast decisions need to be made. The work load on decision
makers Issues of motivation via delegation, together with the abilities and willingness of The location of competence and expertise in the organisation.
Are the managerial strengths in the divisions or at headquarters? The significance and impact of competitive and functional decisions and changes The status of the firm's planning, control and Basic Structure for Organisational Design
The entrepreneurial structure. The functional structure. The product structure. The geographic structure. The matrix organisational structure.
Culture and Strategy
Deal and Kennedy list five reasons that can justify large-scale cultural change: The organisation has strong values that don't fit a changing environment. The industry is very competitive and moves with lightning speed. The organisation is mediocre or worse. The organisation is about to join the ranks of the very largest organisations. The organisation is small but growing rapidly.
The Management Analysis Centre (MAC) has developed and successfully used the following six-step process for changing culture: Start by having senior managers re-examine the company's history, culture, and skills, as well as the traits of the business they are in. Have the CEO announce a vision of the new strategy and the shared values needed to make it work; the CEO should then spread that vision throughout the organisation. Confront mismatches between present behaviour patterns and those required by the future strategy; this may entail designing new organisational incentives and controls to encourage different behaviour. Have executives promulgate and reinforce the new values in everything they do. Reshuffle power to elevate people who implement the new ways, including outsiders hired mainly for their values. Use levers of change, such as the budgeting process and internal public relations, to keep people moving toward the new strategy.
Strategy and Motivational Systems
Encouraging employees to work hard toward the achievement of organisational objectives is one of the
most significant challenges for any organisation. Organisational rewards include all types of rewards, both intrinsic and extrinsic, that are received as a result of employment by the organisation.
Incentive Pay Plans
Incentive pay plans attempt to tie pay to performance and are used by many organisations
to motivate employees to work toward organisational objectives. Two major problems seem to exist in the design of most management incentive pay programmes: The plans are not coupled to the industry's performance; thus, managers may receive a high reward for achieving a 15 percent growth rate while the industry is growing at a rate of 25 percent. The plans are one-dimensional; for example, if compensation is based solely on return on assets, managers may be tempted to eliminate assets. A sketch addressing both problems follows the list of advantages below.
Advantages of Incentive Pay Programmes
Incentive compensation is directly related to operating performance: if performance objectives (quantity and/or quality) are met, incentives are paid; if objectives are not achieved, incentives are withheld. Incentives foster teamwork and unit cohesiveness when payments to individuals are based on team results. Incentives focus employee efforts on specific performance targets and provide real motivation that produces important employee and organisational gains.
Incentive payouts are variable costs linked to the achievement of results, whereas base salaries are fixed costs largely unrelated to output. Incentives are a way to distribute success among those responsible for producing that success.
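To make the two design problems concrete, here is a minimal, purely illustrative Python sketch of a payout formula that scores growth relative to the industry and blends several performance dimensions. The metric names, weights and payout cap are hypothetical assumptions, not something prescribed by the sources cited here.

```python
# Illustrative sketch only: a bonus multiplier that (1) scores growth
# relative to the industry and (2) blends several performance dimensions.
# Metric names, weights and the cap are hypothetical assumptions.

WEIGHTS = {"growth": 0.5, "roa": 0.3, "quality": 0.2}  # assumed weights
CAP = 1.5  # assumed ceiling on the payout multiplier

def incentive_multiplier(firm: dict, industry: dict) -> float:
    """Return a multiplier applied to the target bonus (0.0 = no bonus)."""
    # Growing 15% while the industry grows 25% scores below par (< 1.0).
    relative_growth = (1 + firm["growth"]) / (1 + industry["growth"])
    score = (WEIGHTS["growth"] * relative_growth
             + WEIGHTS["roa"] * firm["roa"] / industry["roa"]
             + WEIGHTS["quality"] * firm["quality_index"])
    return max(0.0, min(CAP, score))

firm = {"growth": 0.15, "roa": 0.10, "quality_index": 1.0}
industry = {"growth": 0.25, "roa": 0.08}
print(f"{incentive_multiplier(firm, industry):.2f}")  # about 1.03
```

Because the growth term is relative, the manager in this example is not over-rewarded for 15 percent growth in a 25 percent industry; the strong return on assets carries the score instead.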
IMPLEMENTING STRATEGY: TACTICAL/MANAGEMENT ISSUES
Managers are responsible for developing strategies in the functional areas that will help achieve corporate objectives.
Importance of Functional Strategies
Functional strategy provides an action plan for strategy implementation at the level of the work group and individual. It puts corporate and business strategy into operation by defining the activities needed for implementation.
Depending on the specific strategy to be implemented, functional strategy may need to be formulated by a variety of work groups within the organisation. The most significant challenge lies in coordinating the activities of the various work groups that must work together to implement the strategy. The strategies must be consistent both within each functional area of the business (such as the marketing department) and between functional areas (such as the marketing department and the production department).
Examples of functional strategies needed to implement a new product development strategy:
Marketing: Coordinate with R&D for formula development. Conduct market research with consumers. Develop a pricing strategy. Design promotional materials. Identify and negotiate with potential distributors. Coordinate with Production as to product specifications. Coordinate with Human Resources.
Production: Identify suppliers of input materials. Negotiate purchasing agreements. Arrange for storage facilities for both raw materials and finished goods. Design and/or purchase new production equipment.
Human Resources: Work with Production to assess human resource needs. Work with Marketing to assess human resource needs. Identify potential candidates for new positions. Develop compensation and benefits packages for new employees. Design and provide training.
A number of strategic and tactical issues are likely to arise in the functional areas: Marketing; Finance; Operations/Production;
Human Resources/Personnel; and Research and Development.
Marketing Strategies
Marketing-mix decisions cover product decisions, integrated marketing communication decisions, distribution decisions, and price decisions.
[Product life cycle figure: sales and profits plotted against time across the introduction, growth, maturity and decline stages.]
Pricing
Pricing is a complex issue because it is related to cost-volume-profit trade-offs and because it is frequently used as a competitive weapon. Pricing-policy changes are likely to provoke responses from competitors. The benefits of well-conceived pricing include increasing sales to current customers, attracting new customers, maximizing short-run cash flow, and maintaining an established position. The particular benefit or benefits sought should be considered so that the most appropriate approach is selected. Pricing must be considered in relation to costs, to consistency, and to potential inflation (Byars et al., 1996).
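As a worked illustration of the cost-volume-profit trade-off, the break-even volume is the fixed cost divided by the unit contribution margin; the figures below are invented for the example.

```latex
% Break-even volume: F = fixed costs, P = unit price, V = unit variable cost.
% Illustrative figures: F = 100000, P = 25, V = 15.
Q_{\text{break-even}} = \frac{F}{P - V} = \frac{100\,000}{25 - 15} = 10\,000 \text{ units}
% Cutting price to P = 22 raises the requirement:
Q_{\text{break-even}} = \frac{100\,000}{22 - 15} \approx 14\,286 \text{ units}
```

This is why a price cut used as a competitive weapon must be weighed against the extra volume needed merely to stand still.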
Distribution
The distribution system brings the product or service to the place where it can best fill customers' needs. Access to distribution can mean the difference between success and failure for a new product. Because many products require support from distribution channels in the form of prompt service, rapid order processing, or parts inventory, the choice of distribution channels is a critical decision.
Promotion
The functional strategy for the promotion component defines how the firm will communicate with the target markets. Promotion is more than advertising: it refers to the methods which are used to
put products and services before customers.
Customer Care
It is now recognised that meeting customer needs is the foundation of any successful organisation, and that the customer comes first, second and third. Today, both profit and non-profit organisations have embraced this principle. It can be argued that customer service is now the only factor which distinguishes one organisation from another in the same business. At the same time, customers have changed: they have become more demanding and they have more choice. It is these changes which have made imperative a change in the role of managers. Managers should work to meet customer needs better: at lower cost, at maximum
customer satisfaction, and with competitive advantage.
Levinson (1997) provides 17 ways by which managers can show their customers that they care: Put your customer service principles in writing. Establish support systems that give clear instructions for gaining and maintaining service superiority. Develop a measurement of superb customer service, and reward employees who practice it consistently. Be certain that your passion for customer service runs rampant throughout the organisation. Be genuinely committed to providing more customer service excellence than anyone else in your industry. Ask your customers questions, then listen carefully to their answers. Stay in touch with your customers. Nurture a human bond, as well as a business bond, with customers and prospects. Recognise that your customers have needs and expectations, and meet them. Keep alert for trends, then respond to them. Observe your customers' birthdays and anniversaries. Send postage-paid questionnaire cards and letters asking for suggestions. Invest in phone equipment that makes your business sound friendly, pleasant to do business with, easy to contact and quick to respond. Design your company's physical layout for efficiency, clarity of signage, lighting, accessibility for the disabled and simplicity; everything should be easy to find. Act on the knowledge that what customers
value most are attention, dependability, promptness and competence.
Operations Issues
The operations (or production) function has responsibility for the procurement and transformation of raw materials into products or services. This involves securing raw materials, making decisions to make or buy parts and components, maintaining adequate inventory,
designing and scheduling production, ensuring quality control, and making capacity adjustments. Decisions in the operations area determine a large proportion of the organization's costs and are reflected in measures of efficiency and productivity.
Strategic Choices: product and service plans; positioning strategy; competitive priorities; quality management and control.
Design Decisions: process design; technology management; job design and work measurement; capacity; location; layout.
Operating Decisions: materials management; aggregate planning; master production scheduling; inventory; scheduling.
Operations as a Competitive Weapon: The Role of Top and Functional-Level Managers in TQM
Managers have critical roles to play in the TQM process. Normally, top managers initiate a TQM programme, and a continuing emphasis on TQM requires their long-term commitment and willingness to make TQM an organisation-wide priority.
According to Jones et al. (2000), it is functional-level managers who: identify defects, trace them back to their source, and fix quality problems; design products that are easy to assemble; identify customer needs, translate those needs into quality requirements, and see that these quality requirements shape the production system of the organisation; work to break down the barriers between functional departments; and solicit suggestions from lower-level employees about how to improve the production process.
Pitfalls of TQM
False starts. Disconnection from customer issues. 'Do' versus 'develop'. 'We're doing okay.' The quick-fix syndrome. Mandate and move on.
No space on the agenda. 'Look who's running the show.'
Outsourcing
According to Ellis (2004), outsourcing refers to the transfer of in-house jobs to outside firms to reduce costs, take advantage of others' expertise and focus on what the contracting company does best.
In addition to operations, functions that are frequently outsourced include human resource management, information systems, accounting, legal work and after-sales service on appliances and equipment.
Reasons for New Trends in Outsourcing
According to Jones et al. (2000), there are at least two reasons why human resource planning sometimes leads managers to outsource: flexibility and cost.
Financial Issues
The finance function provides the financial resources necessary to implement the strategy. Schall and Haley (1991) assert that finance is concerned with the lifeblood of a company, money: how it is obtained to finance the business and how it should be used to assure the business's success.
Schall and Haley (1991) have identified the major finance-related functions in a firm as follows: financing and investments; accounting and control; forecasting and long-term planning; pricing; and other functions. Finance-related questions facing organizations with international operations include: Are sources of local funding available and properly developed for non-domestic operations? How will the strategy be affected by currency depreciation and/or inflation? How can the overall tax burden be minimised? How should the transfer of profits from foreign subsidiaries to headquarters be handled?
Strategies for Facing a Cash Crisis
Some of the most important signs of deteriorating liquidity are: an unexpected build-up in inventory (an increase in the inventory conversion period); an increase in the firm's level of outstanding accounts receivable (an increase in the average collection period); a decline in the firm's daily or weekly cash inflows; increased costs the firm is unable to pass on to its customers; and a decline in the firm's net working capital, or an increase in its debt ratio.
Managers can take some of the following steps to deal with a cash crisis (liquidity problem): control and reduce investment in inventory; re-examine and tighten up on credit, and reduce the firm's level of accounts receivable; increase short-term or long-term debt, or issue equity; control overhead and increase awareness of the need for effective asset management; lay off employees; and reduce planned long-term (capital) expenditures.
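The warning signals above rest on a handful of standard ratios. This minimal Python sketch computes them under textbook definitions; the figures and variable names are illustrative assumptions only.

```python
# Minimal sketch of the liquidity warning signals listed above, using
# standard textbook definitions; all figures are illustrative.

def inventory_conversion_period(inventory, cogs_per_day):
    return inventory / cogs_per_day            # days to turn inventory into sales

def average_collection_period(receivables, credit_sales_per_day):
    return receivables / credit_sales_per_day  # days to collect receivables

def net_working_capital(current_assets, current_liabilities):
    return current_assets - current_liabilities

def debt_ratio(total_debt, total_assets):
    return total_debt / total_assets

# Tracked period over period, rising conversion/collection periods, shrinking
# net working capital, or a climbing debt ratio flag a developing cash crisis.
print(inventory_conversion_period(450_000, 5_000))    # 90.0 days
print(average_collection_period(240_000, 4_000))      # 60.0 days
print(net_working_capital(900_000, 700_000))          # 200000
print(debt_ratio(1_200_000, 2_000_000))               # 0.6
```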
Qualitative Factors for Decision Making
Management decisions in finance should not be based on quantitative data alone. In employing the decision-making tools discussed, management should also bear in mind qualitative factors. Qualitative factors in decision making will vary with the
circumstances and nature of the opportunity being considered. Examples include: the availability of cash; inflation; employees; customers; competitors; timing factors; suppliers; flexibility and internal control; unquantified opportunity costs; political pressures; and legal constraints.
Human Resource Strategies
From a strategic perspective to human resource management, an organisation should:
Use its strategies (corporate-level, business unit, and functional) to identify what human resources are needed and how they should be allocated. Develop and implement human resource practices that select, reward, and develop employees who best contribute to the accomplishment of organisational objectives. Use its resources to compete for or retain employees who are needed to reach its objectives. Develop mechanisms
that match employees' competencies to the organisation's present and future needs. There are four key human resource functions involved in getting the right people into the right jobs at the right time: recruiting and hiring; training and
development; coaching and evaluation; and career planning.
Recruitment
Recruitment is the process of finding and attracting job candidates who are qualified for current and future needs. The two forms of recruitment are internal and external.
Internal Recruitment
The main benefits to the employer of internal appointments are: An organisation with a reputation for internal advancement will find it easier to motivate staff, whereas in organisations where internal advancement is rare, staff will be less committed to the work and may be preoccupied with external job applications. The organisation will attract better candidates if they see there is a future career in it. Many candidates will be local people who have bought homes and have children settled in local schools, and who would therefore prefer not to move. Internal candidates know the business and what will be expected of them, and they can become effective in the new job very quickly. Although there is bound to be bitterness from other internal applicants who do not get the job, they will at least feel that there will be other career opportunities in the organisation and that their 'turn' will come. The organisation will not need to rely upon external references when choosing from internal applicants: accurate information will be available from the candidates' own managers.
The disadvantages of appointing internally are equally valid: The successful candidates may suffer role conflict, in that they are now senior to people with whom they previously worked as equals, and there may be a problem for them in asserting their authority. A person promoted internally may be expected to pick up the new job in an unreasonably short space of time. Filling a vacancy internally leaves another vacancy to fill, and so on. If the promotion policy is based upon seniority (often called 'filling dead men's shoes'), young, keen staff will leave, whereas a policy of promoting keen young people will demoralise and demotivate older staff who are passed over.
External Recruitment
The benefits are: a much wider range of people from which to choose; newcomers to the organisation will bring in new ideas; newcomers are likely to be more mobile than existing staff, which in a multi-site business can be very useful to the organisation; and newcomers may bring skills and management techniques from their former employers which your organisation might also adopt.
There are also disadvantages: It is more expensive than internal recruitment, and often much more so. It takes time for a newcomer to get used to his or her new employer, and therefore the newcomer will not be performing effectively for the initial period. People who move between jobs have a better idea of their market value than people who stay with the same organisation for a long time, and they make the best use they can of this by threatening to leave unless they get higher rewards.
Problems of Successful Implementation
Owen (1982) contends that, in practice, there are five problem areas associated with the successful implementation of
strategies. At any time, strategy and structure need to be matched and supportive of each other. It is also possible that related products may be produced in various plants nationally or internationally, and a geography-oriented structure may then keep related activities apart. The information and communications systems may be inadequate for reporting back and evaluating the adaptive changes which are taking place, so that the strategic leader is not fully aware of what is happening. Implementing structure involves change, which in turn involves uncertainty and risk. Management systems, such as compensation schemes, management development, communications systems, and so on, which operate within the structure, may not support the strategy adequately.
For Successful Implementation
Clear responsibility for the successful outcome of planned strategic change should be allocated. The number of strategies and changes being pursued at any time should be limited. The ability of the necessary resources to cope with the changes should be seen as a key determinant of strategy and should not be overlooked. Necessary actions to implement strategies should be identified and planned, and again responsibility should be allocated. Milestones, or progress measurement points, should be established. Measures of performance should be established, and appropriate monitoring put in place.
STRATEGY REVIEW, EVALUATION AND CONTROL
Strategic Controls
Newman and Logan (1976)
use the term 'steering control' to highlight some important characteristics of strategic control. Ordinarily, there is a significant time span between the initial implementation of a strategy and the measurement of its results.
Financial Controls
Financial data are some of the most commonly used warning signals. Commonly used financial measures include profit, sales, return on investment, return on equity, cost figures, and trends in these and other related measures (the two most common ratios are written out below).
Non-Financial Controls
In addition to financial data, there are many other early warning signals that can be used to alert the strategist to potential problems. Some of the most frequently used non-financial signals are measures of productivity, measures of quality, and personnel-related measures. Most organizations use a combination of financial and non-financial early warning signals. Early warning signals and other similar strategic controls are used to detect something specific which has gone wrong, or which is about to go wrong, with operations.
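Returning to the financial measures above, the two most cited ratios can be written out explicitly; the definitions are standard, and the sample figures are invented for illustration.

```latex
% Common financial control ratios (standard definitions):
\text{ROI} = \frac{\text{Net profit}}{\text{Total investment}}, \qquad
\text{ROE} = \frac{\text{Net profit}}{\text{Shareholders' equity}}
% Illustrative figures: net profit of 150 on an investment of 1500:
\text{ROI} = 150 / 1500 = 10\%
```

A control system would track such ratios against both budgeted targets and their own trend over time.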
Evaluating Corporate Strategy
There are a number of criteria developed for assessing the effectiveness of corporate strategy. Here we will examine Rumelt's criteria and McKinsey's 7-S framework.
Rumelt's Criteria for Evaluating Corporate Strategy
Rumelt (1980) argues that corporate strategy evaluation at the widest level involves seeking answers to three questions: Are the current objectives of the organization appropriate? Are the strategies created previously, and which are currently being implemented to achieve these objectives, still appropriate? Do current results confirm or refute previous assumptions about the feasibility of achieving the objectives and the ability of the chosen strategies to achieve them? Rumelt proposed four criteria that could be used to evaluate a given strategy: consistency, consonance, advantage, and feasibility.
McKinsey's 7-S Framework
A good strategy is not synonymous with
a doable one. Nor is a doable strategy synonymous with a good one. The challenge is to find a good, doable strategy. The seven S's are: structure, strategy, shared values, systems, skills, style and staff. Let us examine the meaning of each of the 7-S variables.
Strategy. A coherent set of actions aimed at gaining a sustainable advantage over competition, improving position vis-à-vis customers, or allocating resources. Structure. The organization chart and accompanying baggage that show who reports to whom and how tasks are both divided up and integrated. Systems. The processes and flows that show how an organization gets things done from day to day (information systems, capital budgeting systems, manufacturing processes, quality control systems, and performance measurement systems would all be good examples). Style. Tangible evidence of what management considers important by the way it collectively spends time and attention and uses symbolic behaviour. It is not what management says is important; it is the way management behaves. Staff. The people in an organization. Here it is very useful to think not about individual personalities but about corporate demographics. Shared values (or superordinate goals). The values that go beyond, but might well include, simple goal statements in determining corporate destiny. To fit the concept, these values must be shared by most people in an organization. Skills. A derivative of the rest: the distinctive capabilities possessed by the organization as a whole.
Control through Information and Measurement
The following general guidelines have been suggested for designing the
control system: levels of control; creation of responsibility centres; identification of key factors; diversity in control; avoidance of misleading measurements; and guarding against negative monitoring effects.
Errors Relating to the Use of
Strategic Management
Inability to Think Strategically. Seven assumptions that kill strategic thinking: My top team is a closely knit group. Our team controls our organization. If it's long term, it's strategic. Established corporate strategy means clear divisional or overall strategy. Stable organisations do not need strategy. Our long-range plans tell us where we're going. Our top team is bright and experienced; therefore, they've got what it takes to set strategy.
Other errors include: undue emphasis on form of procedure; isolation from the environment; too much emphasis on the near term;
and improper use of planning.
Unpredictable Changes
Most problems which fall into this category stem from unpredictable changes in the external environment. The solution is to develop early warning signals and then to respond quickly. Some of the most frequently encountered problem sources, several of which are interrelated, are: innovations; new products or services; government regulations; weather; shortages in raw materials;
changes in consumer preference; and new competitors or changes in a competitor's abilities.
Characteristics of Standards
The standards to be used in control usually have several characteristics: Relevance. This means that the standard
is logically connected to the activity or outcome being evaluated. It helps the manager to answer the question, "Why are we doing this?" Stability. This means that, when used by different people or at different points in time, the standard yields consistent results. Clarity. Managers and employees understand, beforehand, what will be measured and how. This requires executives to communicate their intended use of the standard, when and how it will be used, and by whom. Clarity is a prerequisite of an effective measurement criterion. Fairness. The standard should invoke a sense of confidence and fairness. Conversely, irrelevant standards may cause people to question the system and even create the impression that they are unfair; standards that are easily seen as unfair will quickly be dismissed.
Senior Executives and Strategic Controls
Senior executives should: clarify and communicate the goals of the control system; establish the informational flow between different units of the control system; highlight the beneficial, not the
punitive, uses of the system; and clarify the responsibilities associated with different aspects of the system.
STRATEGIC CHANGE MANAGEMENT
What is Change Management?
Change management is
the process of aligning the organisation's people and culture with changes in business strategy, organisational structure, systems, and processes. Properly executed, change management results in: ownership of and commitment to the planned change; sustained and measurable improvement; and improved capability to manage future change. Major organisational change may include: reducing the purchasing costs for a car manufacturer through working more closely, and in partnership, with several hundred suppliers; reducing the development and launch time for a major new product by about two years; and restructuring an organisation with several thousand employees from being based on functional departments to being organised around serving market sectors.
| https://www.smackslide.com/slide/slide-1-vk1wjq |
Porter's 5 Forces
Porter's five forces model is a methodology for analysing the opportunities and threats in a given industry; that is, it analyses the structure of that industry.
The five forces analysis investigates whether it is profitable to create a company in the industry or sector under analysis. Each of the forces is a factor that influences the ability to obtain profits. As analysed below, Porter's five forces are the intensity of current competition, potential competitors, substitute products, the bargaining power of suppliers and the bargaining power of customers.
The main objective of industry analysis is to seek opportunities and identify threats, both for companies already located in that industry and for new entrants, since these will determine their capacity to obtain profits.
According to this model, the degree of attractiveness of an industry is determined by the action of these five basic competitive forces which, together, define the possibility of obtaining higher returns.
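As a minimal sketch of how the model can be operationalised, one might rate each force and aggregate the ratings as below; note that the 1-to-5 scale, the equal weighting and the inversion rule are assumptions made for illustration, since Porter's framework itself prescribes no formula.

```python
# Illustrative sketch: turning five force ratings into one attractiveness
# score. The 1-5 scale, equal weights and the inversion rule are assumptions;
# Porter's framework prescribes no such formula.

FORCES = [
    "rivalry_among_current_competitors",
    "threat_of_potential_competitors",
    "threat_of_substitute_products",
    "bargaining_power_of_suppliers",
    "bargaining_power_of_customers",
]

def industry_attractiveness(ratings: dict) -> float:
    """Ratings give each force's strength from 1 (weak) to 5 (strong).
    Strong forces erode profits, so attractiveness is the inverted mean."""
    if set(ratings) != set(FORCES):
        raise ValueError("rate each of the five forces exactly once")
    mean_strength = sum(ratings.values()) / len(FORCES)
    return (5 - mean_strength) / 4  # 1.0 = very attractive, 0.0 = not at all

example = {
    "rivalry_among_current_competitors": 4,  # fragmented, slow-growing industry
    "threat_of_potential_competitors": 2,    # high entry barriers
    "threat_of_substitute_products": 3,
    "bargaining_power_of_suppliers": 2,
    "bargaining_power_of_customers": 4,      # concentrated buyers
}
print(industry_attractiveness(example))  # 0.5
```

In practice, each rating would itself be derived from the structural factors discussed in the sections that follow.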
1. The intensity of current competition
It refers to the behaviour of the competitors already operating in the industry, and it is crucial to know whether this rivalry is high or low. To do so, each of the following aspects of the industry must be studied:
- The number of competitors and balance among them: concentrated industries (few companies and a large market share) have a lower level of competition, compared to fragmented industries (many companies with a homogeneous market share), with a higher level of competition.
- The rate of growth of the industry: an industry passes through four phases over its life (emerging, growing, mature and declining); as growth slows, firms must fight harder for market share, and the intensity of competition increases.
- Mobility barriers: those obstacles that prevent companies from moving from one segment to another within the same industry.
- Exit barriers: factors that prevent firms from leaving an industry.
- Product differentiation: the greater the level of product differentiation in an industry (a marketing strategy based on creating a perception of the product in the consumer's mind that clearly distinguishes it from the competition), the lower the intensity of competition.
- The diversity of competitors: when competitors have different strategies, the level of competition is intensified, as it is more difficult to predict their behaviour.
2. Potential competitors
It refers to companies that want to enter and compete in the industry. The more attractive an industry is, the more potential competitors there will be. The possibility that new companies enter to compete in an industry depends on the following factors:
- Barriers to entry: We can define them as those factors that hinder the entry of new companies in the industry.
For example, economies of scale and scope constitute a barrier to entry, because the reduction of unit costs as business volume increases slows the entry of new competitors. Another example is a cost advantage independent of scale, such as proprietary product technology that allows an incumbent to produce at lower cost.
- Product differentiation: in this case, established companies may hold patents or an existing portfolio of clients, forcing new competitors to make large investments to win customers over.
- Other reasons: for example, financing needs, switching costs or difficult access to distribution channels.
3. Substitute products
They are defined as those products that meet the same needs of customers as the product offered by the industry. As more substitute products appear in the industry, the degree of attractiveness of the industry begins to decrease.
The threat of the appearance of these products depends on the degree to which they meet the needs of consumers, their price or the costs of switching to these alternative products.
4 and 5. Bargaining power of suppliers and customers
Porter's fourth force is the bargaining power of suppliers and the fifth is the bargaining power of customers; because the analysis of the two forces is very similar, they are often examined jointly.
Bargaining power is the ability to impose conditions on the transactions made with the companies in the industry. Therefore, the greater the bargaining power of suppliers and customers, the lower the attractiveness of the industry.
According to Porter, the most important factors that affect bargaining power are the following:
- The degree of concentration in relation to the industry.
- The volume of transactions made with the company.
- The degree of importance of purchases made in relation to customer costs.
- The degree of differentiation of products or services.
- Switching costs between providers.
- The client's level of profit in relation to the provider.
- The credible threat of forward or backward vertical integration.
- The importance of the product or service sold.
- Whether or not the product is storable.
- The level of information that one of the parties has in relation to the other.
| https://www.takecareofmoney.com/5-porter-forces/ |
Sandeep Garg Class 12 Microeconomics Solutions Chapter 10, Main Market Forms, is explained by expert Economics teachers from the latest edition of the Sandeep Garg Microeconomics Class 12 textbook. We at coolgyan'S provide Sandeep Garg Economics Class 12 Solutions to give students a comprehensive insight into the subject.
These solutions are an invaluable aid to students completing their homework or studying for their exams. There are numerous concepts in Economics; here we provide students with the solutions to Main Market Forms, which will help them score well in the board examinations.
Sandeep Garg Solutions Class 12 – Chapter 10 – Part A – Microeconomics
Question 1
What is Market?
Ans: Market refers to the whole region where buyers and sellers of a commodity are in contact with each other to effect the purchase and sale of the commodity.
Question 2
What are the forms of market structure?
Ans: The main forms of market structure are perfect competition, monopoly, monopolistic competition, and oligopoly.
Question 3
What are the features of Perfect Competition?
Ans: Features of Perfect Competition are:
- A large number of buyers and sellers
- Homogeneous product
- Freedom of exit and entry
- Absence of selling costs
- Absence of transportation costs
Question 4
Define a Monopoly.
Ans: Monopoly refers to a market situation where there is a single seller selling a product which has no close substitutes.
Question 5
What is Monopolistic Competition?
Ans: Monopolistic Competition refers to a market situation in which there are a large number of firms which sell closely related but differentiated products.
Question 6
Define Oligopoly.
Ans: Oligopoly refers to a market situation in which there are a few firms selling homogeneous or differentiated products.
Question 7
What are the features of Oligopoly?
Ans: The primary features of Oligopoly are: a few dominant firms; interdependence in decision making; high barriers to entry; non-price competition; and products that may be homogeneous or differentiated.
| https://coolgyan.org/commerce/sandeep-garg-microeconomics-class-12-solutions-chapter-10-main-market-forms/ |
Amphibians play essential roles, both as predators and prey, in their ecosystems. Adult amphibians eat pest insects, including those pests that damage crops or spread disease. … Consequently, amphibians influence the populations of other species in their ecosystems.
What are three reasons amphibians are important?
1. Amphibians play an important role in nature – both as predators and prey. 2. They eat pest insects, which benefits agriculture around the world and helps minimise the spread of disease, including malaria.
What is the economic importance of amphibians?
Economic importance
Amphibians, especially anurans, are economically useful in reducing the number of insects that destroy crops or transmit diseases. Frogs are exploited as food, both for local consumption and commercially for export, with thousands of tons of frog legs harvested annually.
Why are amphibians important in determining the quality of a terrestrial ecosystem?
For example, amphibians play an important role in various terrestrial and aquatic ecosystems, both as predators and as prey (TOLEDO et al. … A decline of particular amphibian species may thus result in an overabundance of prey species, i.e. various pest arthropods, and/or leave predators with a limited food supply. …
How do frogs contribute to the ecosystem?
Frogs play a central role in many ecosystems. They control the insect population, and they’re a food source for many larger animals. … Frogs can also secrete substances through their skin. Some secretions are beneficial — researchers have used some of them to create new antibiotics and painkillers.
What would happen if amphibians went extinct?
Amphibians are a keystone of many ecosystems, and when they disappear, the environment changes dramatically. In many ecosystems, the population of amphibians outweighs all the other animals combined. “In Central America, some of these amphibians would eat algae off rocks [in streams],” Nanjappa explains.
Why are mammals important to the ecosystem?
Mammals undoubtedly play an important role in ecosystems by providing essential services such as seed dispersal, pollination and regulating insect populations, and reducing disease transmission [20–22] and there is some evidence that some groups act as indicators of general ecosystem health .
How are ecosystems being affected by the loss of amphibians?
Because amphibians are important predators and prey in many ecosystems , declines in their populations may affect many other species that live within the same ecological community. … Moreover, the populations of animals that amphibians eat, such as mosquitoes, may increase as amphibians disappear.
Why are amphibians considered to be of evolutionary significance?
Evolution of Amphibians
As the earliest land vertebrates, they were highly successful. … For more than 100 million years, amphibians remained the dominant land vertebrates. Then some of them evolved into reptiles. Once reptiles appeared, with their amniotic eggs, they replaced amphibians as the dominant land vertebrates.
How are reptiles and amphibians an important component of the natural environment?
Amphibians and reptiles are both important members of aquatic and terrestrial ecosystems. Both groups serve as both predators and prey, and species that inhabit both ecosystems serve to transfer energy between the two systems. … Amphibians are viewed as indicators of wetland ecosystem health.
Why are amphibians sensitive to changes in the environment?
Climate change can also have a major effect on amphibians due to their delicate transdermal uptake system, which makes them sensitive to small changes in temperature and moisture. … When temperatures are optimal, amphibians migrate to small, fresh bodies of water to breed.
Why is it important for researchers to study animals like amphibians?
Amphibians have long been utilized in scientific research and in education. … Scientists have used amphibian embryos to evaluate the effects of toxins, mutagens, and teratogens. Likewise, the animals are invaluable in research due to the ability of some species to regenerate limbs. | https://aslebiennialconference.com/biodiversity/how-do-amphibians-contribute-to-the-ecosystem.html |
Gilles Lepoint is a zoologist and marine ecologist who uses stable isotopes of C, N and S to assess the trophic ecology of animals and to delineate marine and freshwater food webs. He has extensively studied seagrass ecosystems and macrophytodetritus accumulations. More recently, he has been involved in Southern Ocean research (biodiversity, trophic ecology of benthic organisms) and in Madagascar ecosystems (coral reefs, black corals and seagrass systems).
Additional affiliations
October 2021 - present
Position
- Research Associate
Description
- Head of the newly launched laboratory of trophic and isotopes ecology
September 2010 - August 2015
Position
- Research Associate FRS FNRS
Description
- Introduction to marine biology and fisheries sciences (20h), complementary master in environmental sciences for developing countries (ULg)
May 2010 - present
Position
- Research Associate FRS FNRS
Description
- Tropical seagrass diversity and functioning (one week), given in the framework of the PFS course "Biodiversity, Ecotourism and Biomanagement of Reef Ecosystems", financed by the ARES-CUD (Wallonia-Bruxelles Community, Belgium).
Publications (262)
The damselfishes, with more than 340 species, constitute one of the most important families that live in the coral reef environment. Most of our knowledge of reef-fish ecology is based on this family, but their trophic ecology is poorly understood. The aim of the present study was to determine the trophic niches of 13 sympatric species of damselfis...
1. Stable isotope analysis, coupled with dietary data from the literature, was used to investigate trophic patterns of freshwater fauna in a tropical stream food web (Guadeloupe, French West Indies). 2. Primary producers (biofilm, algae and plant detritus of terrestrial origin) showed distinct δ13C signatures, which allowed for a powerful discrimin...
Dead leaves of the Neptune grass, Posidonia oceanica (L.) Delile, in the Mediterranean coastal zone, are colonized by an abundant " detritivorous " invertebrate community that is heavily predated by fishes. This community was sampled in August 2011, November 2011, and March 2012 at two different sites in the Calvi Bay (Corsica). Ingested artificial...
In recent years, sea ice cover along coasts of East Antarctica has tended to increase. To understand ecological implications of these environmental changes, we studied benthic food web structure on the coasts of Adélie Land during an event of unusually high sea ice cover (i.e. two successive austral summers without seasonal breakup). We used integr...
Despite the role that goatfishes play in reef ecosystems, knowledge of their ecomorphological diversity remains scarce. Here, we explore the ecomorphology of six species of goatfishes living in sympatry at Toliara Reef (South-West of Madagascar) by using a combination of morphometric and isotopic (δ¹³C, δ¹⁵N and δ³⁴S) data. The shape of cephalic re...
Seagrass systems export significant amounts of their primary production as large detritus (i.e. macrophytodetritus). Accumulations of exported macrophytodetritus (AEM) are found in many areas in coastal environment. Dead seagrass leaves are often a dominant component of these accumulations, offering shelter and/or food to numerous organisms. AEM a...
Termites feed on vegetal matter at various stages of decomposition. Lineages of wood- and soil-feeding termites are distributed across terrestrial ecosystems located between 45°N and 45°S of latitude, a distribution they acquired through many transoceanic dispersal events. While wood-feeding termites often live in the wood on which they feed and ar...
To date, only one mitogenome from an Antarctic amphipod has been published. Here, novel complete mitochondrial genomes (mitogenomes) of two morphospecies are assembled, namely, Charcotia amundseni and Eusirus giganteus. For the latter species, we have assembled two mitogenomes from different genetic clades of this species. The lengths of Eusirus an...
The iterative nature of ecomorphological diversification is observed in various groups of animals. However, studies explicitly testing the consistency of morphological variation across and within species are scarce. Antarctic notothenioids represent a textbook example of adaptive radiation in marine fishes. Within Nototheniidae, the endemic Antarct...
Sea stars (Echinodermata: Asteroidea) are a key component of Southern Ocean benthos, with 16% of the known sea star species living there. In temperate marine environments, sea stars commonly play an important role in food webs, acting as keystone species. However, trophic ecology and functional role of Southern Ocean sea stars are still poorly know...
Coastal hypoxia is a worldwide concern. Even though seasonal hypoxia has been reported on the northwestern Black Sea shelf since the 1970s, little is known about oxygenation in this area over the Holocene. With a multiproxy approach, this work aimed to detect potential hypoxic events in two gravity cores. Our results demonstrate that the most commo...
Deep-sea elasmobranchs are commonly reported as bycatch of deep-sea fisheries and their subsequent loss has been highlighted as a long-running concern to the ecosystem ecological functioning. To understand the possible consequences of their removal, information on basic ecological traits, such as diet and foraging strategies, is needed. Such aspect...
In this study, we evaluated the suitability of body feathers, preen oil and plasma for estimation of organohalogen compound (OHC) exposure in northern goshawk Accipiter gentilis nestlings (n = 37; 14 nests). In addition, body feathers received further examination concerning their potential to provide an integrated assessment of (1) OHC exposure, (2...
Migratory bird species may serve as vectors of contaminants to Antarctica through the local deposition of guano, egg abandonment, or mortality. To further investigate this chemical input pathway, we examined the contaminant burdens and profiles of the migratory South polar skua (Catharacta maccormicki) and compared them to the endemic Adélie pengui...
In order to test the feasibility of transplantation of the whip black coral species Cirrhipathes anguina (Dana, 1846) from Madagascar, transplants were installed on cultivation tables in two sites (the North Pass and the Grande Vasque) characterized by distinct environmental conditions. Following transplantation, the transplants were followed for s...
Antarctic sea stars can occupy different trophic niches and display different trophic levels, but, while the impacts of their body size and environmental features on their trophic niches are potentially important, they are presently understudied. Here we assessed the trophic ecology in relation to the size and habitat of sea stars in a fjord on Kin...
Native fauna of the tropical volcanic part of Guadeloupe is amphidromous: juveniles born in rivers but that grow in the sea need to migrate upstream to colonise their adult habitat in rivers. This migration is affected by any human-made obstacles placed in their way. Moreover, on volcanic tropical islands, streams are the main source of water catch...
Salinity resistance of the African rice species (Oryza glaberrima) is poorly documented and the specific responses of the plant to Na⁺ and Cl⁻ toxic ions remain unknown. Cultivars TOG5307 and TOG5949 were maintained for 15 days on iso-osmotic nutrient solutions containing 50 mM NaCl, or a combination of Cl⁻ salts (Cl⁻-dominant) or Na⁺ salts (Na⁺-do...
The West Antarctic Peninsula (WAP) is one of the most rapidly changing regions in the world, in great part due to anthropogenic climate change. Steep environmental gradients in water temperature, sea ice cover and glacier melting influence are observed, but much is left to document about significance of those shifts for biological communities and e...
Mercury (Hg) concentrations have significantly increased in oceans during the last century. This element accumulates in marine fauna and can reach toxic levels. Seafood consumption is the main pathway of methyl-mercury (MeHg) toxicity in humans. Here, we analyzed total Hg (T-Hg) concentrations in two oceanic squid species (Ommastrephes bartramii an...
Contaminant levels are lower in Antarctica than elsewhere in the world because of its low anthropogenic activities. However, the northern region of the Antarctic Peninsula, is close to South America and experiences the greatest anthropogenic pressure in Antarctica. Here, we investigated, in two Antarctic Peninsula islands, intra and interspecific f...
Paleolimnological reconstructions from the mid and high latitudes in the Southern Hemisphere are still relatively scarce. Anthropogenic impacts have evidenced trophic state changes and an increase in cyanobacterial blooms in the lacustrine system of San Pedro de la Paz in the last decades. Here, we reconstructed primary production and sedimentologi...
Accumulation of exported macrophytodetritus (AEM) represent unique habitats formed by the dead material originating from macrophyte ecosystems (e.g., seagrass, kelp, other seaweeds). AEM can be found everywhere, from the littoral zone to the deepest canyons, and from high to low latitudes. Seagrass AEMs are among the most common detrital accumulati...
1. Paedomorphosis, a developmental heterochrony involving the retention of larval traits at the adult stage, is considered a major evolutionary process because it can generate phenotypic variation without requiring genetic modifications. Two main processes underlie paedomorphosis: neoteny, a slowdown of somatic development , and progenesis, a preco...
DeepIso is a collaborative effort to produce a global compilation of stable isotope ratios and elemental contents in organisms from deep-sea ecosystems. In doing so, it aims to provide the deep-sea community with an open data analysis tool that can be used in the context of future ecological research, and to help deep-sea researchers to use stable...
In the North Sea, sympatric grey and harbour seals may compete for food resources impacted by intense fishing activities and a recent increase of seal populations. In order to reduce inter-specific competition, sympatric species must segregate at least one aspect of their ecological niches: temporal, spatial or resource segregation. Using isotopes...
The biomagnification of per- and polyfluoroalkyl substances (PFASs) was investigated in a tropical mangrove food web from an estuary in Bahia, Brazil. Samples of 44 organisms (21 taxa), along with biofilm, leaves, sediment and suspended particulate matter were analyzed. Sum (∑) PFAS concentrations in biota samples were dominated by perfluorooctane...
In Morocco, Zostera marina Linnaeus has disappeared from many localities where it was historically reported. The only known remaining meadows along Mediterranean coasts of Morocco, though in North Africa, are those of Belyounech bay and Oued El Mersa bay, in the marine area of 'Jbel Moussa'. An in-depth knowledge of these meadows is required for th...
A previous investigation of our research team has demonstrated the suitability of using hepatic total tin (ΣSn) concentrations for evaluating dolphin exposure to organotins (OTs). The present study develops the previous technique into three different approaches that comprise data: (1) on hepatic ΣSn concentrations of 121 Guiana dolphins (Sotalia gu...
A large part of the production of Laminaria hyperborea kelp forests is not directly consumed by grazers, but is exported during storm events or natural annual blade erosion. Drifting kelp fragments are transported and can accumulate temporarily over subtidal benthic habitats. The decay process is particularly slow (>6 mo for complete decay during s...
In Morocco, Zostera marina Linnaeus has disappeared from many localities where it was historically reported. The only known remaining meadows along Mediterranean coasts of Morocco, though in North Africa, are those of Belyounech bay and Oued El Mersa bay, in the marine area of ‘Jbel Moussa’. An in-depth knowledge of these meadows is required for th...
Understanding the spatiotemporal patterns of legacy organochlorines (OCs) is often difficult because monitoring practices differ among studies, fragmented study periods, and unaccounted confounding by ecological variables. We therefore reconstructed long-term (1939–2015) and large-scale (West Greenland, Norway, and central Sweden) trends of major l...
Damselfishes of the genus Stegastes are among the most conspicuous benthic reef associated fish in the Gulf of California, and the two most commonly found species are the Beaubrummel Gregory Stegastes flavilatus and the Cortez damselfish Stegastes rectifraenum. Both species are described as ecologically and morphologically very similar. However, th...
Posidonia oceanica is the only reported seagrass to produce significant amount of dimethylsulfoniopropionate (DMSP). It is also the largest known producer of DMSP among coastal and inter-tidal higher plants. Here we studied i) the weekly to seasonal variability and the depth variability of DMSP and its related compound dimethylsulfoxide (DMSO) in P...
Termites are eusocial insects having evolved several feeding, nesting and reproductive strategies. Among them, inquiline termites live in a nest built by other termite species: some of them do not forage outside the nest, but feed on food stored by the host or on the nest material itself. In this study, we characterized some dimensions of the ecolo...
Shallow-water antipatharians host many symbiotic species, which spend their adult life with their host and/or use them to have access to food. Here we determine the trophic relationships between four common macrosymbionts observed on/in Cirripathes anguina, Cirrhipathes densiflora and Stichopathes maldivensis in SW Madagascar. These include the myz...
We reconstructed the first long-term (1968–2015) spatiotemporal trends of perfluoroalkyl substances (PFAS) using archived body feathers of white-tailed eagles (Haliaeetus albicilla) from the West Greenland (n = 31), Norwegian (n = 66), and Central Swedish Baltic coasts (n = 50). We observed significant temporal trends of perfluorooctane sulfonamide...
Rationale: Stable isotope analysis is used to investigate the trophic ecology of organisms and, in order to use samples from archived collections, and it is important to know whether preservation methods alter the results. This study investigates the long-term effects of four preservation methods on sea stars isotopic composition and isotopic nich...
Among the fauna inhabiting the Posidonia oceanica seagrass meadow, holothurians are particularly abundant and provide essential ecological roles, including organic matter recycling within seagrass sediments. This study aimed to investigate the trophic niche of four holothurians of the order Holothuriida [Holothuria poli (Delle Chiaje, 1824), Holoth...
The pigments are present in all photosynthetic organisms, mainly as light-gathering agents for photosynthesis and photo-protection (Porra et al., 1997), namely chlorophylls (Chls), carotenoids, photoprotective compounds and their derivatives. These are commonly stored in the sediments of lacustrine environments and are produced by algae, phototroph...
Paleolimnology is defined as the study of physical, chemical and biological conditions through the information stored in the sedimentary deposits of lakes. In most cases, lacustrine sediments also contain important indirect records of past environmental changes, such as those of the late Quaternary...
The black caiman is one of the largest neotropical top predators, which means that it could play a structuring role within swamp ecosystems. However, because of the difficulties inherent to studying black caimans, data are sorely lacking on many aspects of their general biology, natural history, and ecology, especially in French Guiana. We conducte...
In the Kerguelen Islands, the multiple effects of climate change are expected to impact coastal marine habitats. Species distribution models (SDM) can represent a convenient tool to predict the biogeographic response of species to climate change but biotic interactions are not considered in these models. Nevertheless, new species interactions can e...
The spatiotemporal trends of mercury (Hg) are crucial for the understanding of this ubiquitous and toxic contaminant. However, uncertainties often arise from comparison among studies using different species, analytical and statistical methods. The long-term temporal trends of Hg exposure were reconstructed for a key sentinel species, the white-tail...
The whitemouth croaker (Micropogonias furnieri) is one of the most commercially important species along the Atlantic coast of South America. Moreover, some of its biological traits (long life span, inshore feeding, high trophic position) make this species a suitable sentinel of coastal pollution. Here, we investigated contamination by multiple lega...
Documenting phenotypic variation among populations is crucial for our understanding of micro-evolutionary processes. To date, the quantification of trophic and morphological variation among populations of coral reef fish at multiple geographical scales remains limited. This study aimed to quantify diet and body shape variation among four population...
The most recent eruption of Mt. Fuji (Japan), the VEI 5 Hōei plinian eruption (CE 1707) heavily impacted Lake Yamanaka, a shallow lake located at the foot of Mt. Fuji. Here, we discuss the influence of the Hōei eruption on the lacustrine sedimentation of Lake Yamanaka using high resolution geophysical and geochemical measurements on gravity cores....
• In large rivers, fish ontogenic development success is mainly influenced by resource availability and by the possibility of species to adapt their diet (i.e. trophic niche). Humans have drastically modified freshwater habitats, notably for navigation purposes. Such modifications may reduce food availability for young of the year (YOY) fish and, c...
Concentrations of organohalogenated contaminants (OHCs) can show significant temporal and spatial variation in the environment and wildlife. Most of the variation is due to changes in use and production, but environmental and biological factors may also contribute to the variation. Nestlings of top predators are exposed to maternally transferred OH...
Tuleariocaris holthuisi and Arete indicus are two ectocommensal shrimps closely associated with the tropical sea urchin Echinometra mathaei. This study provides a comparison of these two E. mathaei symbiotic crustaceans and particularly focuses on the relationship between T. holthuisi and its host’s pigments (i.e. spinochromes), and its dependency...
Genetic diversity is essential for species persistence because it provides the raw material for evolution. For marine organisms, short pelagic larval duration (PLD) and small population size are characteristics generally assumed to associate with low genetic diversity. The ecological diversity of organisms may also affect genetic diversity, with an...
Dear zoologist colleagues, Sorry for cross-posting. Do not hesitate to spread the news. On the 14th and the 15th of December, the ZOOLOGY 2018 meeting will take place back-to-back with the FWO Kennismakers meeting @ZOO (https://www.kennismakers.be/en-gb/home/) in Antwerp that will host a special theme on the Impact of Climate on Ecosystem services...
Seagrass meadows ecosystem engineering effects are correlated to their density (which is in turn linked to seasonal cycles) and often cannot be perceived below a given threshold level of engineer density. The density and biomass of seagrass meadows (Z. marina) together with associated macrophytes undergo substantial seasonal changes, with clear dec...
Antarctic specimens collected during various research expeditions are preserved in natural history collections around the world potentially offering a cornucopia of morphological and molecular data. Historical samples of marine species are, however, often preserved in formaldehyde which may render them useless for genetic analysis. We sampled stoma...
Kelps are major foundation species that support one of the most productive habitats in coastal environments (1-2 kgC.m-2 .yr-1). Despite kelp forests being extensively studied in term of biodiversity and ecological functioning; the area of influence of such productive ecosystem is largely underestimated and remains little-understood. In Europe, the...
Benthic copepods dominate meiofaunal communities from marine phytodetritus, both in terms of numerical abundance and species diversity. Nevertheless, ecological factors driving copepod coexistence and population dynamics are still largely unknown. Here, we aimed to explore feeding habits of four copepod species commonly found in Mediterranean seagr... | https://www.researchgate.net/profile/Gilles-Lepoint |
The term keystone species was first coined by Robert Paine (1966) after extensive studies examining the interaction strengths of food webs in rocky intertidal ecosystems in the Pacific Northwest. One of his study sites, located at Mukkaw Bay, contained a community consistently dominated by the same species of mussels, barnacles, and the starfish, Pisaster ochraceus, which preys upon the other species as a top predator (Figure 1).
Paine (1966) had observed that the diversity of organisms in rocky intertidal ecosystems declined as the number of predators in those ecosystems decreased. He hypothesized that some of these consumers might be playing a greater role than others in controlling the numbers of species coexisting in these communities. He tested his hypothesis in an experiment that involved selecting a "typical" piece of shoreline at Mukkaw Bay, about 8 meters long by 2 meters wide, that was kept free of starfish. This area was compared to an adjacent, undisturbed control area of equal size.
Paine observed dramatic changes in the temperate intertidal ecosystem after Pisaster was artificially removed, compared with the control area, which remained unchanged in its species number and distribution. The intertidal area where Pisaster had been removed was characterized by many changes. Remaining members of the ecosystem's food web immediately began to compete with each other to occupy limited space and resources. Within three months of the Pisaster removal, the barnacle Balanus glandula occupied 60 to 80% of the available space within the study area. Nine months later, Balanus glandula had been replaced by rapidly growing populations of another barnacle, Mitella, and the mussel Mytilus. This phenomenon continued until fewer and fewer species occupied the area and it was dominated by Mytilus and a few adult Mitella. Eventually the succession of species wiped out populations of benthic algae. This caused some species, such as the limpet, to emigrate from the ecosystem because of lack of food and/or space. Within a year of the starfish's removal, species diversity in the study area had significantly decreased from fifteen to eight species (Figure 2).
In his seminal paper that followed this work, Paine (1969) derived the term keystone species to describe the starfish in these intertidal ecosystems. Of these species he commented: "The species composition and physical appearance were greatly modified by the activities of a single native species high in the food web. These individual populations are the keystone of the community's structure, and the integrity of the community and its unaltered persistence through time."
Paine went on to describe the criteria for a keystone species. A keystone species exerts top-down influence on lower trophic levels and prevents species at lower trophic levels from monopolizing critical resources, such as space or key producer food sources. This paper represented a watershed in the description of ecological relationships between species. In the twenty years that followed its publication, it was cited in over ninety publications. Additionally, the original paper describing the intertidal areas was cited in over 850 papers during the same time period (Mills et al. 1993).
Other Keystone Species
There are a number of other well-described examples where keystone species act as determinant predators. Sea otters regulate sea urchin populations, which in turn feed upon kelp and other macroalgae (Duggins 1980). The otters keep the sea urchin populations in check, thus allowing enough kelp forests to remain as a habitat for a variety of other species. As a result, the entire ecosystem is kept in balance. In terrestrial environments, fire ants function as keystone predators by suppressing the numbers of individuals and species of arthropods that could be harmful to agriculture.
Keystone species also play important roles in many other ecosystems (Mills et al. 1993). For example, hummingbirds are sometimes referred to as keystone mutualists because they influence the persistence of several plant species through pollination. On the other hand, keystone modifiers, such as the North American beaver (Castor canadensis), determine the prevalence and activities of many other species by dramatically altering the environment (Figure 3). Species like the Saguaro cactus (Carnegiea gigantea) in desert environments and palm and fig trees in tropical forests are called keystone hosts because they provide habitat for a variety of other species. Keystone prey are species that can maintain their numbers despite being preyed upon, therefore controlling the density of a predator.
Gray Wolves: A Case Study of Keystone Species Removal and Restoration
Gray wolves (Canis lupus, Figure 4) once roamed the western portions of North America from Alaska to Mexico. During the latter part of the nineteenth century, most of the important prey for wolves — bison, deer, elk, and moose — were severely depleted by human settlers. The wolves soon became the enemies of the ranchers and farmers when they turned to preying upon sheep and other livestock (Grooms 1993, Breck & Meier 2004, Outland 2010).
When the federal government set aside the Greater Yellowstone Ecosystem (GYE) as a national park in 1872, about three to four hundred wolves were present, preying mostly upon large hooved ungulates such as elk (Cervus canadensis, Figure 5) and bison (Yellowstone Association 1996). Fearing the wolves' impact on elk and bison herds as well as livestock owned by area ranchers, the federal government began eradicating the wolf population. Bounty programs that continued until 1965 offered as much as $50 per wolf. Wolves were trapped, shot, dug from their dens, hunted with dogs, and poisoned. In Yellowstone National Park, park rangers killed the last two remaining pups in 1924. By the 1930s wolves had been effectively eliminated from the contiguous 48 States and Mexico and only remained in high numbers in Alaska.
With their primary predator eliminated, elk populations exploded, leading to the overgrazing of plants, especially those found in riparian zones (Laliberte & Ripple 2004). Significant declines in the populations of many plant species (e.g., aspen, willow) resulted, which in turn influenced other wildlife, such as beaver and songbird populations (Ripple & Beschta 2004, Halofsky & Ripple 2008). Intensive browsing of aspen (Populus tremuloides) stands, for example, led to a rapid decline in the number of seedlings and root sprouts growing into saplings and trees. For many stands of these trees, only large diameter trees (i.e., those that had matured before the wolves were eradicated) remained.
Disappearance of these and other plant species not only caused the loss of habitat for many other animals but also influenced other ecological factors (Smith et al. 2009), including stream bank stability, the deposition of organic matter and fine sediment in riparian zones, water temperature regulation via shading, and nutrient cycling. The removal of wolves thus led to the instability of riparian and other environmentally sensitive areas.
Wolves were reintroduced to Yellowstone in 1995. Despite some setbacks (e.g., disease outbreaks within the fledgling wolf packs), recovery efforts in the GYE have greatly surpassed expectations. Since their reintroduction, wolves have overwhelmingly targeted elk over other prey. This has coincided with an increase in willow heights in several areas, which may indicate that a wolf-elk-willow trophic cascade has been reestablished within the GYE. Furthermore, investigators believe that restoration of willow populations has led to a ten-fold increase in beaver populations (Smith 2004) as well as a significant songbird rebound (Baril & Hansen 2007).
Halofsky & Ripple (2008) found that aspen browsing by elk had ceased in areas burned during the historic 1988 fires but continued in unburned areas. These results were attributed to the increased risk of wolf predation in burned areas. The authors proposed that a recoupling of fire with increased predation risk from wolves may help improve aspen restoration. The results also suggest that much more research needs to be conducted to determine the effects of wolf reintroduction into the GYE.
Summary
The concept of keystone species was first proposed and demonstrated in the 1960s by the dominance of top-predator starfish in intertidal ecosystems. Keystone species are species that play a disproportionately large role in the prevalence and population levels of other species within their ecosystem or community. The recovery of the gray wolf after its eradication from Yellowstone National Park, almost ninety years ago, demonstrates how crucial keystone species are to the long-term sustainability of the ecosystems they inhabit. Most importantly, the preservation and restoration of keystone species is essential for maintaining and/or reestablishing the historic structure and function of the ecosystems they inhabit.
References and Recommended Reading
Baril, L. & Hansen, A. Avian response to willow height growth in Yellowstone's Northern range. Report to Yellowstone National Park, 2007.
Breck, S. & Meier, T. Managing wolf depredation in the United States: past, present, and future. Sheep and Goat Research Journal 19, 41–46 (2004).
Duggins, D. O. Kelp beds and sea otters: an experimental approach. Ecology 61, 447–453 (1980).
Grooms, S. Return of the Wolf. Minocqua, WI: Northword Press, 1993.
Halofsky, J. & Ripple, W. Recoupling fire and aspen recruitment after wolf reintroduction in Yellowstone National Park. Forest Ecology and Management 256, 1004–1008 (2008).
Laliberte, A. S. & Ripple, W. J. Range contractions of North American carnivores and ungulates. BioScience 54, 123–138 (2004).
Mills, L. S. et al. The keystone-species concept in ecology and conservation. BioScience 43, 219–224 (1993).
Outland, K. Who's afraid of the big bad wolf? The Yellowstone wolves controversy. Journal of Young Investigators 11, (2010).
Paine, R. T. Food web complexity and species diversity. American Naturalist 100, 65–75 (1966).
Paine, R. T. A note on trophic complexity and community stability. American Naturalist 103, 91–93 (1969).
Ripple, W. J. & Beschta, R. L. Wolves, elk, willows, and trophic cascades in the upper Gallatin Range of southwestern Montana, USA. Forest Ecology and Management 200, 161–181 (2004).
Smith, D. W. 2003 Beaver Survey. Internal Memorandum. Yellowstone National Park, 2004.
Yellowstone Association. The Yellowstone Wolf: A Guide and Sourcebook. Edited by P. Shullery. Winnipeg: Red River Books, 1996. | https://www.nature.com/scitable/knowledge/library/keystone-species-15786127/?error=cookies_not_supported&code=cf9cba49-797d-49af-81ad-0e0109c37c7e
Chondrichthyans (sharks, rays and chimaeras) are one of the most speciose groups of higher-order predators on the planet and are often cited as playing an important functional role in many ecosystems. However, most studies to date have focused on oceanic and shelf habitats, and there is limited information on the ecological role that chondrichthyans play in the deep sea. This research aims to examine the trophic and spatial ecology of deep-sea chondrichthyans using stable isotope analysis. Stable isotopes of carbon and nitrogen vary among different trophic levels and between spatially separated areas, and therefore provide a potential tool for uncovering some ecological characteristics of deep-water chondrichthyans. In this study, I found that on a global scale, oceanic sharks appear to transfer nutrients over large spatial scales, whereas sharks found in shelf habitats couple ecologically varied food webs close to their capture location. Although global data is limited for deep-sea sharks, in the northeast Atlantic it appears that sharks found on seamounts are more tightly coupled to pelagic production than their counterparts on the continental slopes. Continental slope habitats may provide access to more isotopic niches, where sharks integrate nutrients from benthic and pelagic nutrient pathways. On the other hand, chimaeras appear to fill a unique role feeding on benthic prey items that are inaccessible to other fishes (e.g. hard-shelled benthic animals). Depth gradients in nutrient availability are reflected in the bathymetric distribution patterns of chondrichthyan families, with depth segregations likely reducing interspecific competition for resources. For some of the largest shark species in this ecosystem, such as the Portuguese dogfish (Centroscymnus coelolepis) and leafscale gulper shark (Centrophorus squamosus), whole life-history ecology was recovered from sequential analysis of eye lens proteins. Both these species appear to undertake relatively consistent latitudinal migrations linked with ontogeny and reproductive development. This study reveals the ecological characteristics of diverse deep-sea chondrichthyan assemblages, and how trophic and spatial behaviours facilitate the transfer of nutrients in these ecosystems. Subsequently, chondrichthyans likely play an important role in deep-sea ecosystems and should be managed appropriately within fisheries. | https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.729771
Wildlands Network published For the Wild in 2017 to celebrate 25 years of reconnecting nature in North America. Every couple of weeks, we’ll be posting a new excerpt from this inspiring collection of prose, poetry, and photographs as a special feature on our website. Please join the Rewilding Society or our Wildlands Stewards giving circle to receive a bound copy of For the Wild. Visit our Donate page to learn more.
The following is an excerpt from an article that ran in the Fall 1998 issue of Wild Earth.
Rewilding and Biodiversity: Complementary Goals for Continental Conservation
by Michael Soulé and Reed Noss
THE FOURTH CURRENT—along with Monumentalism, Biodiversity Conservation (including representation of ecosystems), and Island Biogeography—in the modern conservation movement is the idea of rewilding—the scientific argument for restoring big wilderness based on the regulatory roles of large predators. Until the mid-1980s, the justification for big wilderness was mostly aesthetic and moral. The scientific foundation for wilderness protection was yet to be established.
We recognize three independent features that characterize contemporary rewilding: large, strictly protected, core reserves; connectivity; and keystone species. In simplified shorthand, these have been referred to as the three Cs: Cores, Corridors, and Carnivores.
Keystone species enrich ecosystem function in unique and significant ways. Although all species interact, the interactions of some species are more profound and far-reaching than others, such that their elimination from an ecosystem often triggers cascades of direct and indirect changes on more than a single trophic level, leading eventually to losses of habitats and extirpation of other species in the food web. Top carnivores are often keystones, but so are species that provide critical resources or that transform landscapes or waterscapes, such as sea otters, beavers, prairie dogs, elephants, gopher tortoises, and cavity-excavating birds. In North America, it is most often large carnivores that are missing or severely depleted.
Three major scientific arguments constitute the rewilding argument and justify the emphasis on large predators. First, the structure, resilience, and diversity of ecosystems are often maintained by “top-down” ecological (trophic) interactions that are initiated by top predators. Second, wide-ranging predators usually require large cores of protected landscape for secure foraging, seasonal movement, and other needs; they justify bigness. Third, core reserves are typically not large enough in most regions; they must be linked to ensure long-term viability of wide-ranging species. In addition to large predators, migratory species such as caribou and anadromous fishes also justify connectivity in a system of nature reserves. In short, the rewilding argument posits that large predators are often instrumental in maintaining the integrity of ecosystems. In turn, large predators require extensive space and connectivity.
The ecological argument for rewilding is buttressed by research on the roles of large animals, particularly top carnivores and other keystone species, in many continental and marine systems. Studies are demonstrating that the disappearance of large carnivores often causes these ecosystems to undergo dramatic changes, many of which lead to biotic simplification and species loss. On land, these changes are often triggered by exploding ungulate populations. For example, deer, in the absence of wolves and cougars, have become extraordinarily abundant and emboldened in many rural and suburban areas throughout the United States, causing both ecological and economic havoc. | https://wildlandsnetwork.org/blog/wild-5-rewilding-biodiversity/ |
There has been a dramatic increase in data available on various aspects of marine Arctic communities over the last decade, particularly on the pan-Arctic distribution of species and trophic interactions. New studies include investigations of the trophic links within different systems (e.g. the central role of Arctic and Polar cod and Calanoid copepods in the pelagic food webs). The review of Arctic ecosystems and VECs described by the authors in this section led to suggestions of further research which can reduce remaining uncertainties. The more generic recommendations for further research compiled from this review are summarized below while recommendations that are important for improving Arctic NEBA are listed separately.
Continue studies on Arctic faunal groups. Some Arctic populations are now well understood in terms of natural history and toxicological profiles; however, other groups of species require further examination regarding trophic roles, distributions, abundances, and ecotoxicological profiles based on annual and interannual patterns.
Knowledge of the interrelationships of Arctic species in areas of high productivity could benefit from further attention, especially with respect to migration and emigration through river systems, lagoons, and polynyas.
The distributions and abundances of benthic and bathypelagic communities of the Arctic are not well known. Boulder patch and other isolated areas of hard substrate as well as lagoon systems have proven to be important areas of increased production in the Arctic, but have received little attention. While there have been some studies evaluating the deep sea communities particularly in the Norwegian Deep, eastern Beaufort, and parts of Baffin Bay, there are substantial portions of the Arctic deep water that have not been assessed. This is a difficult area to characterize; however, recent studies in the High Canadian Arctic (Geoffroy et al. 2011, 2013; Reist and Majewski 2013) indicate that during certain portions of the year, Arctic cod can be found in large numbers in deeper waters of the Arctic. There are indications that VECs known to occupy the deep sea habitats in other oceanic basins are present in the Arctic (e.g. myctophids, deep-sea corals), however, these communities are not likely to be disrupted by near-term oil and gas activities. Deep water assessments would become important if there were a deep water release from a drilling platform.
Continue pan-Arctic data collation. Data from holistic efforts, such as BREA and RUSALCA, could be collated and put into a GIS platform. VEC species or regions for which there are not sufficient data may require additional data collection efforts. These types of efforts generally occur as new areas are opened for exploration or development.
Evaluate ecosystem services. Ecosystem services are the conditions and processes through which natural ecosystems and the species that comprise them sustain and fulfill human needs (Daily 1997). Marine ecosystem services include functions that support human life, such as the production of ecosystem goods (e.g. seafood) and cleansing and sequestering wastes (e.g. uptake of excess nutrients by phytoplankton). The marine ecosystem confers intangible aesthetic and cultural benefits (Kaufman and Dayton 1997; Peterson and Lubchenco 1997) to residents of the Arctic.
There are only a handful of studies useful to understanding the trophic interactions of emerging habitats of concern (e.g. interface habitats and deep-sea basins of the Arctic). The recommendations presented below indicate where increased knowledge of Arctic ECs and VECs would result in reducing existing uncertainties in NEBA assessments. No prioritization has been made to the list; for some of the recommendations, surrogate data may be already available.
Expand knowledge of Arctic ECs. Assessment of Arctic ECs should be expanded to include the communities populating interface habitats. These habitats include: sea surface microlayer, ice edges and margins, under-ice flora and fauna, water mass convergence zones, demersal communities, and shorelines. These specialized habitats and resident or transient species may contribute significantly to the overall functioning, diversity, and resilience of the Arctic. While the effects caused by individual OSR actions to key VECs living in the open water pelagic environment has been examined, repercussions of oil exposure to aggregating communities within convergent interface habitats is less well understood.
The surface microlayer (SML). The surface microlayer refers to the uppermost surface layer(s) of the ocean. The depth of the layer(s) is defined differently by physical oceanographers, chemists and biologists, based on the conceptual models developed to address their different fields of interest. Physical oceanographers and atmospheric scientists view the layer as the interface between the air and water, while chemical oceanographers describe the layer based on the behavior of hydrophilic and hydrophobic moieties of chemical compounds. Biological oceanographers define these layers based on where organisms and life stages reside or interact with the sea surface. Certain communities of plants, invertebrates and vertebrates spend all of their life history at the sea surface, and these are typically referred to as neuston. The SML also acts as a nursery ground for many larval fish and invertebrate species, including larval species settling onto intertidal surfaces. This group of surface-oriented species represents the community of organisms most closely associated with surfaced oil as the oil sheen spreads over the water's surface. Whether the oil sheen is only a few microns or centimeters thick, the organisms that contact this layer are exposed to the highest oil concentrations, with the potential of activating multiple modes of toxic action. In some cases larger marine organisms can skim-feed on the concentrated masses of food and contaminants (certain fish, birds, and mammals), while others swim through the layer(s) to breathe. An understanding of the role of the neuston in the pelagic and intertidal food webs is needed to better characterize the impacts of surfaced, untreated oil and the potential recovery rate of this vital micro-compartment. Exposure to oil at the upper sea surface layer may result in additional toxic stress via different modes of toxic action, including fouling and respiratory stress from evaporating volatile compounds.
Ice edges and margins, polynyas and other interface communities. Polynyas have been identified as areas of enriched abundance and production during the Arctic winter. Studies of pelagic-benthic coupling also show that the increased pelagic activity is mirrored in the benthic community. These different communities are typically aggregations of species already known to be important in other portions of the Arctic. Identifying and further studying these areas is important for the selection of OSR alternatives.
Emphasize functional role of faunal groups. The list of VEC species to be included in NEBA analyses is not static for all areas of the Arctic. Emphasis will be placed on functional roles while addressing regional differences. New information regarding trophic food webs, population abundances and distribution patterns as well as toxicological profiles of VECs should be continuously expanded and updated (e.g. for ophiuroids, hard corals, jellyfish and neuston). Population size estimates of VECs that occupy interface habitats compared to bulk pelagic waters is needed to determine the relative impact of the various OSR options.
Increase understanding of resiliency and potential for recovery of Arctic species and populations. An evaluation of the resiliency of potentially impacted populations of VEC species within Arctic ECs is critical in determining the ultimate biological consequences of each oil spill response considered during emergency oil spill response planning. Generic metrics for resilience should be developed and scored for keystone VECs. Refer to Sections 7, 8 and 9 for further concept development (Population Effects Modeling, Ecosystem Recovery, and NEBA for Oil Spill Response Options in the Arctic, respectively). | http://neba.arcticresponsetechnology.org/report/chapter-2/23/231/ |
One of the most refreshing changes to the political landscape of recent times has been the shifting of attitudes regarding energy capture, and the increasingly significant role that renewable sources of energy will play in modern, developed nations. A move away from a strict reliance on fossil fuels has developed over time from something seen as little more than a lofty and utopian ambition, to a necessity if we want to preserve a healthy and inhabitable environment for future generations to enjoy.
However, the truth of the matter is that renewable energy has rarely – if ever – been far from the heart of everyday life in Great Britain. From the prehistoric activity unearthed by archaeologists to the dawn of the industrial revolution, virtually all of the energy used for industry, agriculture, and domestic living was obtained from renewable sources.
The earliest examples of renewable energy in British history begin with the land clearances and agricultural developments of neolithic settlers, some seven thousand years ago. Remnants of burnt plant and organic matter make biomass our oldest fuel source: one that civilisations have utilised for warmth, cooking and construction since prehistory.
But that is not to say that renewable energy has ever been seen as a primitive alternative to fossil fuels, or a throttle to the pace of progress. On the contrary: renewable energy was the great driver of growth in Britain and across Europe throughout history. Hydrokinetic energy and water mills powered the Roman expansion into Britain and, by 1086, there were at least five and a half thousand water mills in the British Isles.
With the adoption of coal (and, later, imported oil), heavy industry was able to flourish at an accelerated pace across the British Empire, while at home the use of water and windmills to power machinery and construction remained widespread. And with the discovery of electrical energy, scientists began to experiment with the idea of generating electricity from renewable sources. Two scientists at King's College London, William Grylls Adams and Richard Evans Day, produced photovoltaic effects with selenium cells exposed to sunlight: the discovery upon which all solar power generation has since been built. It was a Scotsman, Professor James Blyth, who invented the world's first electricity-generating wind turbine in 1887.
But it was only in the second half of the twentieth century that public opinion really began to shift back in favour of renewables, primarily for the ecological benefits they afforded. Hydroelectric, wind, biomass, and solar photovoltaic technologies have since enjoyed increased investment and growing government support.
The Hydro-Electric Development (Scotland) Act of 1943 led to the construction of several hydroelectric power stations, including the Cruachan dam on Loch Awe in 1965. But it was 1991 before the UK's first onshore wind farm went online, near Delabole in Cornwall. With the construction of 30 turbines off the Welsh coast between Prestatyn and Rhyl, Britain had its first offshore wind farm in 2003. By 2007, the combined generating capacity of wind power alone had reached an impressive two gigawatts.
By 2012, and following a worldwide political shift towards renewable energy, the UK became home to approximately 288,000 solar energy projects and commercial ventures. Noting that solar technology was becoming increasingly versatile and scalable in its energy generation, the government set a benchmark target of four million solar-powered homes by 2020. This forms a significant portion of the broader objective that the UK should source fifteen per cent of its total power output from renewable means by the same year, building a cleaner and more sustainable future for all.
Throughout our history, renewable energy has been the norm, and fossil fuels are the scarce alternative which provides power for an accelerated expansion into industrialisation. As we begin to re-contextualise non-renewable energy as a powerful, yet limited, reserve fuel source rather than the norm, we can begin to look at our society in a whole new light. | http://smallbusinessbible.org/a-brief-history-of-green-energy-in-the-united-kingdom/ |
We like to assume that producing a new megawatt-hour of electricity from wind means we’ve eliminated a megawatt-hour of fossil-fuel produced electricity.
But it doesn’t usually work that way, according to University of Oregon sociologist Richard York, and that’s why he believes it will take economic and political changes – not just cool new clean technology – to shift us away from our dependence on fossil fuels.
York makes that argument in a paper recently published in the journal Nature Climate Change. He notes that while most countries are relying on technological advances, like wind and hydro power, to limit the use of fossil fuels, this approach ignores the “complexity of human behavior.”
He says that the addition of such renewable energy technology is doing little to actually displace the use of fossil fuels.
York’s conclusions are based on studying electricity use in 130 countries in the past 50 years. He found it took more than 10 units of electricity produced from non-fossil sources, such as nuclear, hydropower, geothermal, wind, biomass and solar, to displace a single unit of fossil fuel-generated electricity.
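The scale of that finding is easier to see as arithmetic. Below is a minimal sketch in Python, assuming the study's cross-country average of "more than 10 units of non-fossil electricity per unit of fossil electricity displaced" can be treated as a fixed 10:1 ratio; it is an aggregate average, not a per-country constant:

    # Rough illustration of York's displacement finding: on average,
    # each new unit of non-fossil electricity displaced less than a
    # tenth of a unit of fossil-fuel electricity.
    DISPLACEMENT_RATIO = 0.1  # assumed from "more than 10 units to displace 1"

    def fossil_displaced(non_fossil_added_twh):
        """Estimate fossil generation displaced by new non-fossil supply (TWh)."""
        return non_fossil_added_twh * DISPLACEMENT_RATIO

    # Adding 100 TWh of wind displaces only about 10 TWh of fossil generation
    # under this average ratio; the rest shows up as growth in total supply.
    print(fossil_displaced(100.0))  # -> 10.0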
Take nuclear power: It began growing into a significant source of power beginning in the mid-20th century, but world use of fossil fuels kept right on growing with it. He fears the same thing could happen with wind, solar and other green power sources.
“I’m not saying that, in principle, we can’t have displacement with these new technologies, but it is interesting that so far it has not happened,” York explained. “One reason the results seem surprising is that we, as societies, tend to see demand as an exogenous thing that generates supply, but supply also generates demand. Generating electricity creates the potential to use that energy, so creating new energy technologies often leads to yet more energy consumption.”
York concludes that we need to not just be looking to technology for changes, but to think about the technology in a social context. He said society needs to discover what political and economic factors lead to true displacement of fossil fuels.
“We need to be thinking about suppressing fossil fuel use rather than just coming up with alternatives alone,” he added. | https://tgdaily.com/technology/sustainability/62280-more-clean-energy-doesnt-mean-less-dirty-energy/ |
Modern human civilization is built on, and continues to be principally dependent on, large quantities of energy to sustain it. The primary source of this energy has been fossil fuels.
As early as the 19th century, fossil fuels were being used to power industries, and to date they continue to be the primary source of energy for man's industrial efforts. However, as a result of the exponential industrial expansion and the population explosion that subsequently followed, these traditional sources of energy have been stretched to their limits as the demand for energy continues to grow. In addition to this threat of exhaustion, it has in recent decades been acknowledged that fossil fuels are largely responsible for adverse effects on the environment. A wider exploitation of renewable energy sources has been seen as the key to enhancing the energy security of many nations as well as mitigating the environmental effects caused by fossil fuels. Governments have therefore begun to seek alternative energy sources such as wind, wave, ocean currents and solar energy. These sources are to act as alternatives to the use of existing oil and natural gas sources.
While alternative energy sources have been hailed by some as the only way through which man will be able to satisfy his energy needs, other people view these means as unfeasible. This paper will argue that renewable energy sources, if fully utilized, can end the world's energy supply problems and mitigate the environmental hazards caused by fossil fuels. To reinforce this assertion, this paper will engage in a detailed discussion of why renewable energy sources should be adopted and reliance on fossil fuels decreased.
The world cannot run on fossil fuels indefinitely since there is only a finite amount of fossil fuel reserves on earth. Issitt and Warhol reveal that according to a BP report, only 1,200 billion barrels of crude oil remain in the world's oil reserves (1). Bearing in mind that the current consumption rate is about 31 billion barrels per annum, it can be projected that the world will run out of its reserves in the next 4 decades.
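That four-decade projection follows from a simple static reserves-to-production calculation, sketched below; it deliberately ignores demand growth and new discoveries, so it illustrates the cited figures rather than making a forecast:

    # Static reserves-to-production estimate using the BP figures cited above.
    reserves_bbl = 1200e9       # remaining crude oil reserves, barrels
    consumption_bbl_yr = 31e9   # global consumption, barrels per year

    years_remaining = reserves_bbl / consumption_bbl_yr
    print(round(years_remaining, 1))  # -> 38.7, i.e. roughly four decades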
This is a bleak reality, since most of the world's technology is made to utilize fossil fuels (Issitt and Warhol 1). As such, governments must invest more in alternative energy so as to reduce the dependency on fossil fuels, which are predicted to run out in the not so distant future. The overreliance on fossil fuels also increases the dependency of the U.S. on the oil-producing nations. While the U.S. does produce some fossil fuels, the US Department of Energy notes that the contribution of the US is only 3% while its consumption is up to 25% (Issitt and Warhol 1). This points to a high dependency on oil imported from other countries, which inevitably places the country at the mercy of the oil-producing giants. For example, following the 1967 Arab-Israeli war, the oil-producing Arab states imposed an oil embargo on the US for its support of Israel.
With such realities in mind, it makes sense to be self-sufficient. The only way that the US can move towards self-sufficiency in energy is through exploiting renewable energy sources. Proponents of fossil fuels propose that measures can be taken to mitigate their adverse effects and ensure their sustainability. Rich and Morley assert that while such efforts at producing environmentally friendly fossil fuels such as coal are praiseworthy, they are misguided and will not make a significant difference (1). This is because it is impossible to avoid the production of greenhouse gases when burning fuels, regardless of the technology being utilized. Instead of focusing on ways to increase the environmental friendliness of fossil fuels, the available renewable energy alternatives should be pursued.
Rich and Morley argue that there are many feasible alternative energy sources which, if properly researched, could render the need for fossil fuels obsolete. One of the reasons why fossil fuels are favored over alternative energy sources is their relatively cheap cost. This edge that fossil fuel has over renewable sources is quickly being closed, and it can be projected that in the near future, alternative energy sources will dominate the market. Witherbee suggests that one of the reasons why most alternative energy sources have failed to work out in the past has been a lack of political and economic support. With the continued penetration into the mainstream market of alternative energy sources such as solar power and heat pumps that run on geothermal energy, it is conceivable that alternative energy sources will become cheaper as the market grows, therefore becoming more competitive. Given the numerous advantages that alternative energy sources have over fossil fuels, most consumers will opt to utilize the cheap and renewable energy sources. The need for fossil fuel poses a threat to the natural environment, and if the demand for fossil fuels continues unabated, it is likely that rampant drilling for oil will occur. An especially troubling reality is the proposed drilling in the Arctic National Wildlife Refuge, which has been a protected wildlife site for decades.
Proponents of fossil fuel extraction claim that such a move would result in cheaper fuel as well as jobs for the American population. Issitt and Morley declare that the benefits from such moves would be minimal and only periodic, and the authors blame corporate contributors and politicians for furthering the interests of the petroleum industry at the expense of the environment even though they know that oil is non-renewable. Lizza demonstrates the role that politics plays in determining the fate of the environment by documenting how politicians support drilling in protected areas just so that they can get political leverage. This is very irresponsible behavior, since the welfare of the people in the long run and the sustainability of resources should be the main determinants of the stance taken by politicians. Witherbee asserts that rather than expanding fossil fuel exploration while all the time knowing that it is a limited energy source, alternative energy sources should be sought out for a more permanent solution.
Proponents of fossil fuels assert that while alternative energy sources purport to be the solution to the problems that fossil fuels have caused, alternative energy sources simply cannot cater for the huge energy needs of the world. Bowman and Griswold assert that while it is a fact that the world's fossil fuel reserves are dwindling, alternative energy sources are incapable of replacing them, and the only solution would be to use fossil fuels more efficiently through conservation techniques (11). To reinforce this claim, the authors demonstrate that all the current implementations of renewable energy sources are either inefficient or prohibitively expensive, therefore making them unfeasible (Bowman and Griswold 13). Fossil fuels, on the other hand, remain cheap and therefore attractive to consumers all over the country. To counter this claim, Rich and Morley demonstrate that there are numerous renewable energy alternatives, and if they are extensively researched, they could rival fossil fuels and eventually cater for the energy needs of the world (1). One of the points that opponents of fossil fuels raise is that fossil fuels are detrimental to the environment and result in the proliferation of global warming. However, proponents of fossil fuels point out that renewable energy sources, which are hailed by alternative energy lobbyists as environmentally friendly, are not only expensive to the taxpayer but also result in environmental degradation.
Bowman and Griswold argue that some renewable sources such as ethanol and biofuel result in deforestation so as to create more land for corn and other resources used to produce the fuels (3). Renewable energy sources such as solar power, as obtained through Concentrating Solar Power technologies, require the huge amounts of land that large-scale production plants demand. The large-scale installation of mirrors has a negative effect on the ecosystem, since deployment of these structures leads to the shading or complete coverage of large tracts of land. The ecosystem that exists on these shaded surfaces will therefore be affected by the lack of sunlight. This therefore negates the notion that renewable energy sources are favorable to the environment.
While it is true that some renewable energy sources do result in environmental degradation, they do not do so on the same scale as fossil fuels. In addition, not all renewable energy sources have adverse environmental effects. For example, wind power is a very environmentally friendly alternative energy source. As such, not all renewable energy sources should be branded as detrimental to the environment.
Despite the US government having invested billions of dollars in alternative energy projects over the past 4 decades, there is still no sign of a feasible alternative to fossil fuels. For this reason, opponents of alternative energy sources argue that more effort should be directed towards increasing the efficiency of the available fossil fuels instead of wasting time and money researching renewable sources which hold little promise of providing solutions. Pearson reveals that the US government has lost billions in revenue as a result of tax breaks designed to motivate renewable energy production (1). The US government has also made enormous contributions to alternative energy sources such as the hydrogen fuel cell engine despite a lack of support for the feasibility of such technologies (Pearson 1). While this argument does hold true in that alternative energy sources are not yet matured and hence cannot compete with fossil fuels in terms of efficiency or pricing, this should not be used as the basis to stop research into renewable sources.
Only by extensive research and investment into alternative sources can renewable energy compete with and ultimately replace fossil fuels, which have been in use for over a century. Alternative energy sources do result in some problems which are not present with fossil fuels. Roper documents that wind towers, which are praised by renewable energy lobbyists for their environmental friendliness, pose a risk to the aviation industry by disrupting radar systems. Solar power plants, which are characterized by huge reflective structures used to concentrate the sun's rays, can also interfere with air transportation systems. Aircraft operations in particular are at risk if reflected light beams become misdirected into aircraft pathways, which could have catastrophic results. While these dangers are real, they can be mitigated by placing solar power plants and wind plants away from flight paths.
This paper has argued that renewable energy sources, if fully utilized, can end the world's energy supply problems and mitigate the environmental hazards that have resulted from the overexploitation of fossil fuels. It has been documented that fossil fuels are not only dwindling in supply but also have an adverse effect on the environment as a result of the greenhouse gases they emit. Through a detailed discussion of the many advantages that can be reaped from embracing alternative energy sources, this paper has proposed that resources should be channeled into alternative energy research so as to eventually cause fossil fuels to lose their primacy as the chief energy source. However, this paper has recognized that there are some problems inherent in alternative energy sources. The major problem is pricing, which is the reason why most people still favor fossil fuels.
It has been suggested that with more governmental support, these problems can be offset, therefore making alternative energy sources competitive. By embracing alternative energy sources, man will not only be able to survive favorably when fossil fuel runs out but will also safeguard his natural environment.
Bowman, Jeffrey, and Marcus Griswold. "Counterpoint: Alternative Energy Won't Solve All the Demands of World Energy Consumption." Points of View: Alternative Energy Exploration (2009): 3. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Issitt, Micah L., and Tom Warhol. "Alternative Energy Exploration: An Overview." Points of View: Alternative Energy Exploration (2009): 1. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Issitt, Micah L., and David C. Morley. "Counterpoint: There are Better Energy Alternatives to Drilling in Alaska." Points of View: Arctic Drilling (2009): 3. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Lizza, Ryan. "As the World Burns." New Yorker 86.31 (2010): 70. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Pearson, John. "Point: Alternative Energy Exploration is Not the Answer." Points of View: Alternative Energy Exploration (2009): 5. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Rich, Alex K., and David C. Morley. "Point: The World Must Actively Explore Alternative Sources of Energy." Points of View: Alternative Energy Exploration (2009): 2. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Roper, Peter. "Ill wind blowing: Towers foul up radar." Pueblo Chieftain, The (CO) 05 Apr. 2010. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Witherbee, Amy. "Counterpoint: Saving the Alaskan Frontier." Points of View: Arctic Drilling (2009): 6. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
Witherbee, Amy. "Counterpoint: No Alternative." Points of View: Alternative Energy Exploration (2009): 6. Points of View Reference Center. EBSCO. Web. 27 Nov. 2010.
| https://graceplaceofwillmar.org/alternative-energy/
Q. How is utilization of Green Hydrogen essential for meeting India's future energy demands and what are the critical challenges that India faces in the utilization of Green Hydrogen? (250 Words)
Approach Answer:
Introduction: ‘Green hydrogen’ is a zero-carbon fuel made by electrolysis, using renewable power from wind and solar to split water into hydrogen and oxygen. Because it is generated entirely from natural sources such as wind and solar, scaling it up will be a major step forward in achieving the target of ‘net zero’ emissions.
Currently, India consumes around 5.5 million tonnes of hydrogen annually, primarily produced from imported fossil fuels as grey or blue hydrogen. According to the Council on Energy, Environment and Water (CEEW), green hydrogen demand in India could be up to 1 million tonnes by 2030.
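For a sense of scale, the renewable electricity needed to electrolyse 1 million tonnes of green hydrogen can be estimated with a back-of-the-envelope sketch; the 50 kWh/kg figure below is an assumed round number for practical electrolyser efficiency, not a value from the CEEW study:

    # Back-of-the-envelope: renewable electricity for 1 Mt/yr of green hydrogen.
    hydrogen_kg = 1e6 * 1000   # 1 million tonnes expressed in kilograms
    kwh_per_kg = 50.0          # assumed electrolyser input per kg of H2

    electricity_twh = hydrogen_kg * kwh_per_kg / 1e9  # kWh -> TWh
    print(electricity_twh)  # -> 50.0 TWh/yr of dedicated renewable generation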
Advantages of Green Hydrogen:
- Zero-carbon fuel: its production emits no CO2 when powered by renewables, and its use produces only water.
- Energy storage: surplus wind and solar power can be stored as hydrogen, helping to balance the intermittency of renewables.
- Deep decarbonization: it can replace fossil fuels in hard-to-electrify sectors such as steel, fertilizers, refining and heavy transport.
- Energy security: domestic production would reduce India's dependence on imported fossil fuels.
Challenges in the utilization of Green Hydrogen:
- High cost of production: electrolysers and dedicated renewable power currently make green hydrogen considerably more expensive than grey hydrogen.
- Scale of renewable capacity: meeting even 1 million tonnes of demand would require tens of terawatt-hours of additional renewable generation (see the sketch above).
- Storage and transport: hydrogen's low volumetric energy density requires compression, liquefaction or conversion to carriers such as ammonia, and it embrittles conventional pipelines.
- Water requirements: electrolysis consumes roughly nine litres of water per kilogram of hydrogen, a constraint in water-stressed regions.
- Infrastructure and demand: India currently lacks hydrogen pipelines, refuelling networks and strong demand-side incentives.
Conclusion: India has been focusing on boosting hydrogen production with the help of initiatives such as the National Hydrogen Energy Mission, which aims to produce hydrogen from green energy sources. Indian Railways has also announced the country's first experiment with a hydrogen fuel-cell train, created by retrofitting an existing diesel engine.
Further, Indian corporates are looking to tap into the perceived opportunities. For example, Reliance Industries has committed to reducing the cost of green hydrogen production to $1/kg by 2030. This paints a bright future in this regard. | https://chahalacademy.com/answer-writing/09-Sep-2021/227
The study presents a cost effective electricity generation portfolio for six island states for a 20-year period (2015-2035). The underlying concept investigates whether adding sizeable power capacities of renewable energy sources (RES) options could decrease the overall costs and contribute to a...
- Ensamblado de ficocianina sobre TiO2 nanoestructurado para celdas fotovoltaicas. Enciso, Paula; Minini, Lucía; Álvarez, Beatriz; Cerdá Bresciano, María Fernanda // Innotec;dic2012, Issue 7, p69
The use of renewable energies is of increasing importance due to depletion of fossil fuel sources and environmental damages caused by their utilization. The energy available from the sun is clean and widely distributed. Solar cells are devices used to convert solar energy into electricity. Among...
- Optimal Analysis of Low-Carbon Power Infrastructure of Taiwan. Shyi-Min Lu // Sustainable Energy;
The study assumes that CCS can be successfully applied on the coal-fired and gas-fired power generation in the future, then regardless of the existence of nuclear power, without the substantial expanding of renewables, and with the fossil fuel generation capacity close to the BAU scenario, the...
- Development status and prospect of solar photovoltaic power generation in China. HU Yunyan; ZHANG Ruiying; WANG Jun // Journal of Hebei University of Science & Technology;Feb2014, Vol. 35 Issue 1, p69
The solar energy is an inexhaustible, clean, safe and renewable resource. With the greatest development potential among the five kinds of new energy sources, it is considered to be one of the best alternatives to replace fossil energy in the 21st century. The paper introduces the principle,...
- Buoyant Unstable Behavior of Initially Spherical Lean Hydrogen-Air Premixed Flames. Zuo-Yu Sun; Guo-Xiu Li; Hong-Meng Li; Yue Zhai; Zi-Hang Zhou // Energies (19961073);Aug2014, Vol. 7 Issue 8, p4938
Buoyant unstable behavior in initially spherical lean hydrogen-air premixed flames within a center-ignited combustion vessel has been studied experimentally under a wide range of pressures (including reduced, normal, and elevated pressures). The experimental observations show that the flame...
- Characterisation of CO/NO/SO emission and ash-forming elements from the combustion and pyrolysis process. Houshfar, Ehsan; Wang, Liang; Vähä-Savo, Niklas; Brink, Anders; Løvås, Terese // Clean Technologies & Environmental Policy;Oct2014, Vol. 16 Issue 7, p1339
Bioenergy is considered as a sustainable energy which can play a significant role in the future's energy scenarios to replace fossil fuels, not only in the heat production, but also in the electricity and transportation sectors. Emission formation and release of main ash-forming elements during...
- Renewable Energy and Government Support: Time to ‘Green’ the SCM Agreement? SHADIKHODJAEV, SHERZOD // World Trade Review;Jul2015, Vol. 14 Issue 3, p479
Many governments provide subsidies to shift from ‘dirty’ but cheap fossil fuels to ‘clean’ but expensive renewable energy. Recently, public incentives in the renewable energy sector have been challenged through both dispute settlement procedures of the World Trade...
- Cell Wall Engineering by Heterologous Expression of Cell Wall-Degrading Enzymes for Better Conversion of Lignocellulosic Biomass into Biofuels. Turumtay, Halbay // BioEnergy Research;Dec2015, Vol. 8 Issue 4, p1574
Huge energy demand with increasing population is addressing renewable and sustainable energy sources. A solution to energy demand problem is to replace our current fossil fuel-based economy with alternative strategies that do not emit carbon dioxide. Plant biomass is one of the best candidates...
- MICROALGAE AS AN ALTERNATIVE FEED STOCK FOR GREEN BIOFUEL TECHNOLOGY. Anisha, G. S.; John, Rojan P. // Environmental Research Journal;2015, Vol. 9 Issue 2, p223
The worldwide fossil fuel reserves are on the decline but the fuel demand is increasing remarkably. The combustion of fossil fuels needs to be reduced due to several environmental concerns. Biofuels are receiving attention as alternative renewable and sustainable fuels to ease our reliance on... | http://connection.ebscohost.com/c/articles/97511550/studies-green-fuel-generation-from-renewable-resources |
Given the United States’ recent election results, America’s future on climate and energy policy is uncertain, but global progress on mitigating climate change will continue with or without us...
We have reason to be optimistic because countries around the world have demonstrated their commitments to taking action on global warming. Already, 114 countries have ratified the Paris Climate Change Agreement, which became international law on November 4, 2016, indicating that mitigating climate change is indeed a global priority. Even before this United Nations agreement, countries like China and Germany have made huge investments in renewable energies, steadily diversifying their energy portfolios. Other countries like Sweden or Costa Rica have pushed further, demonstrating their commitment to relying fully on renewable energy sources. Whether or not the United States political landscape will advance progress towards cleaner and more sustainable energies, the rest of the world is demonstrating a shift towards renewables.
While we have yet to see the extent of the Paris agreement’s impact, it is certainly a landmark deal. Each country that ratifies the agreement pledges to fight climate change through the primary goal of keeping global temperature rise to well below 2 degrees Celsius in this century.
UNFCCC Executive Secretary Patricia Espinosa and President of COP22 and Minister of Foreign Affairs and Cooperation of the Kingdom of Morocco Salaheddine Mezouar praised this historic event as “a clear political signal that all the nations of the world are devoted to decisive global action on climate change.” They noted that this agreement not only gives us hope that we might minimize the global impact of climate change, but it also indicates a commitment to building a global renewable energy industry, to climate resilient economies and societies, and to the continued development of climate-focused policies and technologies.
This last idea, that this agreement signifies a global shift towards combatting climate change, is especially important. Around the world, we can see communities, cities, and countries transforming policies to more effectively reduce greenhouse gas emissions by investing in cleaner energy technologies. Nevertheless, this shift towards re-engineering societies and economies will continue to need significant investments. The International Energy Agency’s 2015 World Energy Outlook estimates that in order to fully implement the Paris Agreement, climate pledges require investments of $13.5 trillion for energy efficiency and low-carbon technologies through 2030. Sixty percent of this amount is expected to go towards boosting renewable energy capacity.
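A quick arithmetic check of the renewable share implied by those IEA figures, assuming the 60% share applies to the full $13.5 trillion:

    # Renewable-capacity share of the IEA's estimated investment through 2030.
    total_investment_usd = 13.5e12
    renewables_share = 0.60

    print(total_investment_usd * renewables_share / 1e12)  # -> 8.1 trillion USD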
Energy demand will continue to rise over the years. Unsurprisingly, fossil fuels will continue to make up a substantial percentage of energy use – the U.S. Energy Information Administration estimates this figure to be at 78% in 2040, with natural gas showing the greatest growth among fossil fuel types. Even so, statistics show that the transition to a clean energy economy is very much in progress. Along with the rise in certain fossil fuels, dependence on coal will continue to plateau over the next several decades, and renewables are projected to be the world’s fastest-growing energy source over the 2012-2040 period.
These trends indicate that the world will continue to embrace cleaner energy sources. Indeed, several countries around the world have gone above and beyond what is mandated by international law to integrate renewable energy sources into their portfolios.
International Shift Towards Renewables
In 2015, Swedish Prime Minister Stefan Löfven and Minister for Climate and the Environment Åsa Romson expressed their hope that Sweden would become the world's first fossil-free welfare nation. Then in the summer of 2016, Sweden revealed a plan for 100% of its energy consumption to come from renewable energy sources by 2040. While this agenda may be highly ambitious, Sweden has already demonstrated its commitment to a sustainable future by hitting its 2020 goal of a 50% renewable energy share eight years ahead of schedule, in 2012.
Even if this goal is not reached within the proposed timeframe, Sweden certainly has the ability to make even larger strides towards sourcing all their energy from renewables, since it benefits from government leadership and financial support, which drives investments and policies towards sustainable goals. Already, the government has announced that 4.5 billion kronor (about $496 million) would go towards funding green infrastructure such as solar panels and a smarter energy grid. Another amount of over $115 million would go towards energy storage research and making residential buildings more energy efficient. Sweden may enjoy more abundant than average resources, but it still sets a strong model of what countries can achieve.
Of course, Sweden is not the only country setting an example in these sustainable efforts. Other nations around the world are taking significant strides as well. This article will feature a small sample of the many countries committing to increasing their renewable energy supplies. Three notable leaders in renewable energy include Costa Rica, Germany, and China.
Costa Rica
Between June and August of 2016, Costa Rica was powered fully by renewable energy for 76 consecutive days. Costa Rica's energy sources consist primarily of hydropower (making up about 80% of its energy consumption) and a mix of other common renewable sources, including solar, geothermal, and wind. In 2010, geothermal made up about 13% of its energy use, and this share is expected to increase, which could be an important counterweight to over-reliance on hydropower. Not only has Costa Rica achieved periods without the use of any fossil fuels, but it has also done so at a low cost while covering 99.4% of the country's households.
It is important to note that, much like Sweden, Costa Rica has demonstrated significant progress towards relying completely upon renewable energy due to its size and the fact that its main industries are not highly energy-intensive. Even so, it serves as an important indicator of the direction many other countries could follow.
Germany
Another country at the forefront of embracing renewable energies, Germany has recently revealed that it will build the world's first wind turbines backed up by hydropower batteries. This integration of sources addresses some of the concerns over the availability of renewable energy. GE, which collaborated with a German firm on this project, addressed this in a recent report, noting that "[b]ecause the wind doesn't always blow and the sun doesn't always shine, all forms of renewable energy need some kind of backup source to ensure reliability." Even when wind does not power the turbines, the connected hydro plant would act as a backup battery to keep the turbine moving.
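To make the backup idea concrete, here is a toy hourly dispatch sketch in Python. It illustrates only the general pumped-storage principle, not GE's or the German project's actual control logic, and every figure in it is invented:

```python
# Toy hourly dispatch: pumped hydro fills the gap left by variable wind.
# All numbers are invented for illustration.
DEMAND = 100.0                           # MW, assumed flat demand
wind_output = [120, 90, 40, 0, 60, 130]  # MW, hypothetical hourly wind
reservoir = 200.0                        # MWh stored as pumped water

for hour, wind in enumerate(wind_output):
    gap = DEMAND - wind
    if gap > 0:
        # Wind falls short: draw the reservoir down like a battery.
        hydro = min(gap, reservoir)
        reservoir -= hydro
        print(f"hour {hour}: wind {wind} MW + hydro {hydro} MW, store {reservoir} MWh")
    else:
        # Wind surplus: pump water back uphill to recharge the store.
        reservoir -= gap  # gap is negative here, so the store grows
        print(f"hour {hour}: wind {wind} MW, pumping {-gap} MW, store {reservoir} MWh")
```

A real system would also have to account for pumping losses and reservoir capacity limits, which this sketch deliberately ignores.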
This is just one piece of Germany's Energiewende, the term for Germany's energy-transition agenda: vastly increasing the presence of renewable energies, reducing carbon emissions to 40% below 1990 levels, and other key components. In July 2016, Germany set a record by meeting 78% of a day's energy demand through renewable energy sources. Importantly, Germany sees these targets as integral not only for fighting climate change, but also for stimulating its economy through increasing technological innovation, reducing energy imports, and achieving energy security.
China
By capacity, China is a global leader in renewable energy. This is not surprising given the massive energy needs of its population and industries and the make-up of its government. At the end of 2015, China had installed 145.1 gigawatts (GW) of wind power capacity, overtaking the EU as the global leader in wind power capacity. China is also the world's leader in solar capacity, with a total of 43.2 GW. While this might not be a significant percentage of its total energy needs, it is nearly a 13-fold increase in solar capacity since 2011. This is great progress towards renewable investments, even in a country whose main industries are highly energy-intensive.
While China represents 40% of renewable energy growth globally, the transition has not been perfectly smooth. The International Energy Agency notes that China faces challenges with regard to grid integration and a slowdown in electricity demand paired with electric power overcapacity. All the same, China, a country known for its smog-filled city skies and highly energy-intensive industries, represents a global trend towards increased renewable energies.
The Future of Renewables
Delivering renewable energy is complex. No single type of renewable energy, whether solar, wind, water, or other, is right for every economy, nor is relying 100% on renewables the right answer for every country, especially not without a gradual transition.
There are certainly some concerns with the efficacy and impact of renewables. In many cases, renewable energies cannot respond to real-time energy demands, but innovations in Germany, such as its wind and hydro power integrations, show that technological progress will make renewables an increasingly reliable energy source. Projects like the Clean and Secure Grid aim to leverage existing technology and infrastructure in the United States to more efficiently transmit a mix of energy sources, with an emphasis on renewable energy. Companies like Tesla are making substantial progress in developing energy storage systems that will also bring increased reliability to renewables like wind and solar. In other ventures, Tesla recently acquired SolarCity and has announced a project to help power the entire island of Ta'u in American Samoa. While this is a small island with around 600 residents, the initiative is an important indication of business interest in renewable energy. Federal governments are certainly not the only drivers of the transition to cleaner energy.
Of course, environmental concerns play into the conversation as well. Hydropower, for example, can be ecologically damaging when it disrupts river flow or fish breeding patterns, and in the United States it can be controversial when dams infringe upon Native American land and water rights. Hydropower can, however, be developed in partnership with affected parties. Organizations like the Hydropower Reform Coalition highlight ways in which hydropower dams can be adjusted to limit erosion and restore habitats, an important initiative when considering the expansion of river dams.
These concerns indicate opportunities for more careful growth and development, but they have not put a stop to the spread of renewable energy. As shown with the Paris Agreement, countries around the world are seizing the incredible opportunity to transition to cleaner energy on a global scale. Importantly, the expansion of renewables would drive a massive reduction in greenhouse gas emissions. One IPCC assessment shows that lifecycle carbon dioxide emissions for rooftop solar (which produces higher emissions relative to concentrated solar) are one twentieth of the lifecycle emissions of coal, and one eighth the emissions of gas. When paired with energy efficient technologies, shifts towards cleaner energies would help countries across the world slow climate change and mitigate further impacts of a warming planet. This is a key point of interest for the many countries that have pledged to increase their use of renewables.
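As a quick sanity check on those ratios, the arithmetic below takes the article's stated proportions at face value; the 820 gCO2-eq/kWh coal figure is an assumed round number used only for illustration, not a value from this article:

```python
# Worked example: what the stated lifecycle-emission ratios imply.
coal = 820.0            # gCO2-eq per kWh, assumed round figure for coal
solar = coal / 20       # rooftop solar: "one twentieth of ... coal"
gas = 8 * solar         # solar is "one eighth the emissions of gas"

print(f"implied rooftop solar: {solar:.0f} gCO2-eq/kWh")  # -> 41
print(f"implied gas:           {gas:.0f} gCO2-eq/kWh")    # -> 328
```

Whatever baseline figure is chosen for coal, the stated ratios put rooftop solar more than an order of magnitude below both fossil sources per kilowatt-hour.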
Of course, other points of interest include reduced contamination of water sources and reduced air pollution, and consequently public health benefits. Renewables can also obviate drilling and mining, practices which have further health and environmental impacts.
Around the world, businesses and governments have realized the extensive benefits of expanding renewable energy and are thus increasing their investments in clean energy. Developed responsibly, renewables are the best energy source when it comes to minimal climate impact, making this an important consideration for the many countries that have signed the Paris Agreement. Countries like Sweden and Germany lead the way in proving the feasibility of integrating more renewables and efficient energy policy. Fossil fuels are still an important part of the world economy, but we can certainly expect to see the transition to more renewables in the years to come.
Christina Ospina | December, 2016
PDF version with references available here.
Christina Ospina is a Graduate Fellow at the Climate Institute.
source: http://climate.org/
I’d like to start a discussion on energy topics because I cannot understand what appears to be the strong support of fossil fuels and antipathy towards renewable forms of energy. To start with I will focus on Alex Epstein’s book “The Moral Case for Fossil Fuels”.
Epstein's book starts off on a strong footing with the observation that humanity has benefited tremendously from the availability of cheap energy. He then goes on to say that the cheap energy was overwhelmingly provided by fossil fuels. That is clearly true. Where he goes off the rails, from my perspective, is with his conclusion that we should continue to emphasize fossil fuels and not get distracted by arguments against them or by the new technologies that promise to address some of their problems.
Just because fossil fuels have got us where we are does not mean that we should not look for better alternatives. What I was expecting from his book was an acknowledgment of the obvious issues with fossil fuels and then a discussion of our alternatives going forward. Clearly we would prefer to live without the disadvantages of fossil fuels, and I was hoping for an even-handed discussion on how to do that. Is it better to try to replace fossil fuels where we can with things like solar and wind, or is it better to continue to use fossil fuels while trying to mitigate their environmental costs?
Instead, Epstein largely dismisses the problems with fossil fuels with sweeping statements about how the environment is cleaner than it has ever been. However, it is only cleaner because we have managed to acknowledge and clean up much of the pollution caused by our earlier use of fossil fuels. That is precisely the process we are engaged in now, and the one he hopes to subvert with his book.
I have a hard time reading Epstein's book because most of the arguments are based on emotional manipulation and are one-sided. For example, he tells of a hospital in the Gambia that cannot afford to run its generator past 2pm, and how that resulted in the death of a baby and in a decreased ability to save children in general. This is a story about the importance of cheap energy, and he tells it that way. But it is also a story about the failings of fossil fuels. Fossil fuels require centralization for processing and distribution. This hospital was off the grid, and so its energy was too expensive. Had it switched to solar, the hospital could have remained open many more hours and could have refrigerated its medicines. Had it included wind or small hydro, it could have stayed open into the night. The lesson should have been that all sources of energy involve trade-offs and no source is best in all situations. But it wasn't; it was an emotional story designed to support the idea that cheap energy is good. But nobody disagrees with that point. It's a red herring.
He also tells a sad story about a Chinese farmer's wheat and corn fields, which have been replaced by a tailings pond for a rare earth metals mine. He then attributes this pond to the increase in the use of wind power. Rare earth metals are used in magnets, which are needed to produce wind turbines. But of course magnets are used for lots of things, like the generators used to convert fossil fuels to electricity. There are also a huge number of similar stories of woe that are caused by fossil fuels, such as the Jharia coalfield fires that have burned 37 million tons of coal since they first started in 1916, or the Iraqi and Kuwaiti oil field fires. You can just as easily imagine a story about the devastation that fishermen experienced after the Exxon Valdez or Deepwater Horizon spills, but those stories are left untold.
Without an honest discussion of the trade-offs between fossil fuels and their alternatives, I was left wondering about the actual point of this book. It seems like a full-throated promotion of fossil fuels. But who does that? Does he really believe that fossil fuels need his help? In 2014, when he wrote the book, oil, gas, and coal company stocks combined were worth nearly $5T, and fossil fuels had been ascendant for 150 years. Was he really worried that they would be knocked off their perch by wind turbines without his help? This is the part I really don't understand.
But apparently he has something to worry about. Fossil fuels are starting to lose in the market. The share of energy provided by coal is rapidly falling and the share provided by wind and solar is steadily increasing. King Coal is largely being killed by natural gas, but wind and solar have made significant inroads because they are often cheaper than their alternatives. Wind and solar are intermittent and so cannot yet replace fossil fuels, hydro, and nuclear completely, but they do offer a cheaper and cleaner supplement. The current grids were designed to be tolerant of the drop-out of a large (>=1GW) base load plant, and so can accommodate significant intermittent sources. At least 20% of our supply can come from intermittent sources, and that rises to 40% if grid-level battery storage becomes practical.
From an economics perspective, it seems like the only defensible position here is to let the market decide which is best once the true costs can be accounted for by the market. For that to occur, all subsidies should be eliminated. Some say that renewable energy sources are disproportionately benefiting from subsidies, but fossil fuels receive a tremendous amount of subsidies, and have for a very long time. Furthermore, fossil fuels are protected from paying for their rather extensive externalities. Of all the subsidies, the ones received by the fossil fuels industry are the least defensible.
So I guess my question is: as economists shouldn’t we be anti-subsidy rather than pro fossil fuels or anti renewable energy? If we are pro fossil fuels, aren’t we fighting the market? | https://academy.saifedean.com/forums/discussion/energy-from-an-economics-perspective/ |
Sources of alternative energy essays: Since the discovery of petroleum in the 1860s, fossil fuels have become the foremost source of power in the world; for over a century after their inception, their availability was very seldom called into question. The essay was supposed to be about the topic 'alternative sources of energy' and not about the topics included within it (I am not such a good orator; it's the same everywhere, but since this is a blogspot termed 'my essays', I thought of mentioning it). Alternative energy sources essays: over 180,000 alternative energy sources essays, term papers, research papers, and book reports; 184,990 essays and term and research papers available for unlimited access. Advantages of renewable energy resources, environmental sciences essay: this does not necessarily reflect the views of UK Essays; most renewable energy sources are more environmentally friendly. The solution to our future energy needs lies in greater use of renewable energy sources for both heat and power; nuclear power is not the solution.
Renewable energy sources are very clean sources of energy; however, there is pollution associated with the production process, materials, and facilities used to extract the energy. Primarily, alternative energy is generated from the sun, wind, water, and geothermal heat, which are abundant and renewable sources on Earth. This free environmental studies essay on energy dependence and renewable energy sources is perfect for environmental studies students to use as an example.
Renewable energy has various sources: solar energy, tidal energy, biomass energy, geothermal energy, hydroelectric energy, wind energy, etc.; each has a different method of containing and supplying energy. The majority of oil and gas sources are concentrated in certain regions, many of which are getting more technically challenging and more expensive to reach, whereas renewable energy is domestic: it provides security of supply, helping a nation reduce its dependence on imported sources. IELTS writing task 2 / IELTS essay: you should spend about 40 minutes on this task. The government should make more efforts to promote alternative sources of energy; to what extent do you agree or disagree with this opinion? Below are essays on 'alternative source of energy' and 'how can alternative sources of energy be harnessed' from Anti Essays, your source for research papers, essays, and term paper examples. Individuals must conserve and find alternative sources of energy in order to have the energy they will need for the future. Popular essays: The Barber's Trade Union summary.
The energy sources that produce the most energy quickest are coal and nuclear; both produce energy much faster than any other energy source. The major downside to these types of sources is the sheer amount of pollution they produce and the effect they have on the environment. This is a sample essay on renewable sources of energy in India; sources of energy can be broadly classified into two categories: (i) exhaustible sources and (ii) inexhaustible, or renewable, sources. Renewable Energy is an international, multi-disciplinary journal in renewable energy engineering and research; the journal aims to be a leading peer-reviewed platform and an authoritative source of original research and reviews related to renewable energy. Solar energy, wind power, and moving water are all traditional sources of alternative energy that are making progress.
Energy: a short essay on how to make use of renewable sources of energy, or alternative energy sources. Energy policy today has two choices (paths); one… Alternative energy sources: alternative energy encompasses all those things that do not consume fossil fuel; they are widely available and environment friendly. Alternative energy sources can be described as energy that won't pollute as much as fossil fuel, in addition to being less harmful to the environment as well as humans, lower in cost, and, lastly, coming from sources of which we won't run out.
Sources with essay topics: alternate energy sources and the potential they hold in electricity production; under renewable sources, solar energy requires high… Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Title: hydrogen fuel cells and ethanol. Introduction: over the course of recent decades, scientists have been considering the application of alternative energy sources to save the actual capacity of energy. Therefore, this essay will discuss how to harness alternative sources of energy effectively, with a particular focus on some existing problems of energy, the use of renewable energy, scientific support, and government participation.
The alternative sources of energy essay: Nowadays, when great progress of civilization is taking place, energy is the key to sustainable development. It has always been indispensable to most human activities such as domestic life, agriculture, industry, and transport; now it is a precious good, but… Defining 'energy sources': energy sources and energy types include both the categories we use to group energy sources (like fossil fuels, alternatives, and renewables) and the resources we derive energy from (like oil, solar, and nuclear). The importance of renewable energy, essay sample: alternative energy sources are renewable sources of energy which include wind power, tidal energy, and nuclear.
Renewable energy essay, or any similar topic, specifically for you; do not waste… Another drawback of renewable energy sources is the reliability of supply. What is renewable energy and where does it come from? We all think we know, and some of us may even be able to name some of the most prominent sources of renewable energy, but do we really understand the purpose of each type (such as how and where it is used), how much energy it can generate, or its wider economic benefits? Alternative energy persuasive essay: you will need to gather facts about fossil fuels and the assigned alternative energy; various sources will be used.
People also ask
What are the best alternatives to fossil fuels?
Fossil Fuel Alternatives: Three Renewable Options
1. Wind Power: When the blades on the windmill are exposed to wind forces, they begin to rotate the turbines, converting the kinetic energy in the wind into mechanical energy. …
2. Solar PV: Solar photovoltaic panels are able to convert sunlight into usable energy for your home. …
3. Hydroelectric
Who supports fossil fuels over Renewables?
Pew found that the only group favoring fossil fuel development over renewable energy development was conservative Republicans. But partisanship was only one part of the story: Pew's most significant finding was that age is a significant factor in attitudes toward renewable energy.
Can renewables replace oil and coal as a primary source of energy?
But renewables still have a long way to go to replace oil, coal, and natural gas as primary sources of energy. What will it take to reach that threshold and what obstacles and limits do we need to understand as we transition to a truly sustainable energy economy?
What are the nonrenewable sources of energy?
Another energy source, nuclear energy, is also expensive and is a nonrenewable source of energy. The reason that nuclear energy isn't used is due to renewable-energy policies that push out nonrenewables and promote renewable sources.
2 editions of Energy research policy alternatives found in the catalog.
Energy research policy alternatives.
United States. Congress. Senate. Committee on Interior and Insular Affairs.
Published 1972 by U.S. Govt. Print. Off. in Washington.
Written in English.
Edition Notes
Statement: June 7, 1972.
Classifications
LC Classifications: KF26 .I5 1972i
The Physical Object
Pagination: viii, 814 p.
Number of Pages: 814
ID Numbers
Open Library: OL5391863M
LC Control Number: 72603380
Alternative Energy Sources is designed to give the reader a clear view of the role each form of alternative energy may play in supplying the energy needs of human society in the near and intermediate future (… years). The first two chapters, on energy demand and supply and environmental effects, set the tone as to why the widespread use of alternative energy is essential. Vaclav Smil does interdisciplinary research in the fields of energy, environmental and population change, food production and nutrition, technical innovation, risk assessment, and public policy. He has published 35 books and numerous papers on these topics.
Yoshino, Naoyuki, and Farhad Taghi Zadeh Hesary. 'Alternatives to private finance: Role of fiscal policy reforms and energy taxation in development of renewable energy projects.' Abstract: The main obstacle to the development of Renewable Energy (RE) projects is lack of access to… Are alternative energy sources necessary? Contents: 'Peaks in global oil production rates are imminent' / David Strahan; 'Peaks in global oil production rates will not be reached for many years' / Michael Lynch; 'Energy independence is a reachable goal' / George Allen; 'Energy independence is a myth' / Robert Bryce. What alternative energy sources should be pursued?
Bora Novakovic and Adel Nasiri, in Electric Renewable Energy Systems: Energy resources. Energy resources are all forms of fuels used in the modern world, either for heating, generation of electrical energy, or for other forms of energy conversion processes. Energy resources can be roughly classified in three categories: renewable, fossil, and nuclear. A collection of selected peer-reviewed papers from the Naresuan University International Conference on Energy: World Future Alternatives (NU-ICE), November 30 – December 2, Phitsanulok, Thailand, is dedicated to various aspects of the modern usage of alternative energy sources and technologies of energy (Pupong Pongcharoen).
Place for the arts
Cestriska ne promak.
Outline development plan for Western Ghats region
Estimating survival and salvage potential of fire-scarred Douglas-fir
Home from sea.
What did he say?
Check-list of the birds of Missouri.
Carrying British mails
Comparative study of supervision in the various Canadian provinces, with a view to determining the optimum load for supervisors of each type.
Collected Poems of Sir Thomas Wyatt
Pub. 68-0600805
Suspicions gate
Submarine pioneer
Nuclear waste and facility siting policy
Sugarpaste Christmas Cakes
Technologies: energy supply technologies, alternative sources, which refers to renewable energy (e.g., wind and solar power), and energy efficiency technologies, or those technologies which are employed to enhance energy efficiency (e.g., combined heat and power (CHP), virtual power plants (VPP), and smart meters). It should be noted that…
It should be noted thatCited by: Energy research policy alternatives: hearing before the Committee on Interior and Insular Affairs, United States Senate, pursuant to S.
Res. 45, a national fuels and energy policy study, Ninety-second Congress, second session June 7, There are other alternatives to our typical energy sources that are not renewable. Although these are “alternative energy” rather than “renewable energy”, they use the energy we have more efficiently than older technologies.
In doing this, they help us make our existing energy supplies last longer and give us more time beforeFile Size: KB. “Public Policy: Politics, Analysis, and Alternatives is a great intro to public policy book that has a heavy focus on environmental policy and does an excellent job covering market failures and reasons for government intervention.
The thematic approach is great for an academic class that stresses objective research into public problems and. Alternative energy sources show significant promise in helping to reduce the amount of toxins that are by-products of energy also help to preserve the natural resources.
Relevant answer. While Dr. Xu Wang has an environmental engineering background, he has presented his extensive capacities and research interests in conducting multidisciplinary investigations on urban water systems in the context of sustainability, as well as to assess and develop both technological and management strategies for promote Energy research policy alternatives.
book neutrality, carbon reduction, and resource efficiency in the water. Presents an overview on the different aspects of the energy value chain and discusses the issues that future energy is facing.
This book covers energy and the energy policy choices which face society. The book presents easy-to-grasp information and analysis, and includes statistical data for energy production, consumption and simple formulas.5/5(1).
PDF | This is the 6th edition of the text Public Policy, with Scott Furlong. It has a copyright date and was released in May of | Find, read and cite all the research you need on. A clean energy revolution is taking place across America, underscored by the steady expansion of the U.S.
renewable energy sector. The clean energy industry generates hundreds of billions in economic activity, and is expected to continue to grow rapidly in the coming years.
Part 1 of this series described the development and various uses of US government estimates of the social cost of carbon, including concerns with their recent use in taxes and subsidies. Introduction.
Environmentally conscious individuals will find the most up-to-date information in this new annual summary. Important facts about companies and universities conducting research in alternative energy areas, government research subsidies, statistics on global use of alternative energy systems, and production of ethanol and hybrid fuels are included.5/5(1).
Passive and Low Energy Alternatives I presents the proceedings of the First International PLEA Conference held in Bermuda on Septemberwhich aims to establish an international forum to report on the developments in the many related topics covered in this fast growing area of global concern that effects all of mankind.
She is the author of two books, Land, Stewardship and Legitimacy: Endangered Species Policy in Canada and the United States and The Canadian Environment in Political Context. Her main areas of research include wildlife conservation, Canada. Alternative Sour ces of Energy— An Introduction to Fuel Cells By E.A.
Merewether Abstract Fuel cells are important future sources of electrical power and could contribute to a reduction in the amount of petro leum imported by the United States. They are electrochemi cal devices similar to a Cited by: 2. The Need for Alternative Energy Introduction.
Since time immemorial, energy has been a very vital component of human existence. Man has for a long time relied entirely on fossil fuels for all their energy needs. The first fossils fuel used by ancient being coal that was commonly used during the industrial revolution across the globe.
James Beshara is a startup founder and investor in San Francisco, CA. After developing a heart condition in from excess caffeine consumption, James became fascinated by the science and research backing up alternatives to caffeine for energy, productivity, and focus.
His new book is 'Beyond Coff. The research will see if carbon can be more efficiently produced if a sufficient flux of neutrons is also present in stars’ carbon-producing regions.
Physicists have detected superconducting currents — the flow of electrons w/o wasting energy — along the exterior edge of a superconducting material. Research developing the renewable, carbon-free technologies required to realize a sustainable future energy system.
Transparent solar cell from Nanowires and graphene, Energy Futures, Autumn Credit: Stuart Darsch. Making a remarkable material even better. Transparent aerogels for solar devices, windows. Renewable energy and carbon pricing.
The research in alternative energy sources utilization is in progress in Russia as well. The Government Executive Order of the Russian Federation by 28 August № approved the energy strategy in Russia for the period up to The order is focused on necessity of using renewable sources of energy and local types of by: 5.
Get this from a library. Product liability and small wind energy conversion systems (SWECS): an analysis of selected issues and policy alternatives. [Robert J Noun; Solar Energy Research Institute.; United States. Department of Energy.]. “The explosive development of technology was analogous to the grown of cancer cells, and the results would be identical: the exhaustion of all sources of nourishment, the destruction of organs, and the final death of the host body.In China’s 11th Five Year Plan, its broad renewable energy policy goal is to “accelerate renewable technology advancement and industrial system development specifically supporting the technology breakthrough and industrialization of bio-liquid fuel, wind power, biomass power, and solar power.”This goal is supported by a series of suggested measures and incentives, shown in Tables Alternative Energy Sources Implementation of Renewable Energy Sources in the State of California alternative technologies developed and currently under research, promoted will be those with the highest applicability to the state.
Wind, solar and geothermal energy sources. | https://daxesituxacawy.fdn2018.com/energy-research-policy-alternatives-book-7184lg.php |
While Covid-19 remains the most immediate threat to our health and the global economy, there are still some who argue that the so-called 'climate emergency' should remain the biggest concern among legislators.
This is certainly likely to remain the case post Covid-19, while it can also be argued that the subsequent economic fallout may also encourage governments to shift their approach to fossil fuels and increase investment in renewable energies.
We’ll explore this further below, while asking whether Covid-19 could ultimately be the trigger that causes renewable energies such as wind and solar to supersede fossil fuels.
Just How Dominant are Fossil Fuels?
For the time being, it’s fair to surmise that fossil fuels continue to dominate the energy sector in developed economies such as the U.S. and the UK.
This is despite considerable reductions in coal consumption, which have dragged down global usage (outside of emerging economic behemoths such as China and India).
However, this development has created an interesting and conflicted market climate, as while the use of fossil fuels in the U.S. alone fell to its lowest levels since 1902, energy sources such as coal, gas and oil still account for 80% of the total energy market.
This trend is not expected to change in the Western world for the foreseeable future, particularly with lucrative industries such as manufacturing and transport still heavily reliant on fossil fuels.
These industries alone account for 21% and 14% of the world’s total harmful emissions, and it’s thought that they couldn’t currently be sustained in their entirety by renewable energy sources.
Are We Seeing a More Seismic Shift Towards Renewable Energy?
Of course, this shouldn't detract from the fact that many western nations are making a definite (if gradual) shift towards renewable energy usage, with the UK pledging to achieve carbon-neutral status by the year 2050.
There's also a growing emphasis on renewable energy in some developing economies, with a number of South Asian economies investing increased amounts in developing inexhaustible and replenishable sources such as wind, solar, hydro and even biomass. Of course, this has as much to do with location as it does intent, with most South Asian nations having direct access to variable climatic conditions and several renewable energy sources.
It’s also fair to say that the markets surrounding energy sources such as oil have also experienced significant demand destruction, partially as a result of Covid-19 and also an excess of supply that has spiralled out of control in recent years.
The latter issue has persisted for several years now, despite attempts by OPEC to cap production levels across the globe and stabilise prices. With the coronavirus outbreak having now triggered a global decline in demand, it's no surprise that Tickmill have reported that the price of WTI Crude oil has sunk as low as $24.72 per barrel.
With some estimates suggesting that it may take more than a year for oil prices to return to manageable 2019 levels (and even longer to reach the previous level of demand), it would not be a surprise if governments and businesses took the opportunity to invest in renewable energy sources.
In this respect, the fallout from Covid-19 and the disruption caused to the status quo could well help usher in change in the global energy markets, while increasing demand for relatively advanced renewable sources such as wind and solar. | https://azbigmedia.com/business/will-renewable-energy-supersede-fossil-fuels-in-a-post-coronavirus-world/ |
In a much anticipated international conference on Climate Change, Pakistan assured the international community that it will shift to 60% clean energy and convert 30% of its overall vehicular fleet to electricity by 2030.
Addressing the US-initiated Leaders Summit in Washington, Special Assistant to the Prime Minister on Climate Change Malik Amin Aslam also urged developed nations to fulfil their commitment to help others make the transition from carbon-based to clean energy. "Now, the world needs to do more on climate finance. It needs to deliver climate finance for countries in energy transition . . . honour the commitment of $100 billion a year," he said.
Climate Summit 2021
Starting on World Earth Day, the Leaders Summit was a two-day virtual event attended by 40 countries, and it ended with big pledges from the world's major carbon emitters: China, the US, India and Russia. "Nations that work together to invest in a cleaner economy will reap the rewards for their citizens," said US President Biden, who also made the biggest pledge by promising to cut his country's carbon emissions by 50 to 52% from 2005 levels.
Prime Minister Yoshihide Suga raised Japan's target for cutting emissions to 46% by 2030, up from 26%. Chinese President Xi Jinping said China expects to achieve net-zero emissions by 2060.
Pakistan Energy Scenario & Climate Change
Currently Pakistan gets 64% of its electricity from fossil fuels, with another 27% from hydropower, 5% from nuclear power and just 4% from renewables such as solar and wind. However, "Pakistan is really at the forefront of this climate disaster." Malik Amin Aslam rightly pointed out that Pakistan contributes less than 1% to global emissions, yet it is among the top 10 most vulnerable countries because of its topography and geography.
He also said Pakistan faces Himalayan glaciers melting in the north, arid zones experiencing heat waves like never before, and cyclones, rising sea levels and floods in the south and the plains. Moreover, in recent years the frequency and intensity of these disasters have gone up, affecting 220 million people.
A Dubious Pledge
However, it seems very difficult for the commitment Pakistan made at the Leaders Summit to be fulfilled in the stipulated time. The commitment merely restates the targets of the government's Alternative and Renewable Energy (ARE) Policy 2020 and Pakistan's first Electric Vehicle Policy, yet specific and concrete measures on behalf of the government to attain these targets have yet to see the light of day.
There is no roadmap or planning to shift major energy infrastructure to renewables, especially wind and solar units. Pakistan has never carried out any study of how much money is needed for this infrastructural shift or what the cost of electricity per unit would be after this transition. There is no denying that alternative power production units are being installed, but the pace is very slow.
As an example, Pakistan started working on wind power in 2006 and began generating wind electricity in 2012, yet in 15 years it has built only 1,248 megawatts of capacity, less than 2% of total production. And while Baluchistan offers massive wind corridors, no transmission lines have been planned to supply that electricity across Pakistan.
Reliance on Coal
On the other hand, while the world is pledging to reduce its reliance on coal and other harmful fuels for power generation, Pakistan is instead celebrating a number of coal power projects planned within the scope of CPEC. The country has commissioned seven Chinese-funded coal plants as part of the sweeping China-Pakistan Economic Corridor project, which are expected to add up to 6,600 megawatts of capacity to the grid. Coal is widely known as a "dirty fuel", and its combustion leads to CO2 emissions, a major cause of global warming.
Though China has also funded new renewable energy, it is at a smaller scale, with six wind farms set to generate just under 400MW, a 100MW solar project and four hydropower plants expected to produce 3,400MW by 2027. So it looks very difficult for Pakistan to fulfil the pledge it committed to at the Leaders Summit on Climate Change.
Take your position:
- There's no global warming; it's just natural climatic variation. And if there is a problem, we won't be affected much.
- We should do nothing because the cost of dealing with global warming is far higher than the potential damage.
- We should perhaps sign on to some international agreements, but make only minimal financial commitments for now.
(This question is not counted in the answers for any candidate.)
- We should establish a market-based solution for excess carbon emissions. The Kyoto Protocol should require developing countries' participation to make the solution work.
- Overuse of fossil fuels causes serious problems that we should deal with immediately. And green energy is absolutely a good option for us to stop global warming.
Position
- Oppose
- Support
Importance
- Very
- Somewhat
Background
Green energy includes natural energetic processes that can be harnessed with little pollution. Green power is electricity generated from renewable energy sources.
Anaerobic digestion, geothermal power, wind power, small-scale hydropower, solar energy, biomass power, tidal power, wave power, and some forms of nuclear power (ones which are able to "burn" nuclear waste through a process known as nuclear transmutation, such as an Integral Fast Reactor, and therefore belong in the "Green Energy" category). Some definitions may also include power derived from the incineration of waste. (Source: Wikipedia)
- Official Democratic Position
Democrats have supported increased domestic renewable energy development, including wind and solar power farms, in an effort to reduce carbon pollution. The party's platform calls for an "all of the above" energy policy including clean energy, natural gas and domestic oil, with the desire of becoming energy independent. The party has supported higher taxes on oil companies and increased regulations on coal power plants, favoring a policy of reducing long-term reliance on fossil fuels. Additionally, the party supports stricter fuel emissions standards to prevent air pollution. (Source: Wikipedia)
- Official Republican Position
The Republican Party has called for a more aggressive strategy for the development of renewables, especially solar, wind, hydro, tidal, geothermal, and biomass energy. Rather than focus on federal research, however, the party believes in market-based research on the technology and partnerships between emerging renewable energy industries and current energy industries.
MUMT 202 Fundamentals of New Media
This course provides a theoretical and practical introduction to selected areas of music technology. Topics include digital audio and sampling theory, MIDI and sequencing, audio editing and mixing, elementary sound recording, score editing software and current areas of research interest.
MUMT 203 Introduction to Digital Audio
Marcelo Wanderley
An introduction to the theory and practice of digital audio. Topics include: sampling theory; digital sound synthesis methods (additive, subtractive, summation series); sound processing (digital mixing, delay, filters, reverberation, sound localization); software-based samplers; real-time sound processing; interactive audio systems. Hands-on exercises are included.
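As a toy illustration of the first synthesis method listed above (additive synthesis), and not actual course material, the following Python sketch sums a few sine partials into a single waveform; the sample rate and partial amplitudes are assumptions chosen for the example:

```python
import math

SAMPLE_RATE = 44100  # Hz, a common assumption

def additive(partials, duration=0.01):
    """Sum (frequency, amplitude) sine partials into one waveform."""
    n_samples = int(SAMPLE_RATE * duration)
    return [sum(amp * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                for freq, amp in partials)
            for n in range(n_samples)]

# A 440 Hz fundamental with two progressively weaker harmonics.
samples = additive([(440, 1.0), (880, 0.5), (1320, 0.25)])
print(len(samples), samples[:3])
```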
MUMT 250 Music Perception and Cognition
Stephen McAdams
Basic processes by which the brain transforms sound waves into musical events, dimensions, systems and structures and the processes by which musicians imagine new musical sounds and structures and plan movements that produce music on instruments.
MUMT 301 Music and the Internet
Technologies and resources of the Internet (access tools, data formats and media) and Web authoring (HTML) for musicians; locating, retrieving and working with information; putting information online; tools for music research, music skills development, technology-enhanced learning, music productivity, and promotion of music and musicians. Evaluation of Internet music resources.
Prerequisite: MUMT 201 or MUMT 202
MUMT 302 New Media Production I
Project-based course introducing techniques for producing and manipulating music and sound for new media applications. Synthesis techniques including FM, granular, and physical modeling. Audio effects including delay, reverberation, dynamics processing, and filtering. Audio compression, HCI, and MIR concepts. Small projects using Max/MSP and a final project of greater scope.
Prerequisite: MUMT 202
MUMT 303 New Media Production II
Project-based course building on material of MUMT 302. Advanced audio processing with general considerations of aesthetics in sonic art. Introduction to theory and practice of digital video processing using Jitter. Projects on aesthetic and conceptual aspects of sound and video art practice, multimedia projects combining audio and video processing.
Prerequisite: MUMT 302
MUMT 306 Music and Audio Computing I
Gary Scavone
Concepts, algorithms, data structures, and programming techniques for the development of music and audio software, ranging from musical instrument design to interactive music performance systems. Student projects will involve the development of various music and audio software applications in Max/MSP and C++.
Prerequisites: Previous digital audio and object-oriented programming experience.
MUMT 307 Music and Audio Computing II
Gary Scavone
Theory and implementation of signal processing techniques for sound synthesis and audio effects processing using Matlab, C++, and Max/MSP. Exercises will focus on the development of programming skills for the implementation of real-time audio applications.
Prerequisite: MUMT 306
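For readers unfamiliar with the kinds of exercises such courses involve, here is a minimal sketch of one classic audio effect named above, a feedback delay line. This is a hypothetical illustration rather than course material; real-time versions would typically be written in C++ or Max/MSP as the description notes:

```python
# Minimal feedback delay (echo): y[n] = x[n] + g * y[n - D].
def feedback_delay(x, delay_samples, feedback=0.5):
    y = [0.0] * len(x)
    for n in range(len(x)):
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + feedback * delayed
    return y

# A single impulse produces a decaying train of echoes.
impulse = [1.0] + [0.0] * 9
print(feedback_delay(impulse, delay_samples=3, feedback=0.5))
# -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

Each echo is the previous one scaled by the feedback gain, so gains below 1.0 give a stable, decaying tail.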
MUMT 501 Digital Audio Signal Processing
Philippe Depalle
Discrete-time signal processing concepts and techniques. Discrete-time fourier transform and series, linear time-invariant systems, digital filtering, spectral analysis of discrete-time signals, and the z-transform.
Prerequisite: MUMT 307
MUMT 502 Senior Project in Music Technology
All Music Technology Professors
Independent senior project in Music Technology. Students will design and implement a medium-scale project in consultation with their advisor. Evaluation will be based on concept, background research, implementation, reliability, and documentation. | https://mt.music.mcgill.ca/undergraduate_courses |
workshops and seminars
For more information and booking, please get in touch.
The spatiality of sound in immersive media environments
This workshop series shall enable participants from all disciplines to understand how experience, verbalization, and theory can help us reflect on our creative practice and processes in a constant iterative and comparative loop, and to think of spatial aesthetics as a daily practice of world making inherent to every compositional act, with sound in general, and with loudspeaker environments in particular.
Gerriet K. Sharma, Johannes Scherzer
Experiencing sound and space
Our capacity to analyze our aesthetical experience is a key prerequisite for spatial thinking and the production of space and spatial sound phenomena. This workshop focuses on how we experience sound and space through guided listening sessions and an introduction to various methods to develop the skills of attentional listening and experiential analysis. The participants will gain a deeper understanding of how the experience of space is shaped by its auditory qualities, and how these qualities can be changed.
Gerriet K. Sharma, Johannes Scherzer
Verbalizing sound and space
Our ability to verbally express our experiences of sound and space is the foundation for collaborating with peers and for our creative practice. The workshop participants will learn about various approaches to verbally document their experiences of sound and space to construct a shared vocabulary. Based on the documentation of situational listening sessions, the result will be compared and complemented with existing conceptual frameworks. The goal is to raise awareness for verbalization, its still inconsistent nature, and give agency to the participants to engage in the much-needed discourse.
Gerriet K. Sharma, Johannes Scherzer
Thinking of sound and space
Our thinking of sound and space is largely determining our actions in the production of space. In this workshop, participants will be introduced to relevant thoughts and theories intersecting with sound and space. We will be looking at the spatial aesthetics of sound from different perspectives, including music, film, architecture, theater, VR & XR, scenography, atmosphere, phenomenology, communication, and linguistics. To keep us grounded in spatial aesthetics in sound, we will link the outlined theoretical framework to our actual experience, and to our capability of verbalizing it. Finally, we will outline the implications for our spatial practices.
Gerriet K. Sharma, Johannes Scherzer
Sound in the production of space
Knowing the tools for the production of space is essential for executing and implementing ideas and concepts. This workshop will provide an overview of methods, techniques, tools, and technologies to create spatial sound phenomena and co-produce space through sound. Participants are invited to experiment with auditory interventions in some areas of the building. Finally, we will reflect on our spatial practice and discuss concrete design strategies that help guide the creation of spatial sound phenomena from the first idea to the moment of presenting it to an audience.
Gerriet K. Sharma, Johannes Scherzer
Reflection and evaluation of spatial aesthetics in sound
A strategic and structured approach to the reflection and evaluation of the spatial aesthetics in sound is helpful for both, the beginning and the finishing of a project. Closing the circle to the series’ first workshop about experiencing sound and space, we will introduce methods for the qualitative analysis of spatial aesthetics in sound in the context of experience, verbalization, theory and practice.
Gerriet K. Sharma, Johannes Scherzer
Sound installation art — intermedial spatial composition
Interest and activity in the area of sound installation has increased dramatically over the past decade. Such an increase in involvement on the part of artists may be seen simply as a natural tendency for them to fuse various artistic areas within their exploration of technology or, even more simply, as their direct reflection of our multimedia-oriented society. Within this seminar, practical and theoretical approaches will be introduced, considered, and put into individual practice.
Gerriet K. Sharma
Beyond the visual. Music, sound and architecture —towards fluid spatiotemporal environments
"Beyond the Visual" is a research curriculum for the investigation of spatiotemporal aesthetics, at the interface between architecture and music, in regard to perception and creativity and design/composition. With with architect and space theorist Constantinos Militadis.
Gerriet K. Sharma, Constantinos Militadis
Sound scenography - auditory communication in staged spaces
One of the primary purposes for staging spaces like museum exhibitions, brand exhibitions, flagship stores, etc. is communication on the various levels, including the factual, the contextual, the emotional, the imagined, and the bodily felt. Even in ordinary architecture, gardens, shopping malls, streets, restaurants, or doctor’s practices, various communication dimensions are implied. Also, the sonic dimension of immersive media environments such as virtual reality or 360° film can be analyzed and approached from the perspective of communication.
However, in the practice of staging space, we usually consider the visual part of communication first. But what do we know about communication through sound? What role plays our auditory experience, and how can we use the potential of sound in the context of the scenographic practice, for staging narrative spaces, for communicating facts, contexts, and narratives, for the construction of meaning?
This workshop series brings together ideas and knowledge useful to everyone involved with the production of narrative space. We will address the problems we are often facing with sound, offer new perspectives to think differently about the role of listening, and discover what we can do to craft narrative spaces that involve and nourish our auditory experience in a meaningful way.
As the approach to auditory communication applies to the staging of any space we can experience with our senses, this workshop series is relevant to professionals from many fields, including media production, exhibition making, architecture, interior design, live communication, tourism, health and rehabilitation, urban planning, gardening, film making, or virtual and mixed realities.
Johannes Scherzer
Composing with sculptural sound phenomena in virtual auditory environments
Electronic music, computer music, and sound design have so far opened up space as a compositional dimension mainly through multi-channel loudspeaker systems configured as an acoustic outer shell around the audience. The seminar, on the other hand, sees itself as an introduction to an aesthetic practice that composes space by taking it as a prerequisite for sonic-sculptural material, thereby artistically incorporating performance environments of varying complexity.
In the course of the seminar, electro-acoustic space-sound phenomena, plastic sound objects, which occur in certain sound (re)production processes, will be theoretically motivated and experienced in practice with regard to their acoustic foundations and artistic po(e)tential.
Gerriet K. Sharma
Reflections on sound-image relations: strategies, experiments and fears of loss
Image-sound relations in film, television, installation, and video games are the subject of this field of investigation. Starting with early attempts at audiovisual staging (Lascaux), a historic overview is established to derive strategies and explain perceptual principles for the analysis and practical use of audiovisual media today.
Gerriet K. Sharma
Spatial theories and spatial aesthetics in immersive arts
Within the past 10 years, immersion has become a frequently used term in concert venues and studios with multichannel loudspeaker arrays, in the context of audio-visual caves, VR, AR, and the fine arts. Manufacturers of loudspeaker systems as well as the gaming industry are using the term as a feature that heralds a new step in "multi-media" experiences, and academia is claiming a kind of expertise in this field based on years of scientific experimentation and avant-garde practice.
This seminar will provide different theories on space and spatiality from the past 80 years salient in music and the arts. Different artistic and philosophical approaches will be introduced and discussed to show that space has indeed become one of the most important subjects that bears a polyvalent structure we can define, compose, defend and imagine. | https://spaes.org/education |
Primož Trdan: Ensemble for the new millennium
The institution of the contemporary music ensemble contains a certain contradiction of Western art music in its last phase, the new music of contemporary compositional practice. The idea that new music needs a special performing unit, and that living composers create pieces for a handful of specialized virtuoso performers who can appropriately play their most complex works, shows how narrowly new music has surpassed the technique and aesthetics of past compositional practice; at the same time, it only tightens the broader ideology of the 19th century. That was, among other things, a century of turning towards the specialization of musical tasks. Once, it was customary for the author of music for a larger ensemble to actively participate in the performance, lead it, play a demanding part as a soloist, and partially improvise it; the 19th century, however, slowly turned composition into more isolated work. The composer's product became a written, designed work, no longer so much a musical performance. As a result, notation became more precise, the composer grew increasingly distant from the audience, and it did not take long for musical taste to become more and more focused on older works.
New music is about a hundred years old. In 1918, Arnold Schönberg founded the Society for Private Musical Performances, which organized properly prepared performances of recent works within a closed circle. What followed were specialized festivals, societies and workshops, and, since the 1970s, contemporary music ensembles with dedicated programs, exclusive commissions for composers, and concert cycles of contemporary music. In recent decades, ensembles have gradually recognized the need to unravel this framework. Many new approaches are emerging and changing the way ensembles collaborate with composers. They increasingly try out working material together with the musicians, establish improvisational protocols within compositions, and push the boundaries of authorship toward the performer. Strict aesthetic refinement is being complemented by some most welcome breadth, and the artistic directors of ensembles are looking toward other genres: the New York ensemble Bang On A Can interprets Sonic Youth's music, while their fellow New Yorkers Alarm Will Sound arrange Aphex Twin's tracks.
Šalter Ensemble comes from a slightly different context. It grew out of a desire to bring musicians and scenes together and to work collectively, and in this sense it is part of the trend of large improvisational ensembles that has, in recent decades, spread from groups such as the London Improvisers Orchestra and the Berlin Splitter Orchestra to new interpreters of the big band jazz tradition. Since 2017, the ensemble has comprised twelve musicians from Switzerland, Slovenia, Croatia, and Serbia. The initiator and artistic director, the Swiss accordionist Jonas Kocher, together with Davorka Begović, Bojan Đorđević and Tomaž Grom, has assembled an ensemble of different musical backgrounds and generations, in which the experiences of free improvised music, jazz and rock scenes, folk music practitioners, and performers of the classical tradition meet. What the musicians have in common is the skill of improvisation, though this is not the whole of their music-making process. The ensemble plays pieces by four of its members (Bertrand Denzler, Robert Roža, Roko Crnić, and Tomaž Grom), who each create a working concept which, in cooperation with their colleagues, they test, develop, adapt through experience, and improve. The improvisational experience is essential here: it allows a special way of listening and responding, playing, and communicating. Listening to various concerts and recordings of Šalter Ensemble clearly confirms that this is not total improvisational openness; rather, carefully developed compositional ideas unfold before us, frequency movements in time and space that bear a clear authorial signature. At the same time, the performance of the pieces is driven by a different spontaneity and dynamic among the musicians, one usually absent from classical contemporary music ensembles.
In many ways, Šalter responds to the dilemmas of contemporary music ensembles, but not from a position of curating from the top of a hierarchy. Reflection on the nature of creating new music, on how author and performer cooperate, and on new musical aesthetics is already embedded in the wide-ranging selection of participating musicians, who are at once great sound-seekers and sensitive people. This kind of reflection is built "from the bottom up". In this, Šalter is an active human community in miniature: an ensemble for the new millennium.
Primož Trdan
graphic design: Matej Stupica and Nejc Prah | https://www.sploh.si/en/reflection/research_archiving/primoz-trdan_ensemble-for-the-new-millennium
The Butchers Production creates content for Film, Live Experience, Music and Commercials.
The Butchers' signature is a unique visual language characterised by their conceptual approach to live performance and film production. Experimentation and attention to detail, combined with their distinctive philosophy of aesthetics, are at the core of their artwork's identity.
The Butchers is a Creative Studio dedicated to creating self-initiated art projects.
Working in collaboration with architects, designers, musicians and programmers, The Butchers’s work combines art, design and technology. Their language is defined by their concept of space-perception and their ability to create emotional and engaging experiences with the audience. | https://salon.io/thebutcherproduction/studio |
In recent years, various forces within and outside the music industry (record producers, hardware and software suppliers, and Internet service providers) have created techniques and tools that allow recording studios in remote locations to be networked in ever more complex and intimate ways. The effort behind the creation of the 'network studio' is, in part, the result of an overall progression in the historical development of the tools, architectures and practices of the contemporary recording studio. Studios do not exist in a musical or cultural vacuum, however: traditionally, music scenes, session musicians, and local aesthetics and practices have played an important role in the development of specific approaches to recording and have had an influence on the resulting sounds. But the rise of the network studio raises fundamental questions about such relationships and about the role of space and place in sound recording and, in this regard, can be considered as an expression of larger tendencies described within various theories of globalization. This paper addresses how the emergence of the network studio, with its emphasis on standardized technologies and practices and its reliance on the virtual space of network communications, may have an impact upon and/or work alongside conventional recording studio practices. | https://ir.library.carleton.ca/pub/2134
The challenge of animaBIRIKI is to create an innovative animation project that takes into account the sensibilities of younger children, and that is capable of combining a refined aesthetic, aimed at a wide audience, with a clear message of respect for humankind and for nature.
The project feeds on relationships, through workshops and activities that precede and follow on from the creation of the animated film.
The production of episode one, Biriki and the Rainbow, was an extraordinary experience: the product of a team effort and of all the experience accumulated in our individual backgrounds.
Bruna Ferrazzini’s training as an educator led her on the path to building a wordless language that addresses small children with respect and attention, following on from a decade of experience already gained with the Biriki character as part of an education and communication campaign.
Ilaria Turba, photographer and artist, has mainly followed a path of artistic experimentation, merging practices and skills along her personal journey. In particular, she has developed a participative method of group creation.
Both authors oversaw the participatory and outreach side of the project, through workshops, exhibitions and special events created in collaboration with museums, schools and associations.
Anna Ciammitti followed the entire development of the animation, from storyboard to post-production. Her experience in stop-motion animation and mixed 2D techniques was a fundamental ingredient in the project's visual and experimental path.
The capable hands of Manuela Bieri guided us through the woods, in the selection and construction of the materials and objects for the world of animaBIRIKI.
Riccardo Studer recorded, created and assembled a rich range of sounds and noises with the contribution of Alessandro Broggini. He followed every aspect of the sound mix, including the recordings of the characters, brought to life by the extraordinary voice of Eugenia Amisano.
Alessandro Bosetti's fascinating experimentation between language and music adds rhythmic feeling to the compositions written for orchestra, performed by a group of young musicians from the Conservatorio della Svizzera italiana conducted by Francesco Bossaglia. The recordings took place at the prestigious RSI studios (CH).
The film was a production of Swiss production company CINEDOKKE and was co-produced by Walter Bortolotti and Gabriella De Gara from RSI Radiotelevisione svizzera, with the support of Canton Ticino and FilmPlus in Italian-speaking Switzerland.
The Education Office of Amnesty International Italia gave its patronage to the short film. | http://www.animabiriki.com/en/direction-and-staff |
Angoor.
Archipel: Hi! How are you? Tell me about yourself!
Alex & Mohit:
Namaste! Ciao! We are Bare & Kind. Alex (Alexander Folonari) is from Italy and Mo (Mohit Maini) is from India. We are constantly working on developing our sound: working with musicians from different backgrounds, experimenting with machines and software, and the list goes on. Besides that, we recently relocated to Berlin, Germany and are loving it.
A: What do you do in your spare time outside music?
A: I'm usually out in the streets… skateboarding, that is; I really love to skate. Besides that, food is a big aspect of my life. Understanding the chemistry behind what we eat and how it interacts with our bodies is very important to me, as well as cooking and eating!
M: I don't have many hobbies anymore. It's mostly working on music and working my job. I like to be at home or in a quiet environment where conversations can take place. I like to read about our history and how each one of us reached this point in time, our present existence, and how we are all similar; I try to find facts and philosophies that unify us. And I like to eat the food Alex cooks… that I could definitely categorise as a hobby.
“We try to use controlled randomisation, in a way that stays musical and fits the mood of the track. Texture is also very important in the process; textural diversity can really create a sense of reality within music, giving the track incredible sonic depth and volume“
A: Tell me about the experimental scenes you have grown up with?
A: Well, up to two years ago we were living in the United States. The music scene there is definitely different. Some cities, like New York, Detroit and Chicago, have always had a small niche scene for quality dance and experimental music. But yes, moving to Berlin was huge for me, as here there is a much broader interest in not-so-standard music. But experimental music is a big word, as there are many different shades of experimentation in music. There is really interesting music coming from all parts of the world, and Berlin is one of those places where I feel that if you have talent and love for what you're doing, you can be as experimental as you like and people will be open-minded about it. I mean, it has to be the right environment, of course!
M: India, a country with an ancient connection to music, is deeply rooted in its musical traditions. Indian music, especially folk music, has been my inspiration growing up and it still is. India has given a lot to the musical world! However, when it comes to experimental music… there is almost none. Even classical and folk music are dead. Now you hear the generic electro-infused pop music from Bollywood movies. Music has died due to piracy, and now the only music you get is music released with a movie. So the individual artist and their unique ideas are a thing of the past. In Indian society, music is now considered a mere hobby. Nobody takes it seriously, and the ones who do are forced to conform to what sells. Experimentation only alienates them from the micro music scene in India. I personally have a few friends who are trying to step out of the norm, but they are handicapped for many reasons, including technological ones. For example, the price of most music equipment is almost twice as much as in the US or EU, due to customs and so on. Availability of and support for music equipment, especially electronic, is nearly non-existent. So to sum it up, sadly, India is not a place for experimentation in music at the moment. In the recent past there has been no major development in Indian music; even music from other parts of the world that gets popular is watered down, generic, happy 'clap your hands' music. But there are people in India, like my childhood friend Roby (ThatBoyRoby), who are working against many odds to experiment with their music. This gives me hope that soon there will be an experimental music scene in India.
A: You have a very particular signature to your productions. What is your approach to sound design?
Alex & Mohit: We are usually trying out new techniques, mainly to do with "modular" synthesis, sampling and drum machines, in the sense that it is very important for us to be able to manually sculpt sounds from scratch. Many of our most inspired creations come from extended and at times randomised modulation of many parameters, with interrelated rhythmic variations. We try to use controlled randomisation, in a way that stays musical and fits the mood of the track. Texture is also very important in the process; textural diversity can really create a sense of reality within music, giving the track incredible sonic depth and volume. A favourite of ours is the tape machine and the tube psychoacoustic enhancer; tape and tube saturation/distortion really is magic when trying to change the overall texture and sonic character of a sound! With abstract sounds, you can create fictional worlds of sound, and that is at times hard to mix with the more concretised perception of music found in melody and harmony, which are also fundamental for creating emotional tonalities. It's really interesting to take very strict musical ideas out of their ordinary context and include them in a more abstract and surreal world of sound.
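To make the idea concrete, here is a minimal Python sketch of what "controlled randomisation" could look like in code. This is purely illustrative, not the duo's actual patch or tooling: the function name, parameter ranges and quantisation grid are all hypothetical.

```python
import random

def controlled_random_mod(base, depth, steps, lo, hi, quantize=None):
    """Generate `steps` modulation values around `base`.

    Random offsets are scaled by `depth`, clamped to [lo, hi] so the
    result stays in a musically useful range, and optionally snapped
    to a grid (e.g. semitones) to keep the movement tonal.
    """
    values = []
    for _ in range(steps):
        v = base + random.uniform(-1.0, 1.0) * depth
        v = max(lo, min(hi, v))              # clamp: the "controlled" part
        if quantize is not None:
            v = round(v / quantize) * quantize
        values.append(v)
    return values

# Filter cutoff wandering around 800 Hz but never leaving 200-2000 Hz:
print(controlled_random_mod(base=800, depth=600, steps=8, lo=200, hi=2000))

# Pitch offsets quantised to whole semitones, within one octave either way:
print(controlled_random_mod(base=0, depth=7, steps=8, lo=-12, hi=12, quantize=1))
```

The clamp and the optional quantisation grid are what make the randomness "controlled": the modulation can surprise, but it never leaves the range or pitch grid the track is built on.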
“For us this journey represents our growth as sonic explorers“
Another important aspect of our sound design is based around the idea of 'organic synthesis': trying to make synthesisers sound acoustic or organic. We try to achieve the complex textures and richness of acoustic or organic instruments while keeping the in-your-face, bold synthesis sound. This balance really excites us! The Dave Smith Instruments Tempest deserves a mention here. It is supposed to be a drum machine, but the sounds that come out of it are incredible: sometimes so organic and simple, at other times crazy and otherworldly. Thinking of it as a drum machine is just scratching the surface.
A: Please tell us a few words about your album?
Alex & Mohit: Angoor is Hindi for "grape(s)": the essential building block of a good wine. We felt the tracks as a whole were like grapes, and when mixed and listened to in various ways they can yield different results. Angoor is the name of our collaborative international platform, and we are honoured to first present this concept on Archipel.
The album as a whole reflects how we saw the world of contemporary electro-acoustic music back when we lived in the Netherlands. Many tracks are original compositions by a dear friend of ours, Krists Auznieks, who composed and orchestrated about 10 pieces and gave them to us with the freedom to manipulate and interpret from our point of view. The album was produced in our apartment studio, in India, in Italy and in Germany, over about a year. For us this journey represents our growth as sonic explorers. A big part of this growth came with the introduction of electro-acoustic improvisational music: the ability to express clear and concrete musical ideas and contrast them with abstract, micro-organismic sound structures.
Credits:
Krists Auznieks (Primary composer)
That boy Roby (Guitar)
Babaji (Vocals)
Juliette Froissart (Cello and composition)
Ruben Brovida (Sound designer)
Marcello Spagnolo (Jazz drums)
A: How do you get that spirit, that cinematic emotion?
M: It's what sounds good to us, mostly. I am a professional film audio post-production engineer and have worked on 25 films as a Foley engineer/artist. Right now I am working with a prestigious post-production studio in Berlin, so I use some of the techniques I have learned from my working experience. Certain techniques can bring a cinematic feel and unexpected emotions that musical sounds can't achieve: sounds that can take you back in time, trigger memories and engulf you in a sonic world which, in that moment, seems real. We are constantly going around field recording when and where we can. We have hard drives filled with thousands of recordings, most of which we haven't used yet. It's a personal collection that keeps growing. People take pictures when they travel and see something beautiful; we like to record the sound of it. Sonic pictures.
Different spaces have their own sound, and the human ear is very sensitive to reverb. If your sound design can convince the listener of the space your sounds are in, something very special happens.
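As a hedged illustration of this point, the sketch below shows convolution reverb, a standard technique for imprinting the acoustic signature of a space onto a dry signal. This is not the engineer's actual chain: the helper function and the synthetic impulse response are assumptions made for the example; in practice one would record a real room's impulse response.

```python
import numpy as np
from scipy.signal import fftconvolve

def place_in_space(dry, impulse_response, wet=0.5):
    """Convolve a dry signal with a room impulse response (IR).

    The IR carries the reverb "fingerprint" of a space; convolving
    with it makes the dry signal sound as if played in that space.
    `wet` blends the reverberated and dry signals.
    """
    reverb = fftconvolve(dry, impulse_response)[: len(dry)]
    peak = np.max(np.abs(reverb))
    if peak > 0:
        reverb = reverb / peak * np.max(np.abs(dry))  # rough level match
    return (1.0 - wet) * dry + wet * reverb

# Toy example: one second of noise "placed" in a synthetic room whose
# IR is exponentially decaying noise (a crude stand-in for a real IR).
sr = 44100
dry = np.random.randn(sr) * 0.1
ir = np.random.randn(sr // 2) * np.exp(-np.linspace(0.0, 8.0, sr // 2))
out = place_in_space(dry, ir, wet=0.4)
```

Because the ear reads the reverb tail as information about the room, swapping the impulse response is enough to "move" the same sound from a closet to a cathedral.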
A: Who are your biggest influences, and why?
A: When it comes to dance-oriented sounds: Ion Ludwig, Edward, Pepe Bradock, Melchior and Ford of course; those guys just have something special, you know? I find Cleymoore and the Pluie/Noir Collective always have a fresh output. Shcaa, The Marx Trukker, Isherwood: I really love that style of experimental house music! Pheek and Archipel Musique have been an inspiration for me for a long time as well!
M: My influences start from my childhood. Chamkila, Kuldeep Manak, Asa Singh Mastana and other folk artists really influenced me. Then oldies like The Chemical Brothers also had a great impact on the development of my musical ear. But for 'dance' music and sound design, my greatest influence and inspiration is Ricardo Villalobos. There is so much I could say about his ideas and his sound design, his sound quality. It's unbelievable. The labels Sharingtones and Shahr Farang have a very distinctive and stimulating sound; very inspirational.
Also, living in four different countries over the past 10 years has really influenced our sound.
A: What are your views on music and technology and what it enables us to do nowadays?
A & M: Today, technology lets us do easily what was once very hard or nearly impossible. We can now blend not only the synthetic and acoustic worlds, but seamlessly blend any style of music into another. It's mind-blowing; it gives us childlike excitement. So we try to work with as many musicians as we can: from different parts of the world, from different styles of music and from different walks of life, they all bring something irreplaceable and unique with them. We try to work with as many styles as possible, but we hope to develop our sound enough to one day work with some Indian music, and to meld together styles that have thousands of years between them.
A: Anything else you would like to add?
A & M: Thank you Pierre and Pheek for inviting us to do this interview! We highly appreciate the support and are super happy to be associated with such amazing people. And of course, we would like to thank all the amazing musicians, friends and family for helping us and working with us!
Peace!
Interview by PH Paradis.
Portrait Photos by: Omar Jaimes.
More info:
http://archipel.cc/
http://archipel.cc/music/angoor/
| http://archipel.cc/bare-kind-interview/
Jamal Rahman talks about his musical beginning, establishing TBR and his future plans
Jamal Rahman of True Brew Records talks to Ahmad Uzair about his musical beginning, establishing True Brew Records, and his future plans. Here is what he has to say about all of this.
Q) Hey Jamal, tell us a bit about your musical background and how did you get into this profession?
I owe the beginning of my relationship with music to my parents. When I was 6 or 7, they bought a Bose sound system to complement their already huge cassette tape collection. There was always a diverse span of music playing in the house, and that laid the foundation for what would become my own production aesthetic. Later, in my early teens, my brother introduced me to the guitar and taught me to play, and I started recording and performing a year after. Never looked back since!
Q) How did True Brew Records happen?
I wanted very much to take a different route from how a career in music in Pakistan has generally been perceived. Since performing and producing music had always been synonymous for me, my career had to be an amalgamation of the two. And so True Brew Records was born, representing my aesthetic both musically and artistically.
Q) What would you like to say about the experience of working with so many top musicians of the country and producing music for them?
I have been exceptionally fortunate to have worked on some rewarding projects with a large selection of talent across the board. As a producer, it is my job to draw out of an artist the performance that will produce the best possible version of the work we do. Music is a subjective medium, one in which what is produced can only be as good as the ideas poured into it, and that depends largely on the artists themselves and how far they are willing to push the envelope. One of my core principles is to better your work with each new project, and I am grateful that the artists I have worked with have been open to experimentation and to discovering new ground.
Q) You have spent considerable time abroad for your studies. What are the basic elements that you find missing in the younger lot coming up here, as compared to the West? And what positives do you see over here?
Musicians in countries with thriving music scenes build their chops through performance. Starting from the bottom rung, they perform their material on the small bar circuit and have to build up to larger venues and festivals as their fan base grows and their music improves. Frequent performances inform each band's aesthetic and also act as a filter for success, as people vote with their feet. The fact that live music in Pakistan is limited to private affairs has stifled the growth of our music scene.
Having said that, credit must be given to the determination with which our musicians have, despite all odds, continued producing music. The love for the art itself has been the main driving force, and that is an admirable quality.
Q) We have heard that you are currently working on some big projects under the banner of True Brew Records. Please let our readers know about them, especially your association with the much talked-about upcoming movie 'Manto'.
TBR is currently involved in a number of efforts whose overarching aim is to support the production and distribution of Pakistani music across the globe.
One such endeavour is the collaborative music project called the ‘True Brew All Stars’ which will feature music composed by and produced with a fluid line-up of musicians spanning multiple genres. No holds barred!
Our Live at True Brew (LATB) series, held at our own studio, will see a second wave of shows this winter featuring more exciting local and international acts and will have a select number of video releases accompanying it.
We will be venturing into uncharted musical territory with some killer content in the year ahead, more about which shall be revealed in due time.
Sarmad Khoosat, the director of Manto, has been generous and allowed us significant creative freedom in the production and composition of the music. The score is minimal and haunting, the original songs borrow heavily from traditional melodies and lyrics but are steeped in modern production techniques, and the sound design augments the visuals in a manner that makes the entire experience larger than life.
Q) What are your future plans at True Brew Records? Where do you see yourself 5 years from here?
TBR is here to stay and in the next five years we aim to provide the sort of institutional and infrastructural support to the culture of music that has been absent. We have a wealth of unexplored talent that has no access to guidance or recording facilities, and a nation that is starved for entertainment and music. Our goal is to fill that void and build bridges to allow for the two to interact in the best possible manner.
Q) Any special message to all your fans and music aspirants who want to come into this field?
To aspiring musicians, I have three words: courage, initiative and perseverance.
Courage to pursue the sort of music that is true to one's own person and not be bogged down by commercial value or current trends. Initiative to take matters into one's own hands, set up shows, create videos and get your music heard. And perseverance, because it's not going to be easy!
As for our fans, the music is for you so help us make it, support it and spread it. Sharing, after all, is caring! | https://koolmuzone.pk/2013/11/jamal-rahman-true-brew-records-interview/ |
Creative Technologies (2021)
Creative Technologies is focused on the creative outcomes of applying new technologies across a range of media in advanced and emerging cultural and artistic fields. Music papers are focused on digital music and applications, computer science papers focus on computer graphics and interactive media systems, and media papers include video production and new, integrated video-based multimedia practices.
Note: Candidates who wish to follow a creative technology or creative practice pathway can discuss these options with the Screen and Media Studies graduate advisor.
Prescriptions for the PGCert(CreateTech) and PGDip(CreateTech)
To be eligible to be considered for enrolment in graduate Creative Technologies papers, a student should normally have completed a Bachelor of Media and Creative Technologies (BMCT) or an undergraduate programme considered to be equivalent by the Head of the School of Arts.
500 Level
DSIGN532 Information Visualisation | 15.0 points | 21A (Hamilton)
This paper aims to provide an awareness of the potential offered by information visualisation techniques, a familiarity with the underlying concepts, and an understanding and ability to effectively design and apply information visualisations in a given context.

MEDIA501 Critical and Creative Approaches to Research | 30.0 points | 21A (Hamilton) & 21A (Online)
This paper identifies the constraints and freedoms of research methods, and places a strong emphasis on research as an intellectual, theoretical, and processual activity as well as the roles of interdisciplinary projects in creating unique methodological and conceptual media research.

MEDIA504 Media Design and Aesthetics | 30.0 points | 21B (Hamilton)
Audio-visual media are undergoing continual transformations that question the roles of makers and audiences alike. Students are encouraged to experiment with sensory perception in order to question current notions of aesthetics in relation to cultural practices and media creations. This process of reflection encourages critical per...

MEDIA507 Theory and Research in Action | 30.0 points | 21B (Hamilton)
Have you got a topic you are passionate about to research? This paper provides students the opportunity to engage with primary texts in preparation for writing their dissertation. You will explore research design frameworks which will support the completion of a robust dissertation.

MEDIA508 Creative Practice Research | 30.0 points | 21A (Hamilton) & 21B (Hamilton)
Are you a creative practitioner that wants to experiment with practice-led research? This paper offers a site for experimentation and development of a practical project to be included in a final dissertation.

MEDIA593 Screen and Media Studies Thesis | 90.0 points | 21X (Hamilton)
Provides students with an opportunity to engage with a topic of their choice from the field of Screen and Media Studies under the guidance and supervision of a lecturer from the programme. The outcome is a report of approx. 30,000 words or equivalent on the findings of a theoretical, empirical or practice-led investigation in the f...

MUSIC510 Music and the Screen | 30.0 points | 21D (Hamilton)
No description available.

MUSIC511 Sonic Art | 30.0 points | 21D (Hamilton)
No description available. | https://papers.waikato.ac.nz/subjects/CRTCH
Lanificio is an unusual and fascinating location, able to accommodate any type of event: private parties, corporate events, sets and shoots, exhibitions, shows and much more.
Lanificio is the perfect synthesis of a production center and a creative laboratory. It occupies 3,500 square meters between the river Aniene and Via di Pietralata, in the former Lanificio Luciani wool mill, in spaces and landscapes that cannot hide their European vocation. It is a container of ideas and cultural stimuli that took its first steps in 2007, when a new entrepreneurial venture began to take shape around experimentation, a taste for refinement and the convergence of heterogeneous professional skills.
Today Lanificio's cultural programme is cross-disciplinary, touching on music, the performing and visual arts, cuisine, and the recovery of design objects, all with an eye to sustainability, recycling and the revaluation of the surrounding territory. Lanificio is a place and a project able to transform itself according to need, offering support for the development and design of events and shared initiatives, open to the public or directed at individuals and companies. | https://alloraroma.com/en/place/lanificio-159