| question (stringlengths 15–100) | context (stringlengths 18–412k) |
|---|---|
when does a crime to remember come on | A Crime to Remember - wikipedia
A Crime to Remember is an American documentary television series that airs on Investigation Discovery and premiered on November 12, 2013. It tells the stories of notorious crimes that captivated the attention of the media and the public when they occurred, such as the 1955 bombing of United Airlines Flight 629. As of the 2016–2017 season, the series has aired 30 episodes over four seasons. The first three seasons are currently streaming on Hulu.
The series was officially renewed for Season 5 as of March 29, 2017, and the season began airing February 10, 2018.
Episodes feature interviews with surviving friends and relatives, as well as journalists who covered the cases and investigators who worked them. Other interviews feature true crime experts and authors.
Prior to her 2016 death, author and True Crime Diary blogger Michelle McNamara, wife of Patton Oswalt, was a frequent contributor to the series, a role now filled by Karen Kilgariff of the podcast My Favorite Murder. The producers honored McNamara at the start of Season 4's finale: "This season of A Crime to Remember is dedicated to our friend Michelle McNamara."
Each episode is treated as a mini movie, filmed and edited in the manner of a motion picture.
Seasons 1 and 2 won back-to-back News & Documentary Emmy Awards for Outstanding Lighting Direction & Scenic Design.
Season 3 was Emmy-nominated in the same category.
Episode dates are from TVGuide.com.
|
when was the last hurricane in gulf shores alabama | Hurricane Ivan - wikipedia
Hurricane Ivan was a large, long - lived, Cape Verde hurricane that caused widespread damage in the Caribbean and United States. The cyclone was the ninth named storm, the sixth hurricane and the fourth major hurricane of the active 2004 Atlantic hurricane season. Ivan formed in early September, and reached Category 5 strength on the Saffir - Simpson Hurricane Scale.
Ivan caused catastrophic damage to Grenada as a strong Category 3 storm, heavy damage to Jamaica as a strong Category 4 storm, and then to Grand Cayman, Cayman Islands, and the western tip of Cuba as a Category 5 storm. After peaking in strength, the hurricane moved north-northwest across the Gulf of Mexico to strike Pensacola/Milton, Florida, and Alabama as a strong Category 3 storm, causing significant damage. Ivan dropped heavy rains on the Southeastern United States as it progressed northeast and east through the eastern United States, becoming an extratropical cyclone. The remnant low from the storm moved into the western subtropical Atlantic and regenerated into a tropical cyclone, which then moved across Florida and the Gulf of Mexico into Louisiana and Texas, causing minimal damage. Ivan caused an estimated $26.1 billion (2004 USD) in damage along its path, of which $20.5 billion occurred in the United States.
On September 2, 2004, Tropical Depression Nine formed from a large tropical wave southwest of Cape Verde. As the system moved to the west, it strengthened gradually, becoming Tropical Storm Ivan on September 3 and reaching hurricane strength on September 5, 1,150 miles (1,850 km) to the east of Tobago. Later that day, the storm intensified rapidly, and by 5 pm EDT (2100 UTC), Ivan became a Category 3 hurricane with winds of 125 miles per hour (200 km / h). The National Hurricane Center said that the rapid strengthening of Ivan on September 5 was unprecedented at such a low latitude in the Atlantic basin.
As it moved west, Ivan weakened slightly because of wind shear in the area. The storm passed over Grenada on September 7, battering several of the Windward Islands. As it entered the Caribbean Sea, Ivan reintensified rapidly and became a Category 5 hurricane just north of the Netherlands Antilles islands of Curaçao and Bonaire and of Aruba on September 9, with winds reaching 160 mph (260 km/h). Ivan weakened slightly as it moved west-northwest towards Jamaica. As Ivan approached the island late on September 10, it began a westward jog that kept the eye and the strongest winds to the south and west. However, because of its proximity to the Jamaican coast, the island was battered with hurricane-force winds for hours.
After passing Jamaica, Ivan resumed a more northerly track and regained Category 5 strength. Ivan's strength continued to fluctuate as it moved west on September 11, and the storm attained its highest winds of 163 mph (262 km/h) as it passed within 30 miles (50 km) of Grand Cayman. Ivan reached its peak strength with a minimum central pressure of 910 millibars (27 inHg) on September 12. Ivan passed through the Yucatán Channel late on September 13, while its eyewall affected the westernmost tip of Cuba. Once over the Gulf of Mexico, Ivan weakened slightly to Category 4 strength, which it maintained while approaching the Gulf Coast of the United States.
Just before it made landfall in the United States, Ivan 's eyewall weakened considerably, and its southwestern portion almost disappeared. Around 2 a.m. CDT September 16 (0700 UTC), Ivan made landfall on the U.S. mainland in Gulf Shores, Alabama, as a Category 3 hurricane with 120 mph (190 km / h) winds; some hurricane information sources put the winds from Hurricane Ivan near 130 mph (210 km / h) upon landfall in Alabama and northwestern Florida. Ivan then continued inland, maintaining hurricane strength until it was over central Alabama. Ivan weakened rapidly that evening and became a tropical depression on the same day, still over Alabama. Ivan lost tropical characteristics on September 18 while crossing Virginia. Later that day, the remnant low of Ivan drifted off the U.S. mid-Atlantic coast into the Atlantic Ocean, and the low - pressure disturbance continued to dump rain on the United States.
On September 20, Ivan 's remnant surface low completed an anticyclonic loop and moved across the Florida peninsula. As it continued west across the northern Gulf of Mexico, the system reorganized and again took on tropical characteristics. On September 22, the National Weather Service, "after considerable and sometimes animated in - house discussion (regarding) the demise of Ivan, '' determined that the low was in fact a result of the remnants of Ivan and thus named it accordingly. On the evening of September 23, the revived Ivan made landfall near Cameron, Louisiana as a tropical depression. Ivan weakened into a remnant low on September 24, as it moved overland into Texas. The remnant circulation of Ivan persisted for another day, before dissipating on September 25.
Ivan set 18 new records for intensity at low latitudes. When Ivan first became a Category 3 hurricane on September 5 (1800 UTC), it was centered near 10.2 degrees north of the equator. This is the most southerly location on record for a major hurricane in the Atlantic basin. Just six hours later, Ivan also became the most southerly Category 4 hurricane on record in the Atlantic basin when it reached that intensity while located at 10.6 degrees north. Finally, at midnight (UTC) on September 9, while centered at 13.7 degrees north, Ivan became the most southerly Category 5 hurricane on record in the Atlantic basin. The latter record would not be surpassed until Hurricane Matthew in 2016, which reached Category 5 intensity at 13.4 degrees north.
Ivan had held the world record of 33 (with 32 consecutive) six - hour periods of intensity at or above Category 4 strength. This record was broken two years later by Pacific Hurricane / Typhoon Ioke, which had 36 (33 consecutive) six - hour periods at Category 4 strength. This contributed to Ivan 's total Accumulated Cyclone Energy (ACE) of 70.38. The tornado outbreak associated with Ivan spawned 127 tornadoes, more than any other tropical cyclone worldwide.
Scientists from the Naval Research Laboratory at Stennis Space Center, Mississippi have used a computer model to predict that, at the height of the storm, the maximum wave height within Ivan 's eyewall reached 131 feet (40 m).
By September 5, a hurricane watch was posted for Barbados. Early on the following day, a tropical storm watch was issued for Grenada. Later that day, hurricane watches were also put into effect for Saint Lucia and Martinique. A tropical storm warning was issued for Saint Vincent and the Grenadines, Tobago, and Grenada. By 1500 UTC on September 6, the hurricane watches and tropical storm watches and warnings were upgraded to hurricane warnings and expanded to cover Barbados, Saint Vincent and the Grenadines, Saint Lucia, Tobago, and Grenada. Simultaneously, a tropical storm warning was issued for Trinidad. On September 7, the hurricane warning in effect for several countries was downgraded to a tropical storm warning. Shortly afterward, all tropical storm and hurricane watches and warnings were discontinued in the eastern portions of the Windward Islands.
As Ivan continued westward, a hurricane watch was issued for the ABC islands on September 8. Many schools and businesses were closed in the Netherlands Antilles, and about 300 people evacuated their homes on Curaçao.
In the Caribbean, 500,000 Jamaicans were told to evacuate from coastal areas, but only 5,000 were reported to have moved to shelters. 12,000 residents and tourists were evacuated from Isla Mujeres off the Yucatán Peninsula.
In Louisiana, mandatory evacuations of vulnerable areas in Jefferson, Lafourche, Plaquemines, St. Charles, St. James, St. John the Baptist, and Tangipahoa parishes took place, with voluntary evacuations ordered in six other parishes. More than one-third of the population of Greater New Orleans evacuated voluntarily, including more than half of the residents of New Orleans itself. At the height of the evacuation, intense traffic congestion on local highways caused delays of up to 12 hours. About a thousand special-needs patients were housed at the Louisiana Superdome during the storm. Ivan was considered a particular threat to the New Orleans area because of the danger of catastrophic flooding. However, Plaquemines and St. Bernard Parishes suffered only a moderate amount of wind damage. Hurricane preparedness for New Orleans was judged poor. At one point, the media sparked fears of an "Atlantean" catastrophe if the hurricane were to make a direct strike on the city. These fears were not realized, as the storm's path turned further east.
In Mississippi, evacuation of mobile homes and vulnerable areas took place in Hancock, Jackson, and Harrison counties. In Alabama, evacuation in the areas of Mobile and Baldwin counties south of Interstate 10 was ordered, including a third of the incorporated territory of the City of Mobile, as well as several of its suburbs. In Florida, a full evacuation of the Florida Keys began at 7: 00 am EDT September 10 but was lifted at 5: 00 am EDT September 13 as Ivan tracked further west than originally predicted. Voluntary evacuations were declared in ten counties along the Florida Panhandle, with strong emphasis in the immediate western counties of Escambia, Santa Rosa, and Okaloosa. Ivan prompted the evacuation of 270 animals at the Alabama Gulf Coast Zoo in Gulf Shores, Alabama. The evacuation had to be completed within a couple of hours, with only 28 volunteers available to move the animals.
Ivan killed 64 people in the Caribbean -- mainly in Grenada and Jamaica -- three in Venezuela, and 25 in the United States, including fourteen in Florida. Thirty - two more deaths in the United States were indirectly attributed to Ivan. While traversing the eastern United States, Ivan spawned 120 tornadoes, striking communities along concentric arcs on the leading edge of the storm. In Florida, Blountstown, Marianna, and Panama City Beach suffered three of the most devastating tornadoes. A Panama City Beach news station was nearly hit by an F2 tornado during the storm. Ivan also caused over US $20.5 billion (2004 USD) in damages in the United States and US $3 billion in the Caribbean (2004 USD).
Ivan passed directly over Grenada on September 7, 2004, killing 39 people. The capital, St. George 's, was severely damaged and several notable buildings were destroyed, including the residence of the prime minister. Ivan also caused extensive damage to a local prison, allowing most of the inmates to escape. The island, in the words of a Caribbean disaster official, suffered "total devastation. '' According to a member of the Grenadian parliament, at least 85 % of the small island was devastated. Extensive looting was reported. In all, damage on the island totalled US $815 million (2004 USD, $1.06 billion in 2017 USD).
Elsewhere in the Caribbean, a pregnant woman was killed in Tobago when a tree fell on top of her home, and a 75 - year - old Canadian woman drowned in Barbados. Three deaths were reported in Venezuela. Over five hundred homes on Barbados and around 60 homes in Saint Vincent and the Grenadines were either damaged or destroyed.
On September 11 -- 12, the center of Ivan passed near Jamaica, causing significant wind and flood damage. Overall, 17 people were killed in Jamaica and 18,000 people were left homeless as a result of the flood waters and high winds. Most of the major resorts and hotels fared well, though, and were reopened only a few days after Ivan had passed. Damage on Jamaica totaled US $360 million (2004 USD, $466 million in 2017 USD).
In the Cayman Islands, Governor Bruce Dinwiddy described damage as "very, very severe and widespread." Despite strict building codes, which made the islands' buildings well able to withstand even major hurricanes, Ivan's winds and storm surge were so strong that a quarter or more of the buildings on the islands were reported to be uninhabitable, with 85% damaged to some extent. Much of Grand Cayman remained without power, water, or sewer services for several months. After five months, barely half the pre-Ivan hotel rooms were usable. Two people were killed on Grand Cayman, one from drowning and the other from flying debris. Damage across the territory was catastrophic, with losses amounting to US$2.86 billion, or 183 percent of its gross domestic product. A letter from the Cayman Islands Government Office in the United Kingdom, dated 8 October 2004 and written by McKeeva Bush, Leader of Government Business, details the intensity of the storm, the extent of the damage, and the recovery process during the months that followed.
There were four deaths in the Dominican Republic. The regional Caribbean Development Bank estimated that Ivan caused over US$3 billion (2004 USD, $3.9 billion in 2017 USD) in damage to island nations, mostly in the Cayman Islands, Grenada, and Jamaica. Minor damage, including some beach erosion, was reported in the ABC islands.
Even though Ivan did not make landfall on Cuban soil, its storm surge caused localized flooding in Santiago de Cuba and Granma provinces, on the southern part of the island. At Cienfuegos, the storm produced waves of 15 feet (4.6 m), and Pinar del Río recorded 13.3 inches (340 mm) of rainfall. While there were no casualties on the island, the Cuban government estimated that about US$1.2 billion (2004 USD, $1.6 billion in 2017 USD) of property damage was directly due to Ivan.
Along with the 14 deaths in Florida, Ivan is blamed for eight deaths in North Carolina, two in Georgia, and one in Mississippi. An additional 32 deaths were reported as indirectly caused by the storm.
As it passed over the Gulf of Mexico off the coast of Louisiana, Ivan destroyed Taylor Energy's Mississippi Canyon 20-A production platform, which stood 550 ft (170 m) above 28 producing oil and gas wells drilled in water 479 ft (146 m) deep. Waves estimated to be 71 feet (22 m) high created tremendous pressures below the surface, triggering a landslide that obliterated the platform. Hundreds of gallons of oil per day were still leaking onto the surface of the Gulf ten years later, in 2014, and oil has continued to appear since then.
Ivan caused an estimated US $20.5 billion (2004 USD) in damage in the United States alone, making it the second - costliest hurricane on record at the time, behind only Hurricane Andrew of 1992.
As Ivan made landfall on the U.S. coastline, heavy damage was observed in Pensacola, Gulf Breeze, Navarre Beach, Pensacola Beach, and Fort Walton Beach, Florida, on the eastern side of the storm, as well as to dwellings situated far inland, as much as 20 miles (32 km) from the Gulf coast, along the shorelines of Escambia Bay, East Bay, Blackwater Bay, and Ward Basin in Escambia and Santa Rosa counties. The area just west of Pensacola, including the community of Warrington (which includes Pensacola NAS), Perdido Key, and Innerarity Point, took the brunt of the storm. Some of the subdivisions in this part of the county were completely destroyed, and a few key roads in the Perdido area reopened only in late 2005, over a year after the storm hit. Shattered windows, caused by gusts and flying projectiles throughout the night of the storm, were common. As of December 2007, roads remained closed on Pensacola Beach because of damage from Ivan's storm surge.
In Pensacola, the Interstate 10 Escambia Bay Bridge was heavily damaged, with as much as a quarter - mile (400 m) of the bridge collapsing into the bay. The causeway that carries U.S. Highway 90 across the northern part of the same bay was also heavily damaged. The U.S. 90 causeway reopened first; the I - 10 bridge reopened, with temporary repairs, in November. Virtually all of Perdido Key, an area on the outskirts of Pensacola that bore the brunt of Ivan 's winds and rain, was essentially leveled. High surf and wind brought extensive damage to Innerarity Point.
On September 26, 2006, over two years after Ivan struck the region, funding for the last 501 FEMA - provided trailers ran out for those living in Santa Rosa and Escambia counties.
The city of Demopolis, over 100 miles (160 km) inland in west - central Alabama, endured wind gusts estimated at 90 mph (140 km / h), while Montgomery saw wind gusts in the 60 to 70 mph (97 to 113 km / h) range at the height of the storm.
The heaviest damage as Ivan made landfall on the U.S. coastline was observed in Baldwin County in Alabama, where the storm 's eye (and eyewall) made landfall. High surf and wind brought extensive damage to Orange Beach near the border with Florida. There, two five - story condominium buildings were undermined to the point of collapse by Ivan 's storm surge of 14 feet (4.3 m). Both were made of steel - reinforced concrete. Debris gathered in piles along the storm tide, exacerbating the damage when the floodwaters crashed into homes sitting on pilings. Brewton, a community about 50 miles (80 km) inland, also suffered severe damage.
In addition to the damage to the southern portions of the state, there was extensive damage to the state 's electrical grid. At the height of the outages, Alabama Power reported 489,000 subscribers had lost electrical power -- roughly half of its subscriber base.
Further inland, Ivan caused major flooding, bringing the Chattahoochee River near Atlanta and many other rivers and streams to levels at or near 100 - year records. The Delaware River and its tributaries crested just below their all - time records set by Hurricane Diane in 1955. Locations in southern New Hampshire and Massachusetts received over 7 inches of rainfall from the remnants of Ivan, causing flooding and mudslides. In Connecticut, high winds moved in quickly and unexpectedly, and a boater was killed when his trimaran capsized in 50 - knot winds on Long Island Sound.
In western North Carolina, many streams and rivers reached well above flood stage in an area that was heavily flood damaged just a week and a half prior from the remnants of Hurricane Frances, causing many roads to be closed. High winds contributed to widespread power outages throughout the mountainous region. The Blue Ridge Parkway as well as Interstate 40 through the Pigeon River gorge in Haywood County, North Carolina sustained major damage, and landslides were common across the mountains. There was major flooding along the French Broad River and Swannanoa River in Asheville, North Carolina and along the Pigeon River near Canton, North Carolina. As a result of the rain, a major debris flow of mud, rocks, trees, and water surged down Peek 's Creek, near Franklin, North Carolina, sweeping away 15 houses and killing five people.
While at sea, the system destroyed seven oil platforms in the Gulf of Mexico; it also spawned deadly tornadoes as far north as Maryland. While crossing over the Mid-Atlantic states, Ivan's remnants spawned 117 tornadoes across the eastern United States, with the 40 tornadoes spawned in Virginia on September 17 setting a daily record for the commonwealth. Ivan then moved into the Wheeling, West Virginia, and Pittsburgh area, causing major flooding and gusty winds. Pittsburgh International Airport recorded its highest 24-hour rainfall on record, 5.95 inches (151 mm). Ivan's rain caused widespread flooding: the Juniata River basin was flooded, and the Frankstown Branch crested at its highest level ever. After Ivan regenerated in the Gulf of Mexico, it caused further heavy rainfall of up to 8 inches (200 mm) in areas of Louisiana and Texas.
On the morning of September 21, the remnant mid-level circulation of Ivan combined with a frontal system. This produced a plume of moisture over the Canadian Maritimes for four days, producing heavy rainfall totaling 6.2 inches (160 mm) in Gander, Newfoundland. High winds of up to 89 mph (143 km / h) downed trees and caused power outages in Newfoundland, Prince Edward Island, and eastern Nova Scotia. The system produced intense waves of up to 50 feet (15 m) near Cape Bonavista. The system killed two when it grounded a fishing vessel and was indirectly responsible for four traffic fatalities in Newfoundland.
Grenada suffered serious economic repercussions following the destruction caused by Ivan. Before Ivan, the economy of Grenada was projected to grow by 4.7 %, but the island 's economy instead contracted by nearly 3 % in 2004. The economy was also projected to grow by at least 5 % through 2007, but, as of 2005, that estimate had been lowered to less than 1 %. The government of Grenada also admitted that government debt, 130 % of the island 's GDP, was "unsustainable '' in October 2004 and appointed a group of professional debt advisors in January 2005 to help seek a cooperative restructuring agreement with creditors.
More than US$150 million was sent to Grenada in 2004 to aid reconstruction following Ivan, but the economic situation remains fragile. The International Monetary Fund reported that, "difficult enough as the present fiscal situation is, it is unfortunately quite easy to envisage circumstances that would make it even more so." Furthermore, "shortfalls in donor financing and tax revenues, or events such as a further rise in global oil prices, pose a grave risk."
Within two days of Ivan's passage, USAID's hurricane recovery program distributed emergency relief supplies to families displaced by the storm. During phase one of the recovery program, communities restored three tourist sites, cleared agricultural lands, and completed disaster mitigation work. In addition, the U.S. Peace Corps completed thirty small projects in rural communities and low-income neighborhoods. During the first phase of recovery, 66 health clinics, 25 schools, and 62 water and sanitation systems were repaired, and about 1,379 farmers, herders, and micro-businesses became eligible for grants. By 2005, 55 schools and colleges had been repaired and 1,560 houses restored.
On September 27, 2004, President of the United States George W. Bush submitted a budget to the United States Congress requesting over $7 billion (2004 USD) in aid to victims of Hurricanes Ivan and Jeanne in the following states: Alabama, Florida, Georgia, Louisiana, Mississippi, North Carolina, Ohio, Pennsylvania, and West Virginia. Over half of the $7 billion was to cover uninsured damage to property and public infrastructure. $889 million was spent to repair Department of Defense facilities. About $600 million was earmarked for emergency repairs to highways and roads damaged by Hurricanes Charley, Frances, Ivan, and Jeanne. The Small Business Administration (SBA) used $472 million to provide loans for small businesses and homeowners affected by the storm. Approximately $400 million was given by the United States Department of Agriculture to provide financial assistance to agricultural producers suffering crop and other losses. Around $132 million (2004 USD) was used to repair federal facilities operated by several government agencies, including the United States Coast Guard, the Federal Bureau of Prisons, the United States Forest Service, and the Federal Aviation Administration. The United States Army Corps of Engineers used $81 million (2004 USD) for restoration of coastal areas affected by Ivan. In addition, $50 million (2004 USD) in disaster and famine assistance funds went to Grenada, Jamaica, and Haiti.
Following the storm in Alabama, more than 167,700 people in 65 counties applied for assistance, and over 51 counties became eligible for public assistance. As a result, the U.S. Department of Homeland Security's Federal Emergency Management Agency (FEMA) and the Alabama Emergency Management Agency (AEMA) received $735 million (2004 USD), which was spent on disaster assistance, including low-interest loans for homeowners and businesses, disaster food stamps, Disaster Unemployment Assistance for those left unemployed as a result of Ivan, "Project Rebound", and payment of the 5,856 National Flood Insurance Program claims. In addition, there were repairs to public infrastructure such as roads, bridges, buildings, utilities, facilities, and parks. Twenty Disaster Recovery Centers were opened in 13 counties, including the Poarch Creek Indian Reservation. Overall, FEMA paid 90% of the $735 million, while the AEMA paid the other 10%.
Ivan is suspected of bringing spores of soybean rust from Venezuela into the United States, marking the first occurrence of soybean rust in North America. Since the Florida soybean crop had already been mostly harvested, economic damage was limited. Some of the most severe outbreaks in South America have been known to reduce soybean crop yields by half or more. Following the storm, more than 138,500 residents in 15 counties of the Florida Panhandle applied for federal and state aid. In those counties, a total of $162.6 million was approved by FEMA's Individuals and Households Program. In addition, residents of 24 other counties in Florida were eligible for grants and loans. By September 2005, more than $1.4 billion (2004 USD) in federal and state assistance had been approved for residents and communities in the Florida Panhandle. In addition, the National Flood Insurance Program paid nearly $869 million (2004 USD) on more than 9,800 insurance claims after Ivan.
More than $4 million (2004 USD) in disaster assistance was approved for Mississippi by FEMA and the Mississippi Emergency Management Agency (MEMA). In addition, the SBA issued nearly 3,000 applications for low - interest loans to homeowners, renters, landlords, businesses, and non-profit organizations. The loans covered up to $200,000 in real estate repairs / replacements and up to $40,000 in repairs / replacements of personal property.
Residents and business owners in eight parishes of Louisiana became eligible for disaster assistance. A week before the application deadline of November 15, 2004, about 9,527 residents had applied for disaster assistance. Overall, FEMA and the Government of Louisiana provided more than $3.8 million (2004 USD) to those who requested assistance. In addition, the SBA accepted applications for loans to repair personal property until that date.
This storm marked the third occasion the name "Ivan '' had been used to name a tropical cyclone in the Atlantic, as well as the fifth of six occurrences worldwide. Because of the severe damage and number of deaths in the Caribbean and United States, the name Ivan was retired in the spring of 2005 by the World Meteorological Organization and will never again be used in the Atlantic basin. It was replaced by Igor for the 2010 season.
Ivan broke several hydrological records; it is credited with possibly causing the largest ocean wave ever recorded, a 91 - foot (28 - meter) wave that may have been as high as 131 ft (40 m), and the fastest seafloor current, at 2.25 m / s (5 mph).
|
where was the money hidden in shawshank redemption | Shawshank tree - wikipedia
The iconic tree featured in the 1994 motion picture The Shawshank Redemption was a white oak located near Malabar Farm State Park in Lucas, Ohio, United States. The tree was at least 100 feet (30 m) tall and approximately 180 to 200 years old. The tree played a central role in the film 's plot and was one of the most popular tourist sites connected to it. The tree was split by lightning on July 29, 2011, and eventually was knocked down by strong winds on or around July 22, 2016.
The tree was a major tourist attraction for fans of the film, although it was located on private property at Malabar Farm. The tree was part of "The Shawshank Trail", which features many of the film's iconic locations and attracts up to 35,000 visitors annually. The farm where the tree was located is sometimes used as a venue for weddings.
On July 29, 2011, half of the tree fell, due to trunk rot from ants, after being hit by lightning. News of the event went viral, appearing in news outlets in the United Kingdom and India. The tree's fate was uncertain at the time, and officials were pessimistic about its chances of survival, but it was found to be alive. The tree was further damaged in July 2016 by strong winds. The event caused a major increase in Internet traffic to the Mansfield and Richland County Convention and Visitors Bureau website and in general interest in the Shawshank Trail. The remaining portions of the tree were cut down on April 9, 2017, by the property's current owner.
Though the film is set in Maine, much of the filming took place in Mansfield, Ohio, and nearby locations. The oak appears near the end of The Shawshank Redemption when Red (played by Morgan Freeman) follows clues left by Andy Dufresne (played by Tim Robbins) to its location. Red finds a box buried at the base of a stone wall in the shade of the oak. The box contains a letter from Andy and cash to buy a bus ticket to visit him in Mexico. In the film, Andy describes the tree as "like something out of a Robert Frost poem ''.
The oak has been described as among the most iconic trees in film history.
|
who is performing at pyeongchang 2018 closing ceremony | 2018 Winter Olympics closing ceremony - wikipedia
The closing ceremony of the 2018 Winter Olympics took place at Pyeongchang Olympic Stadium in Pyeongchang County, South Korea, on 25 February 2018 at 20: 00 KST (UTC + 9).
The flag bearers of 92 National Olympic Committees entered the Pyeongchang Olympic Stadium informally in single file, arranged in the ganada order of the Korean alphabet, and behind them marched all the athletes, without any distinction or grouping by nationality. Marching alongside the athletes were Soohorang, the Pyeongchang 2018 Olympic mascot, and Hodori, mascot of the 1988 Summer Olympics in Seoul.
First, the Greek flag was raised while its anthem played. The Olympic flag was then lowered and passed, by the mayor of Pyeongchang County, Shim Jae - kook, to IOC President, Thomas Bach, who then handed it over to the mayor of Beijing, Chen Jining. This was then followed by the raising of the flag of China, and the playing of its anthem. The flag will be raised again in Tokyo, Japan, for the opening ceremony of the 2020 Summer Olympics on 24 July 2020.
Beijing, the host city of the 2022 Winter Olympic Games, presented a special performance, "See You in Beijing in 2022", directed by Chinese film director Zhang Yimou, who also directed the 2008 Summer Olympics opening and closing ceremonies. The presentation featured two skating pandas and performers forming red lines that became a dragon, as pandas and dragons are national icons of China. The skaters also traced lines to form the emblem of the Games. China's paramount leader Xi Jinping made a cameo appearance by video, delivering a welcome message on behalf of the Chinese people.
IOC President Thomas Bach formally closed the Games, calling them "The Games of New Horizons". Soon after, the cauldron was extinguished.
|
who is considered to be most likely to vote | Voter turnout - wikipedia
Voter turnout is the percentage of eligible voters who cast a ballot in an election. Eligibility varies by country, and the voting - eligible population should not be confused with the total adult population. Age and citizenship status are often among the criteria used to determine eligibility, but some countries further restrict eligibility based on sex, race, or religion.
After increasing for many decades, voter turnout in most established democracies has been decreasing since the 1980s. In general, low turnout is attributed to disillusionment, indifference, or a sense of futility (the perception that one's vote won't make any difference).
Low turnout is usually considered to be undesirable. As a result, there have been many efforts to increase voter turnout and encourage participation in the political process. In spite of significant study into the issue, scholars are divided on the reasons for the decline. Its cause has been attributed to a wide array of economic, demographic, cultural, technological, and institutional factors.
Different countries have very different voter turnout rates. For example, turnout in the United States 2012 presidential election was about 55 %. In both Belgium, which has compulsory voting, and Malta, which does not, participation reaches about 95 %.
The chance of any one vote determining the outcome is low. Some studies show that a single vote in a voting scheme such as the Electoral College in the United States has an even lower chance of determining the outcome. Other studies claim that the Electoral College actually increases voting power. Studies using game theory, which takes into account the ability of voters to interact, have also found that the expected turnout for any large election should be zero.
The basic formula for determining whether someone will vote, on the questionable assumption that people act completely rationally, is

PB + D > C

where P is the probability that an individual's vote will affect the outcome of the election, B is the perceived benefit that would be received if that person's favored candidate or party were elected, D is the personal gratification derived from the act of voting itself (such as the satisfaction of fulfilling a civic duty), and C is the time, effort, and financial cost involved in voting.
Since P is virtually zero in most elections, PB is also near zero, and D is thus the most important element in motivating people to vote. For a person to vote, these factors must outweigh C. Experimental political science has found that even when P is likely greater than zero, this term has no effect on voter turnout. Enos and Fowler (2014) conducted a field experiment that exploits the rare opportunity of a tied election for major political office. Informing citizens that the special election to break the tie will be close (meaning a high P term) has little mobilizing effect on voter turnout.
Riker and Ordeshook developed the modern understanding of D. They listed five major forms of gratification that people receive for voting: complying with the social obligation to vote; affirming one 's allegiance to the political system; affirming a partisan preference (also known as expressive voting, or voting for a candidate to express support, not to achieve any outcome); affirming one 's importance to the political system; and, for those who find politics interesting and entertaining, researching and making a decision. Other political scientists have since added other motivators and questioned some of Riker and Ordeshook 's assumptions. All of these concepts are inherently imprecise, making it difficult to discover exactly why people choose to vote.
Recently, several scholars have considered the possibility that B includes not only a personal interest in the outcome, but also a concern for the welfare of others in the society (or at least other members of one 's favorite group or party). In particular, experiments in which subject altruism was measured using a dictator game showed that concern for the well - being of others is a major factor in predicting turnout and political participation. Note that this motivation is distinct from D, because voters must think others benefit from the outcome of the election, not their act of voting in and of itself.
There are philosophical, moral, and practical reasons that some people cite for not voting in electoral politics. Robert LeFevre, Francis Tandy, John Pugsley, Frank Chodorov, George H. Smith, Carl Watner, Wendy McElroy, and Lysander Spooner are some moderately well - known authors who have written about these reasons.
High voter turnout is often considered to be desirable, though among political scientists and economists specializing in public choice, the issue is still debated. A high turnout is generally seen as evidence of the legitimacy of the current system. Dictators have often fabricated high turnouts in showcase elections for this purpose. For instance, Saddam Hussein 's 2002 plebiscite was claimed to have had 100 % participation. Opposition parties sometimes boycott votes they feel are unfair or illegitimate, or if the election is for a government that is considered illegitimate. For example, the Holy See instructed Italian Catholics to boycott national elections for several decades after the creation of the state of Italy. In some countries, there are threats of violence against those who vote, such as during the 2005 Iraq elections, an example of voter suppression. However, some political scientists question the view that high turnout is an implicit endorsement of the system. Mark N. Franklin contends that in European Union elections opponents of the federation, and of its legitimacy, are just as likely to vote as proponents.
Assuming that low turnout is a reflection of disenchantment or indifference, a poll with very low turnout may not be an accurate reflection of the will of the people. On the other hand, if low turnout is a reflection of contentment of voters about likely winners or parties, then low turnout is as legitimate as high turnout, as long as the right to vote exists. Still, low turnouts can lead to unequal representation among various parts of the population. In developed countries, non-voters tend to be concentrated in particular demographic and socioeconomic groups, especially the young and the poor. However, in India, which boasts an electorate of more than 814 million people, the opposite is true. The poor, who comprise the majority of the demographic, are more likely to vote than the rich and the middle classes, and turnout is higher in rural areas than urban areas. In low - turnout countries, these groups are often significantly under - represented in elections. This has the potential to skew policy. For instance, a high voter turnout among the elderly coupled with a low turnout among the young may lead to more money for retirees ' health care, and less for youth employment schemes. Some nations thus have rules that render an election invalid if too few people vote, such as Serbia, where three successive presidential elections were rendered invalid in 2003.
In each country, some parts of society are more likely to vote than others. In high - turnout countries, these differences tend to be limited. As turnout approaches 90 %, it becomes difficult to find significant differences between voters and nonvoters, but in low turnout nations the differences between voters and non-voters can be quite marked.
Turnout differences appear to persist over time; in fact, the strongest predictor of individual turnout is whether or not one voted in the previous election. As a result, many scholars think of turnout as habitual behavior that can be learned or unlearned, especially among young adults.
One study found that improving children 's social skills increases their turnout as adults.
Socioeconomic factors are significantly associated with whether individuals develop the habit of voting. The most important socioeconomic factor affecting voter turnout is education. The more educated a person is, the more likely he or she is to vote, even controlling for other factors that are closely associated with education level, such as income and class. Income has some effect independently: wealthier people are more likely to vote, regardless of their educational background. There is some debate over the effects of ethnicity, race, and gender. In the past, these factors unquestionably influenced turnout in many nations, but nowadays the consensus among political scientists is that these factors have little effect in Western democracies when education and income differences are taken into account. However, since different ethnic groups typically have different levels of education and income, there are important differences in turnout between such groups in many societies. Other demographic factors have an important influence: young people are far less likely to vote than the elderly. Occupation has little effect on turnout, with the notable exception of higher voting rates among government employees in many countries.
There can also be regional differences in voter turnout. One issue that arises in continent - spanning nations, such as Australia, Canada, the United States and Russia, is that of time zones. Canada banned the broadcasting of election results in any region where the polls have not yet closed; this ban was upheld by the Supreme Court of Canada. In several recent Australian national elections, the citizens of Western Australia knew which party would form the new government up to an hour before the polling booths in their State closed.
Within countries there can be important differences in turnout between individual elections. Elections where control of the national executive is not at stake generally have much lower turnouts -- often half that for general elections. Municipal and provincial elections, and by - elections to fill casual vacancies, typically have lower turnouts, as do elections for the parliament of the supranational European Union, which is separate from the executive branch of the EU 's government. In the United States, midterm congressional elections attract far lower turnouts than Congressional elections held concurrently with Presidential ones. Runoff elections also tend to attract lower turnouts.
In theory, one of the factors most likely to increase turnout is a close race. With an intensely polarized electorate and all polls showing a close finish between President George W. Bush and Democratic challenger John F. Kerry, turnout in the 2004 U.S. presidential election was close to 60%, resulting in a record number of popular votes for both candidates; despite losing the election, Kerry even surpassed Ronald Reagan's 1984 record in terms of the number of popular votes received. However, this race also demonstrates the influence that contentious social issues can have on voter turnout; for example, turnout in 1860, when anti-slavery candidate Abraham Lincoln won the election, was the second highest on record (81.2 percent, second only to 1876, with 81.8 percent). Nonetheless, there is evidence to support the argument that predictable election results, where one vote is not seen to be able to make a difference, have resulted in lower turnouts, such as Bill Clinton's 1996 re-election (which featured the lowest voter turnout in the United States since 1924), the United Kingdom general election of 2001, and the 2005 Spanish referendum on the European Constitution; all of these elections produced decisive results on a low turnout.
A 2017 NBER paper found that an awareness by the electorate that an election would be close increased turnout: "Closer elections are associated with greater turnout only when polls exist. Examining within - election variation in newspaper reporting on polls across cantons, we find that close polls increase turnout significantly more where newspapers report on them most. ''
One 2017 study in the Journal of Politics found that, in the United States, incarceration had no significant impact on turnout in elections: ex-felons did not become less likely to vote after their time in prison. Also in the United States, incarceration, probation, and a felony record deny 5–6 million Americans the right to vote, with reforms gradually leading more states to allow people with felony criminal records to vote, while almost none allow incarcerated people to vote.
A 2017 study found that the opening and closing hours of polling places determine the age demographics of turnout: turnout among younger voters is higher the longer polling places are open, and turnout among older voters decreases the later polling places open.
A 2017 study in Electoral Studies found that Swiss cantons that reduced the costs of postal voting for voters by prepaying the postage on return envelopes (which otherwise cost 85 Swiss Franc cents) were "associated with a statistically significant 1.8 percentage point increase in voter turnout". A 2016 study in the American Journal of Political Science found that preregistration (allowing young citizens to register before being eligible to vote) increased turnout by 2 to 8 percentage points.
A 2018 study in the British Journal of Political Science found that internet voting in local elections in Ontario, Canada, had only a modest impact on turnout, increasing it by 3.5 percentage points. The authors of the study say that the results "suggest that internet voting is unlikely to solve the low turnout crisis, and imply that cost arguments do not fully account for recent turnout declines."
A 2017 experimental study found that sending registered voters between the ages of 18 and 30 a voter guide containing salient information about candidates in an upcoming election (a list of candidate endorsements and the candidates' policy positions on five issues in the campaign) increased turnout by 0.9 percentage points.
Research results are mixed as to whether bad weather affects turnout. There is research that shows that bad weather can reduce turnout. A 2016 study, however, found no evidence that weather disruptions reduce turnout. A 2011 study found "that while rain decreases turnout on average, it does not do so in competitive elections. '' The season and the day of the week (although many nations hold all their elections on the same weekday) can also affect turnout. Weekend and summer elections find more of the population on holiday or uninterested in politics, and have lower turnouts. When nations set fixed election dates, these are usually midweek during the spring or autumn to maximize turnout. Variations in turnout between elections tend to be insignificant. It is extremely rare for factors such as competitiveness, weather, and time of year to cause an increase or decrease in turnout of more than five percentage points, far smaller than the differences between groups within society, and far smaller than turnout differentials between nations. A 2017 study in the journal American Politics Research found that rainfall increased Republican vote shares, because it decreased turnout more among Democratic voters than Republican voters.
Limited research suggests that genetic factors may also be important. Some scholars recently argued that the decision to vote in the United States has very strong heritability, using twin studies of validated turnout in Los Angeles and self - reported turnout in the National Longitudinal Study of Adolescent Health to establish that. They suggest that genetics could help to explain why parental turnout is such a strong predictor of voting in young people, and also why voting appears to be habitual. Further, they suggest, if there is an innate predisposition to vote or abstain, this would explain why past voting behavior is such a good predictor of future voter reaction.
In addition to the twin study method, scholars have used gene association studies to analyze voter turnout. Two genes that influence social behavior have been directly associated with voter turnout, specifically those regulating the serotonin system in the brain via the production of monoamine oxidase and 5HTT. However, this study was reanalyzed by separate researchers who concluded these "two genes do not predict voter turnout '', pointing to several significant errors, as well as "a number of difficulties, both methodological and genetic '' in studies in this field. Once these errors were corrected, there was no longer any statistically significant association between common variants of these two genes and voter turnout.
A 2018 study in the American Political Science Review found that the parents of newly enfranchised voters "become 2.8 percentage points more likely to vote." A 2018 study in the journal Political Behavior found that increasing the size of a household increases household members' propensity to vote.
A 2018 study in The Journal of Politics found that Section 5 of the 1965 Voting Rights Act "increased black voter registration by 14 -- 19 percentage points, white registration by 10 -- 13 percentage points, and overall voter turnout by 10 -- 19 percentage points. Additional results for Democratic vote share suggest that some of this overall increase in turnout may have come from reactionary whites. ''
According to a 2018 study, get - out - the - vote groups in the United States who emphasize ballot secrecy along with reminders to vote increase turnout by about 1 percentage point among recently registered nonvoters.
Voter turnout varies considerably between nations. It tends to be lower in the United States, Asia and Latin America than in most of Europe, Canada and Oceania. Western Europe averages a 77 % turnout, and South and Central America around 54 % since 1945. The differences between nations tend to be greater than those between classes, ethnic groups, or regions within nations. Confusingly, some of the factors that cause internal differences do not seem to apply on a global level. For instance, nations with better - educated populaces do not have higher turnouts. There are two main commonly cited causes of these international differences: culture and institutions. However, there is much debate over the relative impact of the various factors.
Wealth and literacy have some effect on turnout, but are not reliable measures. Countries such as Angola and Ethiopia have long had high turnouts, but so have the wealthy states of Europe. The United Nations Human Development Index shows some correlation between higher standards of living and higher turnout. The age of a democracy is also an important factor. Elections require considerable involvement by the population, and it takes some time to develop the cultural habit of voting, and the associated understanding of and confidence in the electoral process. This factor may explain the lower turnouts in the newer democracies of Eastern Europe and Latin America. Much of the impetus to vote comes from a sense of civic duty, which takes time and certain social conditions that can take decades to develop.
Demographics also have an effect. Older people tend to vote more than youths, so societies where the average age is somewhat higher, such as those in Europe, have higher turnouts than somewhat younger countries such as the United States. Populations that are more mobile and those that have lower marriage rates tend to have lower turnout. In countries that are highly multicultural and multilingual, it can be difficult for national election campaigns to engage all sectors of the population.
The nature of elections also varies between nations. In the United States, negative campaigning and character attacks are more common than elsewhere, potentially suppressing turnout. The focus placed on get-out-the-vote efforts and mass marketing can have important effects on turnout. Partisanship is an important impetus to turnout, with the highly partisan more likely to vote. Turnout tends to be higher in nations where political allegiance is closely linked to class, ethnic, linguistic, or religious loyalties. Countries where multiparty systems have developed also tend to have higher turnouts. Nations with a party specifically geared towards the working class will tend to have higher turnouts among that class than countries where voters can choose only from big tent parties, which try to appeal to all voters. A four-wave panel study conducted during the 2010 Swedish national election campaign shows (1) clear differences in media use between age groups and (2) that both political social media use and attention to political news in traditional media increase political engagement over time.
Institutional factors have a significant impact on voter turnout. Rules and laws are also generally easier to change than attitudes, so much of the work done on how to improve voter turnout looks at these factors. Making voting compulsory has a direct and dramatic effect on turnout. Simply making it easier for candidates to stand, through easier nomination rules, is believed to increase voting. Conversely, adding barriers, such as a separate registration process, can suppress turnout. The salience of an election (the effect that a vote will have on policy) and its proportionality (how closely the result reflects the will of the people) are two structural factors that also likely have important effects on turnout.
The modalities of how electoral registration is conducted can also affect turnout. For example, until "rolling registration '' was introduced in the United Kingdom, there was no possibility of the electoral register being updated during its currency, or even amending genuine mistakes after a certain cut off date. The register was compiled in October, and would come into force the next February, and would remain valid until the next January. The electoral register would become progressively more out of date during its period of validity, as electors moved or died (also people studying or working away from home often had difficulty voting). This meant that elections taking place later in the year tended to have lower turnouts than those earlier in the year. The introduction of rolling registration where the register is updated monthly has reduced but not entirely eliminated this issue since the process of amending the register is not automatic, and some individuals do not join the electoral register until the annual October compilation process.
France has a highly efficient registration process. At the age of eighteen, all youth are automatically registered. Only new residents and citizens who have moved are responsible for bearing the costs and inconvenience of updating their registration. Similarly, in the Nordic countries, all citizens and residents are included in the official population register, which simultaneously serves as a tax list, a voter registration list, and a register of membership in the universal health system. Residents are required by law to report any change of address to the register within a short time after moving. This is also the system in Germany (but without the membership in the health system).
The elimination of registration as a separate bureaucratic step can result in higher voter turnout. This is reflected in statistics from the United States Census Bureau, 1982–1983. States that have same-day registration, or no registration requirements, have higher voter turnout than the national average. At the time of that report, the four states that allowed election-day registration were Minnesota, Wisconsin, Maine, and Oregon. Since then, other states, including Idaho, have moved to allow same-day registration. North Dakota is the only state that requires no registration.
One of the strongest factors affecting voter turnout is whether voting is compulsory. In Australia, voter registration and attendance at a polling booth have been mandatory since the 1920s, with the most recent federal election in 2013 having turnout figures of 93.23 % for the House of Representatives and 93.88 % for the Senate. Several other countries have similar laws, generally with somewhat reduced levels of enforcement. If a Bolivian voter fails to participate in an election, the citizen may be denied withdrawal of their salary from the bank for three months.
In Mexico and Brazil, existing sanctions for non-voting are minimal or are rarely enforced. When enforced, compulsion has a dramatic effect on turnout.
In Venezuela and the Netherlands compulsory voting has been rescinded, resulting in substantial decreases in turnout.
In Greece voting is compulsory, however there are practically no sanctions for those who do not vote.
In Belgium and Luxembourg voting is compulsory, too, but not strongly enforced. In Luxembourg only voters below the age of 75 and those who are not physically handicapped or chronically ill have the legal obligation to vote.
Sanctions for non-voting have sometimes been provided for even in the absence of a formal requirement to vote. In Italy the Constitution describes voting as a duty (art. 48), though electoral participation is not obligatory. Thus, from 1946 to 1992, Italian electoral law included light sanctions for non-voters (lists of non-voters were posted at polling stations). Turnout rates in Italy have not declined substantially since 1992, however, pointing to factors other than compulsory voting to explain its high electoral participation.
Mark N. Franklin argues that salience, the perceived effect that an individual vote will have on how the country is run, has a significant effect on turnout. He presents Switzerland as an example of a nation with low salience. The nation 's administration is highly decentralized, so that the federal government has limited powers. The government invariably consists of a coalition of parties, and the power wielded by a party is far more closely linked to its position relative to the coalition than to the number of votes it received. Important decisions are placed before the population in a referendum. Individual votes for the federal legislature are thus unlikely to have a significant effect on the nation, which probably explains the low average turnouts in that country. By contrast Malta, with one of the world 's highest voter turnouts, has a single legislature that holds a near monopoly on political power. Malta has a two - party system in which a small swing in votes can completely alter the executive. On the other hand, countries with a two - party system can experience low turnout if large numbers of potential voters perceive little real difference between the main parties. Voters ' perceptions of fairness also have an important effect on salience. If voters feel that the result of an election is more likely to be determined by fraud and corruption than by the will of the people, fewer people will vote.
Another institutional factor that may have an important effect is proportionality, i.e., how closely the legislature reflects the views of the populace. Under a pure proportional representation system the composition of the legislature is fully proportional to the votes of the populace, and a voter can be sure of being represented in parliament, even if only from the opposition benches. (However, many nations that use a form of proportional representation depart from pure proportionality by stipulating that smaller parties not supported by a certain threshold percentage of votes cast will be excluded from parliament.) By contrast, a voting system based on single - seat constituencies (such as the plurality system used in North America, the UK and India) will tend to result in many non-competitive electoral districts, in which the outcome is seen by voters as a foregone conclusion.
Proportional systems tend to produce multiparty coalition governments. This may reduce salience, if voters perceive that they have little influence over which parties are included in the coalition. For instance, after the 2005 German election, the creation of the executive not only expressed the will of the voters of the majority party but also was the result of political deal - making. Although there is no guarantee, this is lessened as the parties usually state with whom they will favour a coalition after the elections.
Political scientists are divided on whether proportional representation increases voter turnout, though in countries with proportional representation voter turnout is higher. There are other systems that attempt to preserve both salience and proportionality, for example, the Mixed member proportional representation system in New Zealand (in operation since 1996), Germany, and several other countries. However, these tend to be complex electoral systems, and in some cases complexity appears to suppress voter turnout. The dual system in Germany, though, seems to have had no negative impact on voter turnout.
Ease of voting is a factor in rates of turnout. In the United States and most Latin American nations, voters must go through separate voter registration procedures before they are allowed to vote. This two - step process quite clearly decreases turnout. U.S. states with no, or easier, registration requirements have larger turnouts. Other methods of improving turnout include making voting easier through more available absentee polling and improved access to polls, such as increasing the number of possible voting locations, lowering the average time voters have to spend waiting in line, or requiring companies to give workers some time off on voting day. In some areas, generally those where some polling centres are relatively inaccessible, such as India, elections often take several days. Some countries have considered Internet voting as a possible solution. In other countries, like France, voting is held on the weekend, when most voters are away from work. Therefore, the need for time off from work as a factor in voter turnout is greatly reduced.
Many countries have looked into Internet voting as a possible solution for low voter turnout. Some countries like France and Switzerland use Internet voting. However, it has only been used sparingly by a few states in the US. This is due largely to security concerns. For example, the US Department of Defense looked into making Internet voting secure, but cancelled the effort. The idea would be that voter turnout would increase because people could cast their vote from the comfort of their own homes, although the few experiments with Internet voting have produced mixed results.
Voter fatigue can lower turnout. If there are many elections in close succession, voter turnout will decrease as the public tires of participating. In low - turnout Switzerland, the average voter is invited to go to the polls an average of seven times a year; the United States has frequent elections, with two votes per year on average, if one includes all levels of government as well as primaries. Holding multiple elections at the same time can increase turnout; however, presenting voters with massive multipage ballots, as occurs in some parts of the United States, can reduce turnouts.
A 2018 study found that "young people who pledge to vote are more likely to turn out than those who are contacted using standard Get - Out - the - Vote materials. Overall, pledging to vote increased voter turnout by 3.7 points among all subjects and 5.6 points for people who had never voted before. ''
Differing methods of measuring voter turnout can contribute to reported differences between nations. There are difficulties in measuring both the numerator, the number of voters who cast votes, and the denominator, the number of voters eligible to vote.
For the numerator, it is often assumed that the number of voters who went to the polls should equal the number of ballots cast, which in turn should equal the number of votes counted, but this is not the case. Not all voters who arrive at the polls necessarily cast ballots. Some may be turned away because they are ineligible, some may be turned away improperly, and some who sign the voting register may not actually cast ballots. Furthermore, voters who do cast ballots may abstain, deliberately voting for nobody, or they may spoil their votes, either accidentally or as an act of protest.
In the United Kingdom, the Electoral Commission distinguishes between "valid vote turnout '', which excludes spoilt ballots, and "ballot box turnout '', which does not.
In the United States, it has been common to report turnout as the sum of votes for the top race on the ballot, because not all jurisdictions report the actual number of people who went to the polls nor the number of undervotes or overvotes. Overvote rates of around 0.3 percent are typical of well - run elections, but in Gadsden County Florida, the overvote rate was 11 percent in November 2000.
For the denominator, it is often assumed that the number of eligible voters is well defined, but again, this is not the case. In the United States, for example, there is no accurate registry of exactly who is eligible to vote, since only about 70 -- 75 % of people choose to register themselves. Thus, turnout has to be calculated based on population estimates. Some political scientists have argued that these measures do not properly account for the large number of illegal aliens, disenfranchised felons and persons who are considered ' mentally incompetent ' in the United States, and that American voter turnout is higher than is normally reported. Professor Michael P. McDonald constructed an estimate of turnout against the ' voting eligible population ' (VEP), instead of the ' voting age population ' (VAP). For the American presidential election of 2004, turnout could then be expressed as 60.32 % of VEP, rather than 55.27 % of VAP.
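To make the VAP / VEP distinction concrete, the short sketch below computes turnout under both denominators. It is a minimal illustration using hypothetical round figures in the same ballpark as the 2004 US numbers cited above; the variable names and values are assumptions for demonstration, not official statistics.

```python
# Minimal sketch: the reported turnout rate depends heavily on which
# denominator is used. All figures below are hypothetical round numbers.

ballots_cast = 122_000_000            # assumed total ballots counted
voting_age_population = 221_000_000   # VAP: all residents of voting age
ineligible = 18_000_000               # assumed non-citizens, ineligible felons, etc.
voting_eligible_population = voting_age_population - ineligible  # VEP

turnout_vap = ballots_cast / voting_age_population
turnout_vep = ballots_cast / voting_eligible_population

print(f"Turnout (VAP): {turnout_vap:.2%}")   # roughly 55 %
print(f"Turnout (VEP): {turnout_vep:.2%}")   # roughly 60 %
```

The same number of ballots yields a turnout several percentage points higher once ineligible residents are removed from the denominator, which is the core of McDonald 's argument.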
In New Zealand, registration is supposed to be universal. This does not eliminate uncertainty in the eligible population because this system has been shown to be unreliable, with a large number of eligible but unregistered citizens, creating inflated turnout figures.
A second problem with turnout measurements lies in the way turnout is computed. One can count the number of voters, or one can count the number of ballots, and in a vote - for - one race, one can sum the number of votes for each candidate. These are not necessarily identical because not all voters who sign in at the polls necessarily cast ballots, although they ought to, and because voters may cast spoiled ballots.
Over the last 40 years, voter turnout has been steadily declining in the established democracies. This trend has been significant in the United States, Western Europe, Japan and Latin America. It has been a matter of concern and controversy among political scientists for several decades. During this same period, other forms of political participation have also declined, such as voluntary participation in political parties and the attendance of observers at town meetings. The decline in voting has also accompanied a general decline in civic participation, such as church attendance, membership in professional, fraternal, and student societies, youth groups, and parent - teacher associations. At the same time, some forms of participation have increased. People have become far more likely to participate in boycotts, demonstrations, and to donate to political campaigns.
Before the late 20th century, suffrage -- the right to vote -- was so limited in most nations that turnout figures have little relevance to today. One exception was the United States, which had near universal white male suffrage by 1840. The U.S. saw a steady rise in voter turnout during the century, reaching its peak in the years after the Civil War. Turnout declined from the 1890s until the 1930s, then increased again until 1960 before beginning its current long decline. In Europe, voter turnouts steadily increased from the introduction of universal suffrage before peaking in the mid-to - late 1960s, with modest declines since then. These declines have been smaller than those in the United States, and in some European countries turnouts have remained stable and even slightly increased. Globally, voter turnout has decreased by about five percentage points over the last four decades.
Many causes have been proposed for this decline; a combination of factors is most likely. When asked why they do not vote, many people report that they have too little free time. However, over the last several decades, studies have consistently shown that the amount of leisure time has not decreased. According to a study by the Heritage Foundation, Americans report on average an additional 7.9 hours of leisure time per week since 1965. Furthermore, according to a study by the National Bureau of Economic Research, increases in wages and employment actually decrease voter turnout in gubernatorial elections and do not affect national races. Potential voters ' perception that they are busier is common and might be just as important as a real decrease in leisure time. Geographic mobility has increased over the last few decades. There are often barriers to voting in a district where one is a recent arrival, and a new arrival is likely to know little about the local candidate and local issues. Francis Fukuyama has blamed the welfare state, arguing that the decrease in turnout has come shortly after the government became far more involved in people 's lives. He argues in Trust: The Social Virtues and The Creation of Prosperity that the social capital essential to high voter turnouts is easily dissipated by government actions. However, on an international level those states with the most extensive social programs tend to be the ones with the highest turnouts. Richard Sclove argues in Democracy and Technology that technological developments in society such as "automobilization, '' suburban living, and "an explosive proliferation of home entertainment devices '' have contributed to a loss of community, which in turn has weakened participation in civic life.
Trust in government and in politicians has decreased in many nations. However, the first signs of decreasing voter turnout occurred in the early 1960s, which was before the major upheavals of the late 1960s and 1970s. Robert D. Putnam argues that the collapse in civil engagement is due to the introduction of television. In the 1950s and 1960s, television quickly became the main leisure activity in developed nations. It replaced earlier more social entertainments such as bridge clubs, church groups, and bowling leagues. Putnam argues that as people retreated within their homes and general social participation declined, so too did voting.
Rosenstone and Hansen contend that the decline in turnout in the United States is the product of a change in campaigning strategies as a result of the so - called new media. Before the introduction of television, almost all of a party 's resources would be directed towards intensive local campaigning and get out the vote initiatives. In the modern era, these resources have been redirected to expensive media campaigns in which the potential voter is a passive participant. During the same period, negative campaigning has become ubiquitous in the United States and elsewhere and has been shown to impact voter turnout. Attack ads and smear campaigns give voters a negative impression of the entire political process. The evidence for this is mixed: elections involving highly unpopular incumbents generally have high turnout; some studies have found that mudslinging and character attacks reduce turnout, but that substantive attacks on a party 's record can increase it.
Part of the reason for the decline in voter turnout in the 2016 election is likely the restrictive voting laws passed around the country. The Brennan Center for Justice reported that in 2016 fourteen states passed restrictive voting laws. Examples of these laws are photo ID mandates, narrow windows for early voting, and limitations on voter registration. Barbour and Wright also believe that restrictive voting laws are one of the causes, describing this system of laws as regulating the electorate. The Constitution gives states the power to make decisions regarding restrictive voting laws. In 2008 the Supreme Court made a crucial decision regarding Indiana 's voter ID law, holding that it does not violate the Constitution. Since then almost half of the states have passed restrictive voting laws. These laws contribute to Barbour and Wright 's idea of the rational nonvoter: someone who does not vote because, for them, the costs of voting outweigh the benefits. Restrictive laws add to the "cost '' of voting, that is, they make it more difficult to vote. In the United States, programs such as MTV 's "Rock the Vote '' and the "Vote or Die '' initiatives have been introduced to increase turnout among those between the ages of 18 and 25. A number of governments and electoral commissions have also launched efforts to boost turnout. For instance, Elections Canada has launched mass media campaigns to encourage voting prior to elections, as have bodies in Taiwan and the United Kingdom.
Google extensively studied the causes behind low voter turnout in the United States, and argues that one of the key reasons behind lack of voter participation is the so - called "interested bystander ''. According to Google 's study, 48.9 % of adult Americans can be classified as "interested bystanders '', as they are politically informed but reluctant to involve themselves in the civic and political sphere. This category is not limited to any socioeconomic or demographic group. Google theorizes that individuals in this category suffer from voter apathy, as they are interested in political life but believe that their individual effect would be negligible. These individuals often participate politically on the local level, but shy away from national elections.
It has been argued that democratic consolidation (the stabilization of new democracies) contributes to the decline in voter turnout. A 2017 study challenges this however.
Much of the above analysis is predicated on voter turnout as measured as a percentage of the voting - age population. In a 2001 article in the American Political Science Review, Michael McDonald and Samuel Popkin argued that, at least in the United States, voter turnout since 1972 has not actually declined when calculated for those eligible to vote, what they term the voting - eligible population. In 1972, noncitizens and ineligible felons (depending on state law) constituted about 2 % of the voting - age population. By 2004, ineligible voters constituted nearly 10 %. Ineligible voters are not evenly distributed across the country -- 20 % of California 's voting - age population is ineligible to vote -- which confounds comparisons of states. Furthermore, they argue that an examination of the Census Bureau 's Current Population Survey shows that turnout is low but not declining among the youth, when the high youth turnout of 1972 (the first year 18 - to 20 - year - olds were eligible to vote in most states) is removed from the trendline.
|
how many rivers are there in west bengal | List of rivers of West Bengal - Wikipedia
List of rivers of West Bengal state, located in Eastern India.
The major rivers of West Bengal state include:
|
when did juventus last won the champions league | Juventus F.C. in European Football - wikipedia
Juventus Football Club first participated in a Union of European Football Associations (UEFA) competition in 1958. The first international cup they took part in was the Central European Cup, which they entered in 1929. The competition lasted from 1927 to 1940 and the club reached the semi-finals in five editions. From 1938 to the Rio Cup in 1951, Juventus did not participate in any international competitions. Subsequently, since entering the European competitions in 1955, they have competed in all six confederation tournaments, claiming the title at least once in each of them, making the Torinese club the only one worldwide to reach that achievement.
One of the most titled clubs in the sport, Juventus is Italy 's second most successful team in European competitions and the eighth club in the world with the most official international tournaments won, having won eleven official trophies: the UEFA Champions League (formerly known as the European Champions ' Cup) twice, the European Cup Winners ' Cup once, the UEFA Europa League (formerly known as the UEFA Cup) thrice, the UEFA Intertoto Cup once, the UEFA Super Cup twice and the Intercontinental Cup twice. The club has been a finalist on nine occasions (seven in the European Champions ' Cup and Champions League, one in the UEFA Cup and one in the Intercontinental Cup) and has led the confederation ranking during seven seasons since its introduction in 1979, the most for an Italian club. Based on these results, the club was recognised as Italy 's best club, and second in Europe, of the 20th century according to the all - time ranking published in 2009 by the International Federation of Football History and Statistics, an organisation recognised by FIFA.
Qualification for international competitions is determined by a team 's success in its national league and cup competitions from the previous season. Juventus competed in international competitions for 28 consecutive seasons, from 1963 to 1991, more than any other Italian club.
Giovanni Trapattoni is the club 's most successful manager on the international stage, with six trophies. During his first spell at the club between the 1970s and 1980s, Juventus became the first and only Italian side to win an international competition without foreign footballers, the first club in the history of European football to have won all three seasonal competitions organised by the Union of European Football Associations (and the only one to achieve this with the same coach), and the first European club to win the Intercontinental Cup, in 1985, since it had been restructured by the organizing committee of the European Confederation and the Confederación Sudamericana de Fútbol (CONMEBOL) five years beforehand; the club was awarded The UEFA Plaque by the confederation 's president Jacques Georges on 12 July 1988 in Geneva, Switzerland.
Juventus ' biggest - margin win in UEFA club competitions is a 7 -- 0 victory over Lechia Gdańsk in the 1983 -- 84 European Cup Winners ' Cup, Valur in the 1986 -- 87 European Champions ' Cup and Olympiacos in the 2003 -- 04 UEFA Champions League. Alessandro Del Piero holds the club record for the most appearances (130) and goals scored on that stage (53).
Juventus ' score listed first.
UEFA competitions includes European Champions ' Cup and Champions League, UEFA Cup Winners ' Cup, UEFA Cup and Europa League, UEFA Intertoto Cup, UEFA Super Cup and Intercontinental Cup.
Source: UEFA.com Pld = Matches played; W = Matches won; D = Matches drawn; L = Matches lost; GF = Goals for; GA = Goals against; GD = Goal Difference.
As of 13 February 2018
|
who plays the genie in aladdin at california adventure | Disney 's Aladdin: a Musical Spectacular - wikipedia
Disney 's Aladdin: A Musical Spectacular was a Broadway - style show based on Disney 's 1992 animated film Aladdin with music by Alan Menken and lyrics by Howard Ashman, Tim Rice and Alan Menken.
It was performed inside the Hyperion Theater in Hollywood Land at Disney California Adventure from 2003 to 2016. In September 2015, it was announced that the show 's final day of performance would be January 10, 2016. The show was then closed on January 11, and was replaced by a musical stage show inspired by Disney 's 2013 animated film Frozen, which premiered in May 2016.
A version of the show continues to play on board the Disney Cruise Line ship Disney Fantasy.
The production is a Broadway - type show. Many of the scenes and songs from the movie are re-created on stage and some of the action spills out into the aisles, such as Prince Ali 's arrival in Agrabah on elephant back.
At Disney California Adventure, the 45 - minute production took place in the 2,000 seat Hyperion Theater, located at the end of Hollywood Land. While most of the show was scripted, the Genie 's dialogue often changed to reflect current events in the news and popular culture. The musical replaced the venue 's previous show, The Power of Blast, which played from 2001 to 2002.
Alan Menken composed and wrote lyrics for a new song for this production, called "To Be Free ''.
Buena Vista Records released an official soundtrack to the production in 2003. This is an original cast recording, and includes almost every piece of music used in the show. The main cast on the recording is Miles Wesley (Aladdin), Deedee Magno (Jasmine), Nick Santa Maria (Genie), Lance Roberts (Jafar) and Jamila Ajibade (Narrator). Orchestrations by Timothy Williams (composer).
Track listing:
1. Arabian Nights * / A Thousand Stories / Who Dares Approach (1:43)
2. Off You Go / The Mouth Closes (0:34)
3. You Have Been Warned (0:11)
4. Aladdin Intro (0:24)
5. One Jump Ahead (2:44)
6. Street Rat! (0:24)
7. Princess of Agrabah (0:14)
8. Old Man (0:48)
9. Go Now, Into the Cave / Gold Reveal / Cave Collapse (1:57)
10. Carpet (0:39)
11. Genie Up (0:22)
12. Friend Like Me (3:18)
13. The Palace (0:25)
14. Sultan 's Fanfare (0:13)
15. Prince Ali (2:04)
16. Genie Free / Jafar Plots (1:27)
17. To Be Free (2:44)
18. A Whole New World (3:38)
19. A Whole New World - Underscore (0:20)
20. I 'll Say (0:42)
21. We Through Yet / Prince Ali (Reprise) (1:36)
22. Snake! (0:41)
23. He Has More Power (0:50)
24. Father, I 've Decided (0:59)
25. Celebration (1:36) (includes "Arabian Nights '', "To Be Free '', "A Whole New World '')
26. Curtain Call (1:13) (includes "One Jump Ahead '', "Prince Ali '', "To Be Free '', "Friend Like Me '')
Production gallery captions: a larger view of the theater interior as a whole; Jasmine reveals herself to be the Princess of Agrabah; Aladdin 's entrance as "Prince Ali ''; "Prince Ali '' proposes to Jasmine; Iago, Jasmine, "Prince Ali '', and the Genie; Jafar summons the Genie; "Prince Ali '' holding the lamp.
|
which is the highest mountain peak in bihar | Karoh Peak - wikipedia
Karoh Peak is a 1,467 - metre (4,813 ft) mountain peak in the Sivalik Hills range of the greater Himalayas, located near the Morni Hills area of Panchkula district, Haryana, India. It is the highest point in the state of Haryana.
Karoh Peak is named after the local Hindu deity, the Kroh or Karoh Deota.
The Karoh Deota is the presiding deity of the Karoh hill. At the top there are three brick and mortar shrines of the Karoh Deota. There are also rock sculptures placed outside one of the shrines; these are archaeological fragments of ruined ancient Hindu temples.
At 1,467 metres, Karoh Peak is the highest mountain peak in the state of Haryana. The British rulers had originally, and incorrectly, recorded the height of Karoh Hill in The Imperial Gazetteer of India as 1,499 metres. Actual measurements by the Survey of India found the height to be about 100 feet lower, and the official height was revised down to 1,467 metres.
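As a quick check of the figures above (a hedged illustration only, not data from the Survey of India), the difference between the two recorded heights can be converted to feet:

```python
# Difference between the Gazetteer figure and the revised Survey of India
# figure, expressed in feet (1 metre = 3.28084 feet).
gazetteer_height_m = 1499
revised_height_m = 1467

difference_m = gazetteer_height_m - revised_height_m    # 32 m
difference_ft = difference_m * 3.28084                  # about 105 ft

print(f"Difference: {difference_m} m ~ {difference_ft:.0f} ft")
```

The roughly 105 - foot gap is consistent with the statement that the measured height came out about 100 feet lower than the Gazetteer figure.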
It is located within the Indus river basin and Ghaggar - Hakra River basin. The Ghaggar - Hakra River basin which is said to be remnant of Sarasvati River, that is said to originate from Adi Badri (Haryana), was home of the Indus Valley Civilisation.
One can take motorised transport from Chandigarh via Panchkula to reach the Chhamla - Daman - Thapli road in Haryana. Karoh Peak lies nearly 500 m (1,640 ft) of vertical distance above the Chhamla - Daman - Thapli road and can be climbed via a 2 - kilometre (1.2 mi) trek through the dhanis (small settlements) of Churi and Diyothi of Bhoj Darara.
|
mind is such a terrible thing to taste | The Mind is a Terrible Thing to Taste - wikipedia
The Mind Is a Terrible Thing to Taste is the fourth studio album by industrial metal band Ministry, released in 1989 through Sire / Warner Bros. Records. The music took a more hardcore, aggressively guitar - driven direction. Jourgensen was inspired by Stormtroopers of Death and Rigor Mortis to add Thrash Metal guitars to The Mind Is a Terrible Thing to Taste and other subsequent Ministry albums. As with most of Ministry 's work, the album 's lyrics deal mainly with political corruption ("Thieves ''); cultural violence ("So What ''); environmental degradation, and nuclear war ("Breathe ''); drug addiction ("Burning Inside ''); and insanity ("Cannibal Song '').
Jourgensen recalled the band 's state as dysfunctional and the album 's production as "complete chaos and mayhem '', which gave the band a level of artistic freedom impossible had they planned it. Jourgensen says that despite being a fan favorite, it is not among his favorites because of the condition he was in at the time; he was heavily into drugs during recording and had a poor relationship with his bandmates. In one instance, he chased bassist Paul Barker around the studio with a chair and hit him on the head because he "could n't stand him anymore ''. Jourgensen credited the era, the city, and the atmosphere at Chicago Trax Studios for the album. Bill Rieflin and Chris Connelly instead attributed the album 's sound to the band 's interest in technology.
For pre-production, Rieflin said he and Barker watched films for a month, sampling anything that caught their interest. Instead of writing music, they all improvised individually, rarely collaborating with each other. Connelly compared it to exquisite corpse, a Surrealist technique in which an artistic work is created collaboratively without any of the participants having knowledge of the others ' contribution. Rieflin cited "So What '' as the only track to feature two musicians in the studio at the same time.
After playing with the band on The Land of Rape and Honey 's tour, Dave Ogilvie collaborated on this album. The rapper K - Lite sang vocals on "Test ''. Jourgensen said that Ministry and K - Lite had been recording songs at the same time at the studio. Both Jourgensen and K - Lite were impressed with the aggressiveness of each others ' music, and Jourgensen invited him to contribute vocals for a track. Rieflin had previously recorded drums and bass after he became frustrated waiting for the others to contribute music to the track; Barker said Rieflin played all the instruments on the song.
The female spoken - word part of "Dream Song '' is a recorded conversation with Angelina Lukacin, Jourgensen 's future (and now ex -) wife. Jourgensen had met her while on tour in Canada and, impressed with her entertaining personality, called her on the phone several times while working on the album. Jourgensen recalled the conversations as her "babbling about dreams and angels '' while high. Lukacin herself said "Dream Song '' was a poem she wrote after having a dream about an angel. She did not know she was being recorded but enjoyed the song.
The title of the album is a reference to the United Negro College Fund 's slogan, "the mind is a terrible thing to waste ''. Jourgensen was further inspired by the "Just Say No '' anti-drug campaign. Rieflin said the other band members groaned when they heard it, but Jourgensen had final say in naming. According to Connelly, the album art was inspired by a television program Jourgensen saw where migraine sufferers painted images of their pain. The image itself was a picture of an x-ray from a studio receptionist 's mother, who had been in a car accident and received a metal plate. Jourgensen said he wanted that as the album artwork as soon as he found out about it, but the other band members disliked it. Barker praised the concept but said the execution was poor.
The album peaked at # 163 in the US and was certified Gold by the RIAA for sales in excess of 500,000 units in December 1995. "Burning Inside '' reached # 23 on Billboard 's Hot Modern Rock Tracks.
Music critic Tom Moon included the album in his book 1,000 Recordings to Hear Before You Die, calling it "one of the great works of industrial music '' and an influential album that is "way ahead of its time ''. In rating it 4.5 / 5 stars, AllMusic reviewer Marc van der Pol described it as a "wonderful album '' that avoids the clichés common to industrial rock. Bill Wyman of the Chicago Tribune rated it 3 / 4 stars and called it Ministry 's "best - sounding, most assured and consistent album ''. The A.V. Club, though praising the album 's other tracks, listed "Test '' in their "24 songs that almost derail great albums '', calling it "a novelty genre exercise from which Mind barely recovers ''. The A.V. Club also wrote about "So What '', including it on a list of the best songs written from the point of view of a crazy person. They called it "the most obvious and best - executed '' of Ministry 's songs about violent psychosis.
|
an example of what a community ecologist would study is | Community (ecology) - wikipedia
In ecology, a community is a group or association of populations of two or more different species occupying the same geographical area at a particular time; it is also known as a biocoenosis. The term community has a variety of uses. In its simplest form it refers to groups of organisms in a specific place or time, for example, "the fish community of Lake Ontario before industrialization ''.
Community ecology or synecology is the study of the interactions between species in communities on many spatial and temporal scales, including the distribution, structure, abundance, demography, and interactions between coexisting populations. The primary focus of community ecology is on the interactions between populations as determined by specific genotypic and phenotypic characteristics. Community ecology has its origin in European plant sociology. Modern community ecology examines patterns such as variation in species richness, equitability, productivity and food web structure (see community structure); it also examines processes such as predator -- prey population dynamics, succession, and community assembly.
On a deeper level the meaning and value of the community concept in ecology is up for debate. Communities have traditionally been understood on a fine scale in terms of local processes constructing (or destructing) an assemblage of species, such as the way climate change is likely to affect the make - up of grass communities. Recently this local community focus has been criticised. Robert Ricklefs has argued that it is more useful to think of communities on a regional scale, drawing on evolutionary taxonomy and biogeography, where some species or clades evolve and others go extinct.
Clements developed a holistic (or organismic) concept of community, treating it as a superorganism or discrete unit with sharp boundaries.
Gleason developed the individualistic (also known as the open or continuum) concept of community, in which the abundance of a population of a species changes gradually along complex environmental gradients, individually rather than in step with other populations. In that view, an individualistic distribution of species can give rise to discrete communities as well as to a continuum. Niches would not overlap.
In the neutral theory view of the community (or metacommunity), popularized by Hubbell, species are functionally equivalent, and the abundance of a population of a species changes by stochastic demographic processes (i.e., random births and deaths). Each population would have the same adaptive value (competitive and dispersal abilities), and local and regional composition would represent a balance between speciation or dispersal (which increase diversity), and random extinctions (which decrease diversity).
Species interact in various ways: competition, predation, parasitism, mutualism, commensalism, etc. The organization of a biological community with respect to ecological interactions is referred to as community structure.
Species can compete with each other for finite resources. It is considered to be an important limiting factor of population size, biomass and species richness. Many types of competition have been described, but proving the existence of these interactions is a matter of debate. Direct competition has been observed between individuals, populations and species, but there is little evidence that competition has been the driving force in the evolution of large groups.
Predation is hunting another species for food. This is a positive -- negative (+ −) interaction in that the predator species benefits while the prey species is harmed. Some predators kill their prey before eating them (e.g., a hawk killing a mouse). Other predators are parasites that feed on prey while alive (e.g., a vampire bat feeding on a cow). Another example is the feeding on plants of herbivores (e.g., a cow grazing). Predation may affect the population size of predators and prey and the number of species coexisting in a community.
Mutualism is an interaction between species in which both benefit. Examples include Rhizobium bacteria growing in nodules on the roots of legumes and insects pollinating the flowers of angiosperms.
Commensalism is a type of relationship among organisms in which one organism benefits while the other organism is neither benefited nor harmed. The organism that benefited is called the commensal while the other organism that is neither benefited nor harmed is called the host. For example, an epiphytic orchid attached to the tree for support benefits the orchid but neither harms nor benefits the tree. The opposite of commensalism is amensalism, an interspecific relationship in which a product of one organism has a negative effect on another organism.
A major research theme in community ecology has been whether ecological communities have a (nonrandom) structure and, if so, how to characterise this structure. Forms of community structure include aggregation and nestedness.
|
cisg applies only to sale of goods and commercial parties | United Nations Convention on Contracts for the International Sale of Goods - wikipedia
The United Nations Convention on Contracts for the International Sale of Goods (CISG; the Vienna Convention) is a treaty that is a uniform international sales law. It has been ratified by 89 states that account for a significant proportion of world trade, making it one of the most successful international uniform laws. The State of Palestine is the most recent state to ratify the Convention, having acceded to it on 29 December 2017.
The CISG was developed by the United Nations Commission on International Trade Law (UNCITRAL), and was signed in Vienna in 1980. The CISG is sometimes referred to as the Vienna Convention (but is not to be confused with other treaties signed in Vienna). It came into force as a multilateral treaty on 1 January 1988, after being ratified by 11 countries.
The CISG allows exporters to avoid choice of law issues, as the CISG offers "accepted substantive rules on which contracting parties, courts, and arbitrators may rely ''. Unless excluded by the express terms of a contract, the CISG is deemed to be incorporated into (and supplant) any otherwise applicable domestic law (s) with respect to a transaction in goods between parties from different Contracting States.
The CISG has been regarded as a success for the UNCITRAL, as the Convention has been accepted by states from "every geographical region, every stage of economic development and every major legal, social and economic system ''. Countries that have ratified the CISG are referred to within the treaty as "Contracting States ''. Of the uniform law conventions, the CISG has been described as having "the greatest influence on the law of worldwide trans - border commerce ''. It has been described as a great legislative achievement, and the "most successful international document so far '' in unified international sales law, in part due to its flexibility in allowing Contracting States the option of taking exception to certain specified articles. This flexibility was instrumental in convincing states with disparate legal traditions to subscribe to an otherwise uniform code. While certain State parties to the CISG have lodged declarations, the vast majority -- 68 of the current 89 Contracting States -- have chosen to accede to the Convention without any declaration.
The CISG is the basis of the annual Willem C. Vis International Commercial Arbitration Moot held in Vienna in the week before Easter (and now also in Hong Kong). Teams from law schools around the world take part. The Moot is organised by Pace University, which keeps a definitive source of information on the CISG.
As of 2018, the following 89 states have ratified, acceded to, approved, accepted, or succeeded to the Convention:
The Convention has been signed, but not ratified, by Ghana and Venezuela.
The CISG allows contracting States to lodge reservations (called "declarations '' in the CISG 's own terminology). About one fourth of the CISG contracting States have done so.
Declarations may refer to:
Some existing declarations have been reviewed and withdrawn by States. Nordic countries (i.e., members of the Nordic Council) (except Iceland) had originally opted out of the application of Part II under article 92 CISG. However, they recently withdrew their Article 92 CISG reservations and became a party to Part II CISG, except for trade among themselves, to which the CISG is not applied as a whole due to a declaration lodged under article 94.
Likewise, China, Latvia, Lithuania and Hungary withdrew their written form declaration.
Some countries have expanded rather than restricted CISG application by removing one of the cumulative conditions for application within the CISG. Thus, Israeli law stipulates that the CISG will apply equally to a party whose place of business is in a State that is not a Contracting State. This is in conformity with Article 97 CISG as it is not a "reservation ''; it widens the scope of the CISG 's application, rather than limits it.
Hong Kong, India, South Africa, Taiwan, and the United Kingdom are the only major trading countries that have not yet ratified the CISG.
The absence of the United Kingdom, a leading jurisdiction for the choice of law in international commercial contracts, has been attributed variously to: the government not viewing its ratification as a legislative priority, a lack of interest from business in supporting ratification, opposition from a number of large and influential organisations, a lack of public service resources, and a danger that London would lose its edge in international arbitration and litigation.
Taiwan currently may not become a party to treaties deposited with the Secretary - General of the United Nations.
Rwanda has concluded the domestic procedure of consideration of the CISG and adopted laws authorising its adoption; the CISG will enter into force for it once the instrument of accession is deposited with the Secretary - General of the United Nations. A number of other countries, including Guatemala, have made progress in the adoption process.
The CISG is written using "plain language that refers to things and events for which there are words of common content ''. This was a conscious intent to allow national legal systems to be transcended through the use of a common legal lingua franca, and it avoids the "words associated with specific domestic legal nuances ''. Further, it facilitated translation into the UN 's six official languages. As is customary in UN conventions, all six languages are equally authentic.
The CISG is divided into four parts:
The CISG applies to contracts of the sale of goods between parties whose places of business are in different States, when the States are Contracting States (Article 1 (1) (a)). Given the significant number of Contracting States, this is the usual path to the CISG 's applicability.
The CISG also applies if the parties are situated in different countries (which need not be Contracting States) and the conflict of law rules lead to the application of the law of a Contracting State. For example, a contract between a Japanese trader and a Brazilian trader may contain a clause that arbitration will be in Sydney under Australian law with the consequence that the CISG would apply. A number of States have declared they will not be bound by this condition.
The CISG is intended to apply to commercial goods and products only. With some limited exceptions, the CISG does not apply to personal, family, or household goods, nor does it apply to auctions, ships, aircraft, or intangibles and services. The position of computer software is ' controversial ' and will depend upon various conditions and situations.
Importantly, parties to a contract may exclude or vary the application of the CISG.
Interpretation of the CISG is to take account of the ' international character ' of the Convention, the need for uniform application, and the need for good faith in international trade. Disputes over interpretation of the CISG are to be resolved by applying the ' general principles ' of the CISG, or where there are no such principles but the matters are governed by the CISG (a gap praeter legem) by applying the rules of private international law.
A key point of controversy was whether or not a contract requires a written memorial to be binding. The CISG allows for a sale to be oral or unsigned, but in some countries, contracts are not valid unless written. In many nations, however, oral contracts are accepted, and those States had no objection to signing, so States with a strict written requirement exercised their ability to exclude those articles relating to oral contracts, enabling them to sign as well.
The CISG does not, by its own terms, cover every issue that can arise from an international sale. Such gaps must be filled by the applicable national law, with due consideration given to the conflict of law rules applicable at the place of jurisdiction.
An offer to contract must be addressed to a person, be sufficiently definite -- that is, describe the goods, quantity, and price -- and indicate an intention for the offeror to be bound on acceptance. The CISG does not appear to recognise common law unilateral contracts but, subject to clear indication by the offeror, treats any proposal not addressed to a specific person as only an invitation to make an offer. Further, where there is no explicit price or procedure to implicitly determine price, then the parties are assumed to have agreed upon a price based upon that ' generally charged at the time of the conclusion of the contract for such goods sold under comparable circumstances '.
Generally, an offer may be revoked provided the withdrawal reaches the offeree before or at the same time as the offer, or before the offeree has sent an acceptance. Some offers may not be revoked; for example when the offeree reasonably relied upon the offer as being irrevocable. The CISG requires a positive act to indicate acceptance; silence or inactivity are not an acceptance.
The CISG attempts to resolve the common situation where an offeree 's reply to an offer accepts the original offer, but attempts to change the conditions. The CISG says that any change to the original conditions is a rejection of the offer -- it is a counter-offer -- unless the modified terms do not materially alter the terms of the offer. Changes to price, payment, quality, quantity, delivery, liability of the parties, and arbitration conditions may all materially alter the terms of the offer.
Articles 25 -- 88; sale of goods, obligations of the seller, obligations of the buyer, passing of risk, obligations common to both buyer and seller.
The CISG defines the duty of the seller, ' stating the obvious ', as the seller must deliver the goods, hand over any documents relating to them, and transfer the property in the goods, as required by the contract. Similarly, the duty of the buyer is to take all steps ' which could reasonably be expected ' to take delivery of the goods, and to pay for them.
Generally, the goods must be of the quality, quantity, and description required by the contract, be suitably packaged and fit for purpose. The seller is obliged to deliver goods that are not subject to claims from a third party for infringement of industrial or intellectual property rights in the State where the goods are to be sold. The buyer is obliged to promptly examine the goods and, subject to some qualifications, must advise the seller of any lack of conformity within ' a reasonable time ' and no later than within two years of receipt.
The CISG describes when the risk passes from the seller to the buyer, but it has been observed that, in practice, most contracts define the seller 's delivery obligations quite precisely by adopting an established shipment term, such as FOB and CIF.
Remedies of the buyer and seller depend upon the character of a breach of the contract. If the breach is fundamental, then the other party is substantially deprived of what it expected to receive under the contract. Provided that an objective test shows that the breach could not have been foreseen, then the contract may be avoided and the aggrieved party may claim damages. Where part performance of a contract has occurred, then the performing party may recover any payment made or good supplied; this contrasts with the common law where there is generally no right to recover a good supplied unless title has been retained or damages are inadequate, only a right to claim the value of the good.
If the breach is not fundamental, then the contract is not avoided and remedies may be sought including claiming damages, specific performance, and adjustment of price. Damages that may be awarded conform to the common law rules in Hadley v Baxendale but it has been argued the test of foreseeability is substantially broader and consequently more generous to the aggrieved party.
The CISG excuses a party from liability to a claim of damages where a failure to perform is attributable to an impediment beyond the party 's, or a third party sub-contractor's, control that could not have been reasonably expected. Such an extraneous event might elsewhere be referred to as force majeure, and frustration of the contract.
Where a seller has to refund the price paid, then the seller must also pay interest to the buyer from the date of payment. It has been said the interest rate is based on rates current in the seller 's State ' (s) ince the obligation to pay interest partakes of the seller 's obligation to make restitution and not of the buyer 's right to claim damages ', though this has been debated. In a mirror of the seller 's obligations, where a buyer has to return goods the buyer is accountable for any benefits received.
Articles 89 -- 101 (final provisions) include how and when the Convention comes into force, permitted reservations and declarations, and the application of the Convention to international sales where both States concerned have the same or similar law on the subject.
The Part IV Articles, along with the Preamble, are sometimes characterized as being addressed ' primarily to States ', not to business people attempting to use the Convention for international trade. They may, however, have a significant impact upon the CISG 's practical applicability, thus requiring careful scrutiny when determining each particular case.
It has been remarked that the CISG expresses a practice - based, flexible and "relational '' character. It places no or very few restrictions of form on formation or adjustment of contracts; in case of non-performance (or over-performance) it offers a wide array of interim measures before the aggrieved party must resort to avoiding the contract (e.g. unilateral pro-rated price reduction (Art. 50); suspension of performance (art. 71); the availability of cure as a matter of right of the defaulting party (subject to some reservations, Art. 48); choice between expectation and market - based damages, etc.); additionally, the CISG does not operate under a "perfect tender '' rule and its criteria for conformity are functional rather than formal (art. 35). Additionally, its rules of interpretation rely heavily on custom as well as on manifest acts rather than on intent (Art. 8). The CISG does include a so - called Nachlass rule, but its scope is relatively limited. On the other hand, its good faith obligation may seem relatively limited and in any case obscure (Art. 7). All communications require "reasonable time. ''
Although the Convention has been accepted by a large number of States, it has been the subject of some criticism. For example, the drafting nations have been accused of being incapable of agreement on a code that "concisely and clearly states universal principles of sales law '', and through the Convention 's invitation to interpret taking regard of the Convention 's "international character '' gives judges the opportunity to develop "diverse meaning ''. Put more bluntly, the CISG has been described as "a variety of vague standards and compromises that appear inconsistent with commercial interests ''.
A contrary view is that the CISG is "written in plain business language, '' which allows judges the opportunity to make the Convention workable in a range of sales situations. It has been said "the drafting style is lucid and the wording simple and uncluttered by complicated subordinating clauses '', and the "general sense '' can be grasped on the first reading without the need to be a sales expert.
Uniform application of the CISG is problematic because of the reluctance of courts to use "solutions adopted on the same point by courts in other countries '', resulting in inconsistent decisions. For example, in a case involving the export to Germany by a Swiss company of New Zealand mussels with a level of cadmium in excess of German standards, the German Supreme Court held that it is not the duty of the seller to ensure that goods meet German public health regulations. This contrasted with a later decision in which an Italian cheese exporter failed to meet French packaging regulations, and the French court decided it was the duty of the seller to ensure compliance with French regulations.
These two cases were held by one commentator to be an example of contradictory jurisprudence. Another commentator, however, saw the cases as not contradictory, as the German case could be distinguished on a number of points. The French court chose not to consider the German court 's decision, in its published decision. (Precedent, foreign or not, is not legally binding in civil law.)
CISG advocates are also concerned that the natural inclination of judges is to interpret the CISG using the methods familiar to them from their own State rather than attempting to apply the general principles of the Convention or the rules of private international law. This is despite the comment from one highly respected academic that ' it should be a rare, or non-existent, case where there are no relevant general principles to which a court might have recourse ' under the CISG. This concern was supported by research of the CISG Advisory Council which said, in the context of the interpretation of Articles 38 and 39, there is a tendency for courts to interpret the articles in the light of their own State 's law, and some States have ' struggled to apply (the articles) appropriately '. In one of a number of criticisms of Canadian court decisions to use local legislation to interpret the CISG, one commentator said the CISG was designed to ' replace existing domestic laws and caselaw, ' and attempts to resolve gaps should not be by ' reference to relevant provisions of (local) sales law '.
Critics of the multiple language versions of the CISG assert it is inevitable the versions will not be totally consistent because of translation errors and the untranslatability of ' subtle nuances ' of language. This argument, though with some validity, would not seem peculiar to the CISG but common to any and all treaties that exist in multiple languages. The reductio ad absurdum would seem to be that all international treaties should exist in only a single language, something which is clearly neither practical nor desirable.
Other criticisms of the Convention are that it is incomplete, there is no mechanism for updating the provisions, and no international panel to resolve interpretation issues. For example, the CISG does not govern the validity of the contract, nor does it consider electronic contracts. However, legal matters relating to the use of electronic communications in relation to contracts for international sale of goods have been eventually dealt with in a comprehensive manner in the United Nations Convention on the Use of Electronic Communications in International Contracts. Moreover, it is not to be forgotten that the CISG is complemented by the Convention on the Limitation Period in the International Sale of Goods with respect to the limitation of actions due to passage of time.
Despite the critics, a supporter has said ' (t) he fact that the costly ignorance of the early days, when many lawyers ignored the CISG entirely, has been replaced by too much enthusiasm that leads to... oversimplification, can not be blamed on the CISG '.
Greater acceptance of the CISG will come from three directions. Firstly, it is likely that within the global legal profession, as the numbers of new lawyers educated in the CISG increases, the existing Contracting States will embrace the CISG, appropriately interpret the articles, and demonstrate a greater willingness to accept precedents from other Contracting States.
Secondly, business people will increasingly pressure both lawyers and governments to make sales of goods disputes less expensive, and reduce the risk of being forced to use a legal system that may be completely alien to their own. Both of these objectives can be achieved through use of the CISG.
Finally, UNCITRAL will arguably need to develop a mechanism to further develop the Convention and to resolve conflicting interpretation issues. This will make it more attractive to both business people and potential Contracting States.
Depending on the country, the CISG can represent a small or significant departure from local legislation relating to the sale of goods, and in doing so can provide important benefits to companies from one contracting state that import goods into other states that have ratified the CISG.
In the U.S., all 50 states have, to varying degrees, adopted common legislation referred to as the Uniform Commercial Code ("UCC ''). UCC Articles 1 (General Provisions) and 2 (Sales) are generally similar to the CISG. However, the UCC differs from the CISG in some respects, such as the following areas that tend to reflect more general aspects of the U.S. legal system:
Nevertheless, because the U.S. has ratified the CISG, the CISG in the U.S. has the force of federal law and supersedes UCC - based state law under the Supremacy Clause. Among the U.S. reservations to the CISG is the provision that the CISG will apply only as to contracts with parties located in other CISG Contracting States, a reservation permitted by the CISG in Article 95. Therefore, in international contracts for the sale of goods between a U.S. entity and an entity of a Contracting State, the CISG will apply unless the contract 's choice of law clause specifically excludes CISG terms. Conversely, in "international '' contracts for the sale of goods between a U.S. entity and an entity of a non-Contracting State, to be adjudicated by a U.S. court, the CISG will not apply, and the contract will be governed by the domestic law applicable according to private international law rules.
|
star wars episode v the empire strikes back soundtrack | The Empire Strikes Back (soundtrack) - wikipedia
The score from The Empire Strikes Back, composed by John Williams, was recorded in eighteen sessions at Anvil Studios over three days in December 1979 and a further six days in January 1980 with Williams conducting the London Symphony Orchestra. Between Star Wars and The Empire Strikes Back, Williams had also worked with the London Symphony Orchestra for the scores to the films The Fury, Superman and Dracula. The score earned another Academy Award nomination for Williams. Again, the score was orchestrated by Herbert W. Spencer, recorded by engineer Eric Tomlinson and edited by Kenneth Wannberg with supervision by Lionel Newman. John Williams himself took over duties as record producer from Star Wars creator George Lucas.
The soundtrack was first released in the United States as a 75 - minute double LP five days before the film 's premiere but the first Compact Disc release ran only half the length of the 2 - LP set. Re-recordings of the score even included music that was not on the original CD soundtrack. A remastered version of the soundtrack was released by Walt Disney Records on May 4, 2018.
In 1980, the disco label RSO Records released the film 's original soundtrack in a double - album, with two long - playing (LP) records. Combined, the two records featured seventy - five minutes of film music. This double LP package also included a booklet presentation with pictures of the main characters and action sequences from the film. Featured at the booklet 's end was an interview with John Williams about the music and the new themes, such as "The Imperial March (Darth Vader 's Theme) '' and "Yoda 's Theme ''. It also included a brief explanation of each track. The front cover artwork featured Darth Vader 's mask against the backdrop of outer space; and the back cover featured the famous "Gone with the Wind '' version of the poster art. As a side note, this package marked the final time a Star Wars soundtrack was issued as a double LP set (Episode VI, the final film to have an LP soundtrack released, had only a single disc, also released by RSO Records). A double - cassette edition was also released.
In the U.K., a single vinyl album and cassette were released in 1980 by RSO Records. This comprised only ten tracks, which were also re-arranged differently. For instance, the first track on the U.K. release is "The Imperial March '' instead of the "Star Wars Main Theme ''. This track listing would be used for the album 's first international CD release in 1985. Also unlike the U.S. version, this release did not have a booklet but the information (and some photographs) were replicated on the inner sleeve.
In 1985, the first Compact Disc (CD) release of the soundtrack was issued by Polydor Records, which had by that time absorbed RSO Records and its entire music catalog. As with the album 's original U.K. vinyl and cassette release, this CD release reduced the music content from the seventy - five minutes featured in the 1980 U.S. double - album down to forty - two minutes.
In 1993, 20th Century Fox Film Scores released a special four - CD box set: Star Wars Trilogy: The Original Soundtrack Anthology. This anthology included the soundtracks to all three of the original Star Wars films in separate discs. The disc dedicated to The Empire Strikes Back restored almost all of the original seventy - five minutes from the 1980 LP version and included new music cues never released before for a total of nineteen tracks. On the fourth bonus disc, five additional tracks from Empire were included in a compilation of additional cues from the other two films. This CD release also marked the first time that the famous "20th Century Fox Fanfare '' composed by Alfred Newman in 1954 was added to the track listing, preceding the "Star Wars Main Theme ''.
In 1997, RCA Victor released a definitive two - disc set coinciding with the Special Edition releases of the original trilogy 's films. This original limited - edition set featured a thirty - two page black booklet that was encased inside a protective outer slipcase. The covers of the booklet and the slipcase had the Star Wars Trilogy Special Edition poster art. This booklet was very detailed, providing extensive notes on each music cue and pictures of the main characters and action sequences from the film. The two discs were placed in sleeves that were on the booklet 's inside front and inside back covers. Each disc had a glittery laser - etched holographic logo of the Empire. The musical content featured the complete film score for the first time. It had all of the previously released tracks (restoring the Mynock Cave music which was left off the 1993 release), included extended versions of five of those tracks with previously unreleased material, and six brand new tracks of never before released music for a total of one hundred twenty - four minutes. All the tracks were digitally remastered for superior clarity of sound. They were also re-arranged and re-titled from the previous releases to follow the film 's story in chronological order. RCA Victor re-packaged the Special Edition set later in 1997, offering it in slimline jewel case packaging as an unlimited edition, but without the original "black booklet '' version 's stunning presentation and packaging.
In 2004, Sony Classical acquired the rights to the classic trilogy scores since it already had the rights to release the second trilogy soundtracks (The Phantom Menace and Attack of the Clones). And so, in 2004, Sony Classical re-pressed the 1997 RCA Victor release of the Special Edition Star Wars trilogy, including The Empire Strikes Back. The set was released in a less - than - spectacular package with the new art work mirroring the film 's first DVD release. Despite the Sony digital remastering, which minimally improved the sound heard only on high - end stereos, this 2004 release is essentially the 1997 RCA Victor release.
In 2016, Sony Classical released a remastered version of the original 1980 release as a two - disc LP, copying all aspects of the original RSO release, down to the labeling.
On May 4, 2018, Walt Disney Records released a newly - remastered edition of the original 1980 album program on CD, digital download, and streaming services. This remaster was newly assembled from the highest - quality tapes available, rather than sourced from the existing 1980 album masters. This release marks the first release on CD of the complete 1980 soundtrack album.
Total Time: 74: 34
Total Time: 41: 23
In 1993, 20th Century Fox Film Scores released a four - CD box set containing music from the original Star Wars trilogy. Disc two in the set was devoted to The Empire Strikes Back, with further tracks on disc four.
Note: Parts of tracks six and seventeen on this particular set have their left & right channels reversed.
The first part of track twenty - one, "Ewok Celebration (Film Version) '', is from Return of the Jedi.
In preparation for the 20th anniversary Special Edition releases of the original trilogy 's films, 20th Century Fox spent four months, from April to July 1996, transferring, cleaning and preparing the original soundtracks for special two - disc releases. The original release, by RCA Victor in 1997, consisted of limited - edition books with laser etched CDs inside the front and back covers with each book. In the case of The Empire Strikes Back, the discs are etched with the logo for the Empire. The discs were given an unlimited release in a two - disc jewel case, also by RCA Victor later that year. They were again re-released in 2004 by Sony Music, with new artwork paralleling the original trilogy 's first DVD release.
|
what is the voltage range in which alternating current equipment can be operated | Alternating current - wikipedia
Alternating current (AC) is an electric current which periodically reverses direction, in contrast to direct current (DC) which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. A common source of DC power is a battery cell in a flashlight. The abbreviations AC and DC are often used to mean simply alternating and direct, as when they modify current or voltage.
The usual waveform of alternating current in most electric power circuits is a sine wave. In certain applications, different waveforms are used, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information such as sound (audio) or images (video) sometimes carried by modulation of an AC carrier signal. These currents typically alternate at higher frequencies than those used in power transmission.
Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer. This allows the power to be transmitted through power lines efficiently at high voltage, which reduces the energy lost as heat due to resistance of the wire, and transformed to a lower, safer, voltage for use. Use of a higher voltage leads to significantly more efficient transmission of power. The power losses (P_w) in the wire are a product of the square of the current (I) and the resistance (R) of the wire, described by the formula P_w = I²R.
This means that when transmitting a fixed power on a given wire, if the current is halved (i.e. the voltage is doubled), the power loss will be four times less.
The power transmitted is equal to the product of the current and the voltage (assuming no phase difference); that is, P_t = IV.
Consequently, power transmitted at a higher voltage requires less loss - producing current than for the same power at a lower voltage. Power is often transmitted at hundreds of kilovolts, and transformed to 100 V -- 240 V for domestic use.
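The effect of transmission voltage on line losses can be illustrated with a short calculation. The Python sketch below is not part of the original article; it simply applies the P_w = I²R relation for a fixed delivered power at two voltages, and the 1 MW load and 10 Ω line resistance are arbitrary example values chosen only for illustration.

```python
# Minimal sketch: I^2 * R line loss for the same delivered power at two voltages.
# The 1 MW load and 10 ohm line resistance are arbitrary illustrative values.

def line_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Return the I^2 * R loss in a line delivering power_w at voltage_v."""
    current = power_w / voltage_v          # P = V * I  =>  I = P / V
    return current ** 2 * resistance_ohm   # P_w = I^2 * R

POWER = 1_000_000.0   # 1 MW delivered to the load
RESISTANCE = 10.0     # total line resistance in ohms

for volts in (10_000.0, 20_000.0):
    loss_kw = line_loss(POWER, volts, RESISTANCE) / 1000
    print(f"{volts / 1000:.0f} kV transmission: loss = {loss_kw:.1f} kW")

# Output: 10 kV -> 100.0 kW of loss, 20 kV -> 25.0 kW of loss;
# doubling the voltage halves the current and quarters the resistive loss.
```

Doubling the transmission voltage halves the current and cuts the resistive loss by a factor of four, which is exactly the trade - off described in the paragraphs above.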
High voltages have disadvantages, such as the increased insulation required, and generally increased difficulty in their safe handling. In a power plant, energy is generated at a convenient voltage for the design of a generator, and then stepped up to a high voltage for transmission. Near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate. Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High - voltage direct - current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step down the voltage of DC for end user applications such as lighting incandescent bulbs.
Three - phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120 ° (one - third of a complete 360 ° phase) to each other. Three current waveforms are produced that are equal in magnitude and 120 ° out of phase to each other. If coils are added opposite to these (60 ° spacing), they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher "pole orders '' are commonly used. For example, a 12 - pole machine would have 36 coils (10 ° spacing). The advantage is that lower rotational speeds can be used to generate the same frequency. For example, a 2 - pole machine running at 3600 rpm and a 12 - pole machine running at 600 rpm produce the same frequency; the lower speed is preferable for larger machines. If the load on a three - phase system is balanced equally among the phases, no current flows through the neutral point (see the numerical check below). Even in the worst - case unbalanced (linear) load, the neutral current will not exceed the highest of the phase currents. Non-linear loads (e.g. the switch - mode power supplies widely used) may require an oversized neutral bus and neutral conductor in the upstream distribution panel to handle harmonics. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors.
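As a rough numerical check of the balanced - load claim above, the following sketch (an illustration only; the 10 A amplitude and 50 Hz frequency are arbitrary assumed values) sums three sinusoidal phase currents spaced 120° apart and shows that the neutral current is zero at every instant.

```python
# Minimal sketch: three balanced phase currents, 120 degrees apart, sum to zero,
# so a balanced three-phase load returns no current through the neutral.
import math

AMPLITUDE = 10.0   # arbitrary peak phase current in amperes
FREQUENCY = 50.0   # arbitrary supply frequency in hertz

def phase_current(t: float, offset_deg: float) -> float:
    """Instantaneous current of one phase, offset by offset_deg degrees."""
    return AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t + math.radians(offset_deg))

for t in (0.0, 0.001, 0.005, 0.01):
    neutral = sum(phase_current(t, offset) for offset in (0.0, -120.0, -240.0))
    print(f"t = {t * 1000:4.1f} ms   neutral current = {neutral:+.2e} A")

# Every printed value is zero apart from floating-point rounding error.
```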
For three - phase at utilization voltages a four - wire system is often used. When stepping down three - phase, a transformer with a Delta (3 - wire) primary and a Star (4 - wire, center - earthed) secondary is often used so there is no need for a neutral on the supply side. For smaller customers (just how small varies by country and age of the installation) only a single phase and neutral, or two phases and neutral, are taken to the property. For larger installations all three phases and neutral are taken to the main distribution panel. From the three - phase main panel, both single and three - phase circuits may lead off. The three - wire single - phase system, with a single center - tapped transformer giving two live conductors, is a common distribution scheme for residential and small commercial buildings in North America. This arrangement is sometimes incorrectly referred to as "two phase ''. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local center - tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools.
A third wire, called the bond (or earth) wire, is often connected between non-current - carrying metal enclosures and earth ground. This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. Bonding all non-current - carrying metal parts into one complete system ensures there is always a low electrical impedance path to ground sufficient to carry any fault current for as long as it takes for the system to clear the fault. This low impedance path allows the maximum amount of fault current, causing the overcurrent protection device (breakers, fuses) to trip or burn out as quickly as possible, bringing the electrical system to a safe state. All bond wires are bonded to ground at the main service panel, as is the neutral / identified conductor if present.
The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably electric power transmission in Japan. A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and commutator - type traction motors for applications such as railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. The use of lower frequencies also provided the advantage of lower impedance losses, which are proportional to frequency. The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2 / 3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland. Off - shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds. Computer mainframe systems were often powered by 400 Hz or 415 Hz for benefits of ripple reduction while using smaller internal AC to DC conversion units. In any case, the input to the motor - generator (M - G) set used to produce such frequencies is the local customary voltage and frequency, variously 200 V (Japan), 208 V, 240 V (North America), 380 V, 400 V or 415 V (Europe), and variously 50 Hz or 60 Hz.
A direct current flows uniformly throughout the cross-section of a uniform wire. An alternating current of any frequency is forced away from the wire 's center, toward its outer surface. This is because the acceleration of an electric charge in an alternating current produces waves of electromagnetic radiation that cancel the propagation of electricity toward the center of materials with high conductivity. This phenomenon is called skin effect. At very high frequencies the current no longer flows in the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density is reduced by 63 %. Even at relatively low frequencies used for power transmission (50 Hz -- 60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high current conductors are usually hollow to reduce their mass and cost. Since the current tends to flow in the periphery of conductors, the effective cross-section of the conductor is reduced. This increases the effective AC resistance of the conductor, since resistance is inversely proportional to the cross-sectional area. The AC resistance often is many times higher than the DC resistance, causing a much higher energy loss due to ohmic heating (also called I²R loss).
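The 60 Hz skin - depth figure quoted above can be reproduced from the standard formula δ = √(2ρ / (ωμ)). The sketch below is only a back - of - the - envelope check, not part of the article; the copper resistivity and the use of the vacuum permeability are assumed textbook values, and small differences in the assumed resistivity account for the slight difference from the 8.57 mm figure in the text.

```python
# Minimal sketch: skin depth delta = sqrt(2 * rho / (omega * mu)) for copper.
# The resistivity and permeability below are assumed textbook values.
import math

RHO_COPPER = 1.72e-8       # ohm-metres, approximate resistivity of copper at 20 C
MU = 4 * math.pi * 1e-7    # H/m, vacuum permeability (copper is effectively non-magnetic)

def skin_depth(frequency_hz: float) -> float:
    """Return the skin depth in metres at the given frequency."""
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2 * RHO_COPPER / (omega * MU))

print(f"Skin depth at 60 Hz:  {skin_depth(60.0) * 1000:.2f} mm")      # about 8.5 mm
print(f"Skin depth at 10 kHz: {skin_depth(10_000.0) * 1000:.3f} mm")  # well under 1 mm
```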
For low to medium frequencies, conductors can be divided into stranded wires, each insulated from one another, and the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate skin effect by forcing more equal current throughout the total cross section of the stranded conductors. Litz wire is used for making high - Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch - mode power supplies and radio frequency transformers.
As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation.
At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signalling system, so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively cancelled by radiation from the other wire, resulting in almost no radiation loss.
Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the surface of the inner conductor is equal and opposite to the current flowing on the inner surface of the outer tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the electrical resistance of the central conductor) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables with an air rather than solid dielectric are preferred as they transmit power with lower loss.
Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that the waveguide has no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides can not deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide. Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are only feasible at microwave frequencies. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide cause dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large.
At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used.
Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the following equation: v (t) = V_peak · sin (ωt), where V_peak is the peak voltage (the amplitude), ω is the angular frequency (equal to 2π times the frequency f), and t is the time.
The peak - to - peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of sin (x) is + 1 and the minimum value is − 1, an AC voltage swings between + V_peak and − V_peak. The peak - to - peak voltage, usually written as V_pp or V_P−P, is therefore V_peak − (− V_peak) = 2 V_peak.
The relationship between voltage and the power delivered is p (t) = v (t) ² / R, where R represents a load resistance.
Rather than using instantaneous power, p (t), it is more practical to use a time - averaged power (where the averaging is performed over any integer number of cycles). Therefore, AC voltage is often expressed as a root mean square (RMS) value, written as V_rms, because the time - averaged power delivered is P = V_rms² / R.
Below, a sinusoidal AC waveform with no DC component is assumed; for such a waveform, V_rms = V_peak / √2.
To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time - averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to V_peak = √2 × V_rms.
For 230 V AC, the peak voltage V_peak is therefore 230 V × √2, which is about 325 V. During the course of one cycle the voltage rises from zero to 325 V, falls through zero to − 325 V, and returns to zero.
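These relationships are easy to verify numerically. The sketch below is an illustration only, not part of the article: it converts the 230 V RMS mains figure to peak and peak - to - peak values, then checks the RMS definition against a sampled sine wave.

```python
# Minimal sketch: peak and peak-to-peak values of a 230 V RMS sine wave,
# plus a numerical check of the RMS value on sampled points.
import math

V_RMS = 230.0
v_peak = V_RMS * math.sqrt(2)   # about 325 V
v_pp = 2 * v_peak               # about 650 V peak-to-peak
print(f"V_peak = {v_peak:.1f} V, V_pp = {v_pp:.1f} V")

# The root mean square of one sampled cycle should come back to ~230 V.
N = 10_000
samples = [v_peak * math.sin(2 * math.pi * n / N) for n in range(N)]
rms = math.sqrt(sum(v * v for v in samples) / N)
print(f"RMS of the sampled waveform = {rms:.1f} V")
```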
Alternating current is used to transmit information, as in the cases of telephone and cable television. Information signals are carried over a wide range of AC frequencies. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Cable television and other cable - transmitted information currents may alternate at frequencies of tens to thousands of megahertz. These frequencies are similar to the electromagnetic wave frequencies often used to transmit the same types of information over the air.
The first alternator to produce alternating current was a dynamo electric generator based on Michael Faraday 's principles constructed by the French instrument maker Hippolyte Pixii in 1832. Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current was by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Alternating current technology first developed in Europe due to the work of Guillaume Duchenne (1850s), the Hungarian Ganz Works company (1870s), and, in the 1880s, Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris.
In 1876, Russian engineer Pavel Yablochkov invented a lighting system where sets of induction coils were installed along a high voltage AC line. Instead of changing voltage, the primary windings transferred power to the secondary windings which were connected to one or several ' electric candles ' (arc lamps) of his own design, used to keep the failure of one lamp from disabling the entire circuit. In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria - Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.
Alternating current systems can use transformers to change voltage from low to high level and back, allowing generation and consumption at low voltages but transmission, possibly over great distances, at high voltage, with savings in the cost of conductors and energy losses. A bipolar open - core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They also exhibited the invention in Turin in 1884. However, these early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open - core transformers with a ratio near 1: 1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. The direct current systems did not have these drawbacks, giving them significant advantages over early AC systems.
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz factory, determined that open - core devices were impractical, as they were incapable of reliably regulating voltage. In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either a) wound around an iron wire ring core or b) surrounded by an iron wire core. In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open - core bipolar devices of Gaulard and Gibbs. The Ganz factory in 1884 shipped the world 's first five high - efficiency AC transformers. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120: 72 V, 11.6: 19.4 A, ratio 1.67: 1, one - phase, shell form.
The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1400 V to 2000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed - core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. The other essential milestone was the introduction of ' voltage source, voltage intensive ' (VSVI) systems by the invention of constant voltage generators in 1885. Ottó Bláthy also invented the first AC electricity meter.
AC power systems were developed and adopted rapidly after 1886 due to their ability to distribute electricity efficiently over long distances, overcoming the limitations of the direct current system. In 1886, the ZBD engineers designed the world 's first power station that used AC generators to power a parallel - connected common electrical network, the steam - powered Rome - Cerchi power plant. The reliability of the AC technology received impetus after the Ganz Works electrified a large European metropolis: Rome in 1886.
In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo) including alternators of his own design and transformer designs similar to Gaulard and Gibbs. In 1890 he designed their power station at Deptford and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system.
In the US William Stanley, Jr. designed one of the first practical devices to transfer AC power efficiently between isolated circuits. Using pairs of coils wound on a common iron core, his design, called an induction coil, was an early (1885) transformer. Stanley also worked on engineering and adapting European designs such as the Gaulard and Gibbs transformer for US entrepreneur George Westinghouse who started building AC systems in 1886. The spread of Westinghouse and other AC systems triggered a push back in late 1887 by Edison (a proponent of direct current) who attempted to discredit alternating current as too dangerous in a public campaign called the "War of Currents ''. In 1888 alternating current systems gained further viability with introduction of a functional AC motor, something these systems had lacked up till then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla 's design being licensed by Westinghouse in the US). This design was further developed into the modern practical three - phase form by Mikhail Dolivo - Dobrovolsky and Charles Eugene Lancelot Brown.
The Ames Hydroelectric Generating Plant (spring of 1891) and the original Niagara Falls Adams Power Plant (August 25, 1895) were among the first hydroelectric alternating current power plants. The first long distance transmission of single - phase electricity was from a hydroelectric generating plant in Oregon at Willamette Falls, which in 1890 sent power fourteen miles downriver to downtown Portland for street lighting. In 1891, a second transmission system was installed in Telluride, Colorado. The San Antonio Canyon Generator was the third commercial single - phase hydroelectric AC power plant in the United States to provide long - distance electricity. It was completed on December 31, 1892 by Almarian William Decker to provide power to the city of Pomona, California, which was 14 miles away. In 1893 he designed the first commercial three - phase power plant in the United States to use alternating current, the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California. Decker 's design incorporated 10 kV three - phase transmission and established the standards for the complete system of generation, transmission and motors used today. The Jaruga Hydroelectric Power Plant in Croatia was set in operation on 28 August 1895. The two generators (42 Hz, 550 kW each) and the transformers were produced and installed by the Hungarian company Ganz. The transmission line from the power plant to the City of Šibenik was 11.5 kilometers (7.1 mi) long on wooden towers, and the municipal distribution grid 3000 V / 110 V included six transforming stations. Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others. Calculations in unbalanced three - phase systems were simplified by the symmetrical components methods discussed by Charles Legeyt Fortescue in 1918.
|
1. a normal gene that if mutated can lead to cancer is called a(n) | Oncogene - wikipedia
An oncogene is a gene that has the potential to cause cancer. In tumor cells, they are often mutated and / or expressed at high levels.
Most normal cells will undergo a programmed form of rapid cell death (apoptosis) when critical functions are altered and malfunctioning. Activated oncogenes can cause those cells designated for apoptosis to survive and proliferate instead. Most oncogenes began as proto - oncogenes, normal genes involved in cell growth and proliferation or inhibition of apoptosis. If normal genes promoting cellular growth are up - regulated through mutation (a gain - of - function mutation), they will predispose the cell to cancer and are thus termed oncogenes. Usually multiple oncogenes, along with mutated apoptotic and / or tumor suppressor genes, will all act in concert to cause cancer. Since the 1970s, dozens of oncogenes have been identified in human cancer. Many cancer drugs target the proteins encoded by oncogenes.
The theory of oncogenes was foreshadowed by the German biologist Theodor Boveri in his 1914 book Zur Frage der Entstehung Maligner Tumoren (' The Origin of Malignant Tumours ', Gustav Fischer, Jena, 1914), in which he predicted the existence of oncogenes (Teilungsfoerdernde Chromosomen, ' division - promoting chromosomes ') that become amplified (im permanenten Übergewicht, ' in permanent excess ') during tumour development.
Later, the term "oncogene '' was rediscovered in 1969 by National Cancer Institute scientists George Todaro and Robert Huebner.
The first confirmed oncogene was discovered in 1970 and was termed src (pronounced sarc as in sarcoma). Src was in fact first discovered as an oncogene in a chicken retrovirus. Experiments performed by Dr. G. Steve Martin of the University of California, Berkeley demonstrated that Src was indeed the oncogene of the virus. The nucleotide sequence of v - src was first determined in 1980 by A.P. Czernilofsky et al.
In 1976 Drs. Dominique Stehelin, J. Michael Bishop and Harold E. Varmus of the University of California, San Francisco demonstrated that oncogenes were activated proto - oncogenes, found in many organisms including humans. For this discovery, proving Todaro and Huebner 's "oncogene theory '', Bishop and Varmus were awarded the Nobel Prize in Physiology or Medicine in 1989.
The resultant protein encoded by an oncogene is termed an oncoprotein. Oncogenes play an important role in the regulation or synthesis of proteins linked to tumorigenic cell growth. Some oncoproteins are accepted and used as tumor markers.
A proto - oncogene is a normal gene that could become an oncogene due to mutations or increased expression. Proto - oncogenes code for proteins that help to regulate cell growth and differentiation. Proto - oncogenes are often involved in signal transduction and execution of mitogenic signals, usually through their protein products. Upon acquiring an activating mutation, a proto - oncogene becomes a tumor - inducing agent, an oncogene. Examples of proto - oncogenes include RAS, WNT, MYC, ERK, and TRK. The MYC gene is implicated in Burkitt 's lymphoma, which starts when a chromosomal translocation moves an enhancer sequence within the vicinity of the MYC gene. The MYC gene codes for widely used transcription factors. When the enhancer sequence is wrongly placed, these transcription factors are produced at much higher rates. Another example of an oncogene is the Bcr - Abl gene found on the Philadelphia chromosome, a piece of genetic material seen in Chronic Myelogenous Leukemia caused by the translocation of pieces from chromosomes 9 and 22. Bcr - Abl codes for a tyrosine kinase, which is constitutively active, leading to uncontrolled cell proliferation. (More information about the Philadelphia Chromosome below)
The proto - oncogene can become an oncogene by a relatively small modification of its original function. There are three basic methods of activation:
The expression of oncogenes can be regulated by microRNAs (miRNAs), small RNAs 21 - 25 nucleotides in length that control gene expression by downregulating them. Mutations in such microRNAs (known as oncomirs) can lead to activation of oncogenes. Antisense messenger RNAs could theoretically be used to block the effects of oncogenes.
There are several systems for classifying oncogenes, but there is not yet a widely accepted standard. They are sometimes grouped both spatially (moving from outside the cell inwards) and chronologically (parallelling the "normal '' process of signal transduction). There are several categories that are commonly used:
|
when did cavity wall insulation become part of building regulations | Cavity wall - wikipedia
Cavity walls consist of two ' skins ' separated by a hollow space (cavity). The skins are commonly masonry such as brick or concrete block. Masonry is an absorbent material, and therefore will slowly draw rainwater or even humidity into the wall, from the inside of the house as well as from outside. The cavity serves as a way to drain water back out through weep holes at the base of the wall system or above windows. The weep holes allow wind to create an air stream through the cavity and the stream removes evaporated water from the cavity to the outside. Usually weep holes are created by intentionally leaving several vertical joints (also called open head joints) open, about two meters apart, at the base of the wall in every story. Weep holes are also placed above windows to prevent dry rot of a wooden window frame. A cavity wall with masonry as both inner and outer skins is more commonly referred to as a double wythe masonry wall.
The typical cavity wall method of construction was introduced in Northwest Europe during the 19th century and gained widespread use from the 1920s. In some early examples stones were used to tie the two leaves of the cavity wall together, while in the 20th century metal ties came into use. Initially cavity widths were extremely narrow and were primarily implemented to prevent the passage of moisture into the interior of the building. The widespread introduction of insulation into the cavity began in the 1970s with it becoming compulsory in building regulations during the 1990s.
A cavity wall is composed of two masonry walls separated by a continuous air space in between the outer and the inner wall. The outer wall is the brick wall that faces the outside of the building structure. (1) The inner wall, separated from the outer wall by a continuous air space or the cavity, is the wall on the interior of the building structure. The inner wall may be constructed of masonry units such as concrete block, structural clay, brick or in some other cases reinforced concrete. (1) These two walls have to be fastened together with metal ties or bonding blocks. (2) The ties will allow the cavity wall to be strong.
The water barrier is a thin membrane that will keep moisture away from the cavity side of the interior wall.
In cavity walls the flashing component is very important for the overall water management of the wall. The main purpose of the flashing is to direct water out of the cavity. Metal flashing will usually extend from the interior wall through the outer wall with a downward curve, and a weep hole should be provided in order to let water out. Flashing systems in cavity walls are typically located close to the base of the wall, so that they collect, at the bottom, all the water that runs down inside the wall.
Weep Holes are drainage holes left in the exterior wall of the cavity wall, to provide an exit way for water in the cavity.
Expansion and Control Joints do not have to be aligned in cavity walls.
In modern cavity wall construction, cavity insulation is typically added. This construction makes it possible to add a continuous insulation layer in between the two wythes and, vertically, through the slabs, which minimizes thermal bridges.
Breathing Performance: Early cavity wall buildings are structures that exchange moisture readily with the indoor and outdoor environment. Materials used in repairs must be selected with care so they do not affect the breathing performance of the existing materials. (4)
Thermal Mass: Cavity walls are thick walls. These help stabilize the interior environment of a building better than thinner modern walls. (4)
Environmental Influences: The orientation or design of a building may affect the performance of its different façades. Some walls may receive more rainwater and wind than others depending on their orientation or the protection afforded to some of the faces. (4)
Damp: Moisture is one of the main problems in the weathering of materials. (4)
Salts
Allen, E., & Iano, J. (1986). Fundamentals of building construction (5th ed.). New York: Wiley.
Ching, F.D., Onouye, B., & Zuberbuhler, D. (2014). Building structures illustrated (2nd ed.). Hoboken (N.J.): J. Wiley & Sons.
Ching, F. (2012). A visual dictionary of architecture. Hoboken, NJ: Wiley.
Whittemore, H.L. (1939). Structural properties of a concrete - block cavity - wall construction sponsored by the National Concrete Masonry Association. Washington, D.C.: U.S. Dept. of Commerce, National Bureau of Standards.
Whittemore, H.L. (1939). Structural properties of a brick cavity - wall construction sponsored by the Brick Manufacturers Association of New York, Inc. Washington, D.C.: U.S. Dept. of Commerce, National Bureau of Standards.
Whittemore, H.L. (1939). Structural properties of a reinforced - brick wall construction and a brick - tile cavity - wall construction sponsored by the Structural Clay Products Institute. Washington, D.C.: U.S. Dept. of Commerce, National Bureau of Standards.
|
what movie did matthew mcconaughey win an oscar for 2017 | List of awards and nominations received by Matthew McConaughey - wikipedia
This is the list of awards and nominations received by American actor Matthew McConaughey.
|
but it's still rock and roll to me lyrics | It 's Still Rock and Roll to Me - wikipedia
"It 's Still Rock and Roll to Me '' is a hit 1980 song performed by Billy Joel, from the hit album Glass Houses. The song was number 1 on the Billboard Hot 100 charts for two weeks, from July 19 through August 1, 1980. The song spent 11 weeks in the top 10 of the Billboard Hot 100 and was the 7th biggest hit of 1980 according to American Top 40. The song is an examination of the themes of a musician 's declining fame and changing public tastes that were expressed in his 1975 hit "The Entertainer ''.
The single eventually reached Platinum status from the RIAA for sales of over 1 million copies in the United States.
The video version differs from the album version.
The song is a cynical look at the music industry: a publicist / manager begs the protagonist to remain hip for the younger crowd ("What 's the matter with the car I 'm driving? '' / "Ca n't you tell that it 's out of style? ''), while the protagonist refuses to change, claiming his music will remain relevant regardless of his appearance. The song was a reaction by Joel to the new music genres that were around in the late 1970s (punk, funk, new wave). It was inspired by Joel reading a review about a particular (unnamed) band, and realizing that he had no idea what their music sounded like. The song also includes the line "Alright Rico! '' to kick off the saxophone solo performed by Richie Cannata.
The music video for the song depicted Joel mixing elements of new wave, punk, and funk as he records a music video.
The song appears on the game Karaoke Revolution Presents: American Idol Encore.
Pop rock musician Drake Bell covered the song in 2014 on his rockabilly album Ready Steady Go!. Kid Rock covered it as a demo in 1992 but rewrote the lyrics and changed the title to "It 's Still East Detroit To Me '', turning it into a hardcore punk song. "Weird Al '' Yankovic recorded a parody of the song called "It 's Still Billy Joel to Me ''. The song was also covered by Sugaryline on the album WackyMusic.
|
who won the women's money in the bank 2018 | Money in the Bank (2018) - wikipedia
Money in the Bank (2018) was a professional wrestling pay - per - view (PPV) event and WWE Network event, produced by WWE for their Raw and SmackDown brands. It took place on June 17, 2018, at the Allstate Arena in the Chicago suburb of Rosemont, Illinois. It was the ninth event under the Money in the Bank chronology.
The card comprised eleven matches, including one match on the pre-show. In the main event, Braun Strowman won the titular ladder match on the men 's side, while Alexa Bliss won the women 's ladder match. Bliss cashed in her contract later in the night to win the Raw Women 's Championship from Nia Jax after causing a disqualification in the previous title match between Ronda Rousey and Jax. On the undercard, AJ Styles retained the WWE Championship against Shinsuke Nakamura in a Last Man Standing match, and Carmella retained the SmackDown Women 's Championship against Asuka with help from the returning James Ellsworth.
WWE 's Money in the Bank pay - per - view event centers around a match of the same name, in which multiple wrestlers use ladders to retrieve a briefcase hanging above the ring. The briefcase contains a contract that guarantees the winner a match for a world championship at any time within the next year. The contracts for 2018 specifically granted the male and female winners a match for the world championship of their respective brand. The 2018 event included two ladder matches, one for male wrestlers and one for females, each having eight participants, evenly divided between the Raw and SmackDown brands. Male wrestlers competed for a contract to grant them a match for either Raw 's Universal Championship or SmackDown 's WWE Championship, while female wrestlers competed for a contract to grant them a match for either the Raw Women 's Championship or SmackDown Women 's Championship.
The card originally comprised ten matches, including one on the pre-show, that resulted from scripted storylines with results predetermined by WWE on the Raw and SmackDown brands. An impromptu eleventh match was added during the show. Storylines were produced on WWE 's weekly television shows, Monday Night Raw and SmackDown Live.
Qualification matches for the men 's ladder match began on May 7 on Raw. Braun Strowman defeated Kevin Owens to qualify, while Finn Bálor qualified by defeating Sami Zayn and Roman Reigns in a triple threat match, after Jinder Mahal interfered and attacked Reigns. The following week, further triple threat matches yielded two more participants: Bobby Roode defeated Baron Corbin and No Way Jose, while Owens (standing in for Mahal, who had been injured by Reigns) defeated Elias and Bobby Lashley after Zayn interfered and attacked Lashley.
Qualification matches for the women 's ladder match also began on May 7, with Ember Moon defeating Sasha Banks and Ruby Riott in a triple threat match to qualify. The following week, Alexa Bliss defeated Bayley and Mickie James in a triple threat match to qualify. The following week, Natalya defeated Sarah Logan, Liv Morgan, and Dana Brooke in a fatal four - way match to qualify. The final qualification match occurred on the May 28 episode of Raw, where Banks defeated Bayley, Brooke, James, Logan, Morgan, and Riott in a seven - woman gauntlet match to earn the final spot.
During the NBCUniversal Upfront event on May 14, Ronda Rousey was interviewed by Cathy Kelley. Raw Women 's Champion Nia Jax interrupted the interview and challenged Rousey to a match at Money in the Bank, putting her title on the line. Rousey accepted the challenge. The following week, both Rousey and Jax signed the contract for their match. The week after that, Jax demonstrated on an amateur wrestler that she would counter Rousey 's armbar and perform a powerbomb.
On the May 7 episode of Raw, Jinder Mahal cost Roman Reigns a chance to qualify for the Money in the Bank ladder match. The following week, Reigns attacked Mahal backstage and speared him through a wall. On the May 21 episode of Raw, after Reigns and Seth Rollins defeated Mahal and Kevin Owens, Mahal attacked Reigns with a chair. A match between the two was scheduled for Money in the Bank.
During a tag team match on the April 23 episode of Raw, Bobby Lashley performed a one - handed suspended vertical suplex on Sami Zayn, which Zayn claimed gave him vertigo and was the reason he missed the Greatest Royal Rumble. At Backlash, Lashley and Braun Strowman defeated Kevin Owens and Sami Zayn. During an interview on the May 7 episode, Lashley spoke fondly about his family, including his three sisters. In response, Zayn said that Lashley was not a nice guy and promised to bring out Lashley 's sisters to tell the truth. On the May 21 episode, three men dressed up as Lashley 's sisters accused Lashley of violence. Lashley interrupted the segment and attacked the imposters along with Zayn. The following week, a match between the two was made for Money in the Bank.
On the May 28 episode of Raw, Seth Rollins interrupted Elias, who was once again trying to sing for the crowd, and forced him to leave. After Rollins 's match, Elias smashed a guitar on his back. On May 31, a match between the two for the Intercontinental Championship was made for Money in the Bank.
Qualifications for the men 's ladder match continued on SmackDown on May 8. The Miz and Rusev qualified for the match by defeating Jeff Hardy and Daniel Bryan, respectively. The following week, Big E and Xavier Woods of The New Day defeated Cesaro and Sheamus in a tag team match, allowing one member of New Day to qualify. Samoa Joe secured the final spot by defeating Big Cass and Daniel Bryan in a triple threat match on the May 29 episode of SmackDown.
Qualifications for the women 's ladder match also continued on the May 8 episode, with Charlotte Flair qualified over Peyton Royce. The following week, Becky Lynch defeated Mandy Rose and Sonya Deville in a triple threat match to qualify. On the May 15 episode, Lana and Naomi qualified for the match by defeating Billie Kay and Deville, respectively.
At WrestleMania 34, AJ Styles defeated Shinsuke Nakamura to retain the WWE Championship. After the match, Nakamura attacked Styles with a low blow, turning heel in the process. The two continued to feud through Greatest Royal Rumble and Backlash with their matches ending in draws. SmackDown Commissioner Shane McMahon scheduled a fourth match for Money in the Bank and promised there would be a decisive winner. On the May 15 episode of SmackDown, Nakamura earned the right to choose a stipulation for the match by defeating Styles. The following week, Nakamura attacked Styles with a Kinshasa at ringside and chose a Last Man Standing match.
Big Cass and Daniel Bryan had begun feuding after Cass returned from injury during the 2018 WWE Superstar Shake - up. Cass had attacked Bryan on SmackDown and eliminated him from the Greatest Royal Rumble match. Bryan then defeated Cass at Backlash, but was attacked by his opponent after the match. Bryan returned the favor on the May 15 episode of SmackDown and a house show in Germany. By injuring Cass ' leg, Bryan not only cost him his spot in an upcoming Money in the Bank qualifying match against Samoa Joe, but secured that spot for himself by defeating Jeff Hardy on the May 22 episode of SmackDown. However, when Cass returned from injury in time, the match was turned into the aforementioned triple threat match, won by Joe. After the match, Cass attacked Bryan again. On June 2, another match between the two was made for Money in the Bank.
On the May 15 episode of SmackDown, SmackDown Women 's Champion Carmella celebrated her championship reign. SmackDown General Manager Paige interrupted Carmella and scheduled a title match between her and Asuka for Money in the Bank.
On the May 22 episode of SmackDown, Luke Gallows and Karl Anderson defeated The Usos (Jey and Jimmy Uso) to become the number one contenders to face The Bludgeon Brothers (Harper and Rowan) for the SmackDown Tag Team Championship at Money in the Bank. The match was scheduled for the Money in the Bank pre-show.
During the pre-show, The Bludgeon Brothers (Harper and Rowan) defended the SmackDown Tag Team Championship against Luke Gallows and Karl Anderson. Harper and Rowan performed The Reckoning on Gallows to retain the title.
The actual pay - per - view opened with Daniel Bryan facing Big Cass. In the climax, Bryan performed a running high knee on Cass and forced him to submit with a heel hook.
Next, Bobby Lashley faced Sami Zayn. In the climax, Lashley performed three delayed vertical suplexes on Zayn for the pinfall.
After that, Seth Rollins defended the Intercontinental Championship against Elias. In the end, Rollins pinned Elias with a schoolboy whilst holding Elias 's tights to retain the title.
In the fifth match, Ember Moon, Charlotte Flair, Alexa Bliss, Becky Lynch, Natalya, Naomi, Lana, and Sasha Banks competed in the Women 's Money in the Bank match. In the end, Lynch attempted to retrieve the briefcase only for Bliss to push the ladder causing Lynch to fall. Bliss ascended the ladder to retrieve the briefcase to win the match.
Next, Roman Reigns faced Jinder Mahal, who was accompanied by an injured Sunil Singh in a wheelchair. Singh pushed Reigns into the ring post, proving his injury was fake. During the match, Reigns performed a superman punch and a spear on Singh. In the end, Reigns performed a spear on Mahal for the win.
Next, Carmella defended the SmackDown Women 's Championship against Asuka. In the end, Asuka was distracted by a person masquerading as her. The masked figure was revealed to be a returning James Ellsworth. Carmella took advantage of the distraction and performed a Princess Kick on Asuka to retain the title.
After that, AJ Styles defended the WWE Championship against Shinsuke Nakamura in a Last Man Standing match. Early in the match, Nakamura countered Styles ' Phenomenal Forearm by kicking Styles 's leg. On top of a broadcast table, Nakamura performed a Kinshasa on Styles, however, Styles stood up at a nine count. Nakamura threw Styles through a table, which was positioned in the corner of the ring, only for Styles to stand at a nine count. Styles began to target Nakamura 's leg and applied the Calf Crusher on Nakamura. Styles struck Nakamura with a chair, however, Nakamura retaliated with a low blow; Styles stood at an eight count. At ringside, Nakamura performed a Kinshasa on Styles, who stood at a nine count. Styles performed a Phenomenal Forearm off an announce table and a Styles Clash on Nakamura off the steel steps, with Nakamura standing at a nine count. In the end, Styles attacked Nakamura with a low blow and performed a Phenomenal Forearm on Nakamura through a broadcast table. Nakamura could not stand by a ten count, thus Styles won the match and retained the title.
Next, Nia Jax defended the Raw Women 's Championship against Ronda Rousey. In the end, as Rousey applied the armbar on Jax, Alexa Bliss appeared and attacked Rousey with her Money in the Bank briefcase, causing a disqualification; Jax lost the match by disqualification but retained her title. Bliss further attacked Rousey and Jax and then cashed in her Money in the Bank contract, performing a snap DDT and Twisted Bliss to win the title for a third time.
The main event was the men 's Money in the Bank ladder match involving Braun Strowman, Finn Bálor, The Miz, Rusev, Bobby Roode, Kevin Owens, Samoa Joe, and one member of The New Day, revealed to be Kofi Kingston. All of the participants attacked Strowman, and buried him underneath a pile of ladders. Strowman later recovered and attacked Bálor and Kingston. Joe, Owens, and Rusev placed Strowman on a table, with Owens climbing a large ladder to leap onto Strowman. Strowman broke free, climbed the ladder, and threw Owens through two tables. Joe attacked Strowman with a ladder. Bálor performed a Coup de Grâce on Roode off a ladder. Bálor ascended the ladder and attempted to retrieve the briefcase, only to be stopped by Strowman. Strowman then performed running powerslams on both Miz and Joe. In the end, as Bálor and Strowman ascended the ladder, Kingston jumped onto Strowman 's back. Strowman pushed both opponents off and retrieved the briefcase to win the match.
|
who wrote i see you on spencer's car | Original G'A'ngsters - wikipedia
"Original G'A'ngsters '' is the seventh episode of the seventh season of the mystery drama television series Pretty Little Liars, which aired on August 9, 2016, on the cable network Freeform. The hundred and forty - seventh episode of the series, it was directed by Melanie Mayron and written by Kateland Brown. The episode received a Nielsen rating of 0.6 and was viewed by 1.16 million viewers, up from the previous episode. It received positive reviews from critics. This episode is rated TV - 14.
The series focuses on a group of five women, collectively known as the Liars, who receive anonymous messages in the form of threats from an unknown person while struggling to survive a life filled with danger. In this episode, the city of Rosewood is shocked by Sara Harvey 's death, while the Liars must deal with their own problems. Jason DiLaurentis comes back to Rosewood and is fearful of Mary Drake 's intentions. New evidence about Mary and Jessica 's past at the Radley Sanitarium is discovered, and it changes everything. Meanwhile, Noel Kahn 's behaviour starts to worry the Liars. Ezra and Aria decide to elope in Italy, but right before they are about to leave, the FBI contacts Ezra and tells him Nicole might be alive.
Spencer (Troian Bellisario) presents the girls with necklaces symbolizing their friendship. Upon receiving the bill, Alison (Sasha Pieterse) reads a message that A.D. wrote inside it, revealing that A.D. knows they killed Elliott. Some police officers appear at the Radley, and the girls are shocked, thinking that they are there to arrest them. However, Emily (Shay Mitchell) then discovers that the officers are there because of Sara Harvey 's murder; she was killed in Jenna 's hotel room. The girls complain about the false cries of Jenna (Tammin Sursok), stating that she was only pretending to like Sara. While Sara 's dead body is carried out on a stretcher, her hand accidentally slips out from under the sheet, and the girls are frightened by its state.
Later, Emily and her mother, Pam (Nia Peeples), are running and exercising through the city forest, and Emily decides to take her to celebrate her birthday at the Radley. Spencer talks to Toby (Keegan Allen), questioning why Jenna returned to Rosewood. In a flashback, Jenna and Toby are in a summerhouse on New Year 's Eve, and Jenna says she can not see some things in her mind, such as Toby 's face. He lets Jenna touch his face to help her remember what it looks like, but Jenna tries to kiss him, and Toby pulls away and leaves. Back in the present, Toby tells Spencer that Jenna left the summerhouse the next day. Then, through Toby 's radio, they learn that someone has broken into Toby 's house. He runs off, leaving Spencer disconcerted.
In the DiLaurentis house, Mary (Andrea Parker) and Alison discover that Jason (Drew Van Acker) has returned to Rosewood, but Jason soon expels Mary from the residence, stating that Alison 's health is under his care. Afterward, Alison argues with Jason and asks him to give Mary a chance. Still, Jason is certain that Mary has bad intentions. Ezra (Ian Harding) is busy with preparations for his wedding to Aria (Lucy Hale), and he decides they should elope to Tuscany, but Aria is hesitant. Hanna (Ashley Benson) sneaks into Jenna 's room at the Radley in order to find clues about A.D. when Caleb (Tyler Blackburn) enters the room and reveals that he is working in technology security at the hotel. He says he will help Hanna find something against Jenna, and she accepts. Aria accepts Ezra 's offer to elope, and they plan to travel the next day. Aria then receives a message from Jason, asking her to meet up with him.
At the Brew, Aria reveals to Jason that she is engaged to Ezra and asks him if he has told anyone about what happened between them; Jason denies it. Aria says she has not told anyone either, and Jason promises to keep the secret. Changing the subject, Jason enlists Aria 's help to make Alison see who Mary really is. Jason then reveals that Elliott completely diverted the Carissimi Group 's money. Back at the Radley, Caleb disguises himself as a masseur in order to find, inside Jenna 's bag, the key to the mysterious box hidden under the bed in Jenna 's room. After he succeeds, he delivers the key to Spencer and Hanna, and the two go to Jenna 's room to find out what is inside the box. They find several papers; then someone slowly opens the door and starts to come in, and the girls quickly hide under the bed. The person hides Mary Drake 's old Radley Sanitarium file inside the box and is revealed to be Noel Kahn (Brant Daugherty). Still under the bed, Spencer and Hanna see him calling Dr. Cochrane, saying he is impatient and out of time.
At night, Emily and Pam get together for dinner to celebrate Pam 's birthday. They talk about Wayne, Emily 's late father and Pam 's husband. The dinner then goes downhill when a group of women arrive at the Radley shouting, singing and having fun. Caleb and Hanna begin working on the papers found inside Jenna 's mysterious box, and Caleb notices that Hanna is not wearing the engagement ring on her left hand, which causes tension between the two. During the DiLaurentis dinner, Mary offers wine to Jason, but he declines, since he does not drink alcohol. A tense atmosphere forms in the living room, but Jason reveals that he has invited a friend -- Aria.
At the hospital, Spencer tells Toby that he should leave Rosewood with Yvonne (Kara Royster) and build a house for her away from the troubled city. Pam and Emily receive two drinks ordered by the brides - to - be as a peace offering, and Pam decides to join them. Meanwhile, Jason argues with Alison, saying Mary is not a good person, nor a replacement for their mother, Jessica. Mary then tells them about the last time she saw Jason, years earlier. In a flashback, Jason appears at Carol Ward 's house, and Jessica (Andrea Parker) quickly sends him away. Through the window, Mary watches him, crying. Jessica complains about Mary being there, and Mary says she is worried about what really happened to her then deceased child -- Charles -- but Jessica expels her. Back in the present, Mary says that, when Jessica decided a conversation was over, it was over. Mary then mentions a storm cellar at Carol Ward 's house, and Alison and Aria grow suspicious.
Aria tells Emily about the cellar and that she and the others have decided to visit it in order to find answers. Emily asks to visit the cellar the next day, but Aria says she can not because she will be eloping with Ezra. Emily is happy that Aria will be marrying Ezra. Hanna tells Caleb about the end of her engagement and asks him if they are still friends, and Caleb says that they are.
Toby visits Spencer 's loft and says that he was building the house for her, not for Yvonne. However, he reveals that he can not imagine his life without Yvonne, and says that they will move to Maine, where Yvonne has family. They say goodbye, and Spencer cries against the door. Alison, Emily, Spencer and Hanna arrive at Carol Ward 's house and find the hidden cellar. Inside, the four discover that Jessica was investigating their lives and Alison 's disappearance. Jessica also kept files on each of the girls, except Aria. Within Mary Drake 's file, Spencer discovers that Jessica was in charge of Mary 's mental health and that she had authorized electroshock therapy. In another file, Emily discovers that Mary had a second child while she was hospitalized at the Radley, and that this child would be the same age as them. They then begin to think that this child -- Alison 's cousin -- may be behind the A.D. mask and may want revenge for something.
Aria and Ezra are preparing for the trip to Tuscany when an FBI agent (Nikki Crawford) appears in the apartment, saying that Nicole, Ezra 's ex-girlfriend who was presumed dead, may be alive. The girls walk away from the cellar when Spencer 's car begins to emit a deafening sound. They get into the car in order to stop the noise, but end up trapped inside the vehicle. A countdown starts on the car 's computer screen, and they think the car will explode. However, a message appears on the screen, startling them. The storm cellar then explodes, and someone writes "I see you '' on the rear window of the car.
Meanwhile, somewhere, someone is with Aria 's and Noel 's file that was in the cellar. This person then burns up Noel 's file.
The episode was written by Kateland Brown and directed by Melanie Mayron. On May 28, 2016, the actress Nia Peeples revealed the episode 's title, writer and director, also announcing that she would be returning to the series in her role of Pam Fields. Cast members Keegan Allen, Brant Daugherty, Kara Royster and Tammin Sursok guest star in the roles of Toby Cavanaugh, Noel Kahn, Yvonne Phillips and Jenna Marshall, respectively. The actor Drew Van Acker returns for the seventh season as Jason DiLaurentis, making his first appearance of the season in this episode.
In the U.S., "Original G'A'ngsters '' was aired on August 9, 2016 to a viewership of 1.16 million, up from the previous episode. It garnered a 0.6 rating among adults aged 18 -- 49, according to Nielsen Media Research. After Live + 3 DVR ratings, the episode had the biggest gain in Adults 18 - 49, finishing with a 1.1 rating among adults aged 18 -- 49, and aired to a total viewership of 2.11 million, placing in the fifth spot in viewership.
Paul Dailly, from TV Fanatic, gave the episode 4 out of 5 stars and wrote a mixed review, saying, "So many questions and so little answers. You 'd think Pretty Little Liars was renewed for another four seasons with the amount of questions that pop up. '' He added, "' Original G'A'ngsters ' was another solid episode of this Freeform drama. It 's time to start giving us all of the answers now. '' Writing for SpoilerTV, Gavin Hetherington gave the episode a mixed - to - positive review, stating, "The seventh episode in Pretty Little Liars ' seventh season proved to be a worthy hour for the show, with minor plot progression for some of the characters, much needed reunions and flashbacks galore. A couple of shady moments thrown in and we have a typical episode of our favourite Freeform show. '' Jessica Goldstein of Vulture gave the episode three out of five stars.
The episode currently holds an 8.6 / 10 rating on TV.com and an 8.5 / 10 rating on IMDb.
|
chris brown heartbreak on a full moon wiki | Heartbreak on a Full Moon - wikipedia
Heartbreak on a Full Moon is the upcoming eighth studio album by American singer Chris Brown. The album is scheduled to be released on October 31, 2017, by RCA Records. The track listing was announced by Brown on his Instagram account on May 2, 2017, and the record will be a double album.
Brown started working on and recording tracks for the album a few weeks before the release of Royalty, his seventh studio album, in late 2015. He continued working on the album during 2016 and 2017, including during his "The Party Tour '', and built a recording studio inside his house. Brown discussed the album during an interview for Complex.
On January 10, 2016, Brown previewed 11 unreleased songs on his Periscope and Instagram profiles, showing himself dancing and lip - synching the songs. Later, in January, February and March 2016, he released videos on his Instagram profile in which he was lip - synching snippets of the unreleased songs "Sirens '', "Lost And Found '', and "Dead Wrong ''.
On April 27 through Twitter, he announced the release of a new single on May 5. On May 3 he revealed that the single would be "Grass Ai n't Greener '', showing its cover art and announcing it as the first single from Heartbreak on a Full Moon. The single was released on May 5, 2016, the day of Brown 's 27th birthday, but it was not included on the album.
In November and December 2016, he released videos on his Instagram profile where he was lip - synching snippets from other unreleased songs: "For Me '', "Post & Delete '', and "Yellow Tape ''.
On December 16, 2016, he released the first official single from the album, "Party '', which features guest vocals from American R&B singer Usher and rapper Gucci Mane. On January 4, 2017, Brown previewed two snippets on his Instagram profile from the songs "Privacy '' and "Tell Me Baby '', and in February he announced that "Privacy '' would be released as the next single from Heartbreak on a Full Moon. The single was released on March 24, 2017.
The initial track listing was announced by Brown on his Instagram account on May 1, 2017, along with the confirmation that it would be a double - disc album. In early June 2017, 46 songs discarded from the album were leaked; most of them were unfinished versions, while a few were demos.
In July 2017 he announced the pending release of upcoming singles from the album. On August 4, 2017, he released the promotional single "Pills & Automobiles '', which features guest vocals from American trap artists Yo Gotti, A Boogie Wit Da Hoodie and Kodak Black. On August 14, 2017, he announced "Questions '' as the fourth official single from the album, released on August 16, and revealed that the album would be released on October 31, 2017.
|
where is the painting of starry night located | The Starry Night - Wikipedia
The Starry Night is an oil on canvas by the Dutch post-impressionist painter Vincent van Gogh. Painted in June 1889, it depicts the view from the east - facing window of his asylum room at Saint - Rémy - de-Provence, just before sunrise, with the addition of an idealized village. It has been in the permanent collection of the Museum of Modern Art in New York City since 1941, acquired through the Lillie P. Bliss Bequest. Regarded as among Van Gogh 's finest works, The Starry Night is one of the most recognized paintings in the history of Western culture.
In the aftermath of the 23 December 1888 breakdown that resulted in the self - mutilation of his left ear, Van Gogh voluntarily admitted himself to the Saint - Paul - de-Mausole lunatic asylum on 8 May 1889. Housed in a former monastery, Saint - Paul - de-Mausole catered to the wealthy and was less than half full when Van Gogh arrived, allowing him to occupy not only a second - story bedroom but also a ground - floor room for use as a painting studio.
During the year Van Gogh stayed at the asylum, the prolific output of paintings he had begun in Arles continued. During this period, he produced some of the best - known works of his career, including the Irises from May 1889, now in the J. Paul Getty Museum, and the blue self - portrait from September 1889, in the Musée d'Orsay. The Starry Night was painted in mid-June, by around 18 June, the date on which he wrote to his brother Theo to say he had a new study of a starry sky.
Although The Starry Night was painted during the day in Van Gogh 's ground - floor studio, it would be inaccurate to state that the picture was painted from memory. The view has been identified as the one from his bedroom window, facing east, a view which Van Gogh painted variations of no fewer than twenty - one times, including The Starry Night. "Through the iron - barred window, '' he wrote to his brother, Theo, around 23 May 1889, "I can see an enclosed square of wheat... above which, in the morning, I watch the sun rise in all its glory. ''
Van Gogh depicted the view at different times of day and under various weather conditions, including sunrise, moonrise, sunshine - filled days, overcast days, windy days, and one day with rain. While the hospital staff did not allow Van Gogh to paint in his bedroom, he was able there to make sketches in ink or charcoal on paper; eventually he would base newer variations on previous versions. The pictorial element uniting all of these paintings is the diagonal line coming in from the right depicting the low rolling hills of the Alpilles mountains. In fifteen of the twenty - one versions, cypress trees are visible beyond the far wall enclosing the wheat field. Van Gogh telescoped the view in six of these paintings, most notably in F717 Wheat Field with Cypresses and The Starry Night, bringing the trees closer to the picture plane.
One of the first paintings of the view was F611 Mountainous Landscape Behind Saint - Rémy, now in Copenhagen. Van Gogh made a number of sketches for the painting, of which F1547 The Enclosed Wheatfield After a Storm is typical. It is unclear whether the painting was made in his studio or outside. In his June 9 letter describing it, he mentions he had been working outside for a few days. Van Gogh described the second of the two landscapes he mentions he was working on, in a letter to his sister Wil on 16 June 1889. This is F719 Green Field, now in Prague, and the first painting at the asylum he definitely painted outside en plein air. F1548 Wheat field, Saint - Rémy de Provence, now in New York, is a study for it. Two days later, Vincent wrote Theo that he had painted "a starry sky ''.
The Starry Night is the only nocturne in the series of views from his bedroom window. In early June, Vincent wrote to Theo, "This morning I saw the countryside from my window a long time before sunrise with nothing but the morning star, which looked very big. '' Researchers have determined that Venus was indeed visible at dawn in Provence in the spring of 1889, and was at that time nearly as bright as possible. So the brightest "star '' in the painting, just to the viewer 's right of the cypress tree, is actually Venus.
The moon is stylized, as astronomical records indicate that it actually was waning gibbous at the time Van Gogh painted the picture, and even if the phase of the moon had been its waning crescent at the time, Van Gogh 's moon would not have been astronomically correct. (For other interpretations of the moon, see below.) The one pictorial element that was definitely not visible from Van Gogh 's cell is the village, which is based on a sketch F1541v made from a hillside above the village of Saint - Rémy. Pickvance thought F1541v was done later, and the steeple more Dutch than Provençal, a conflation of several steeples Van Gogh had painted and drawn in his Nuenen period, and thus the first of his "reminisces of the North '' he was to paint and draw early the following year. Hulsker thought a landscape on the reverse, F1541r, was also a study for the painting.
F1548 Wheatfield, Saint - Rémy de Provence, Morgan Library & Museum
F719 Green Field, National Gallery in Prague
F1547 The Enclosed Wheatfield After a Storm, Van Gogh Museum
F611 Mountainous Landscape Behind Saint - Rémy, Ny Carlsberg Glyptotek
F1541v Bird 's - Eye View of the Village, Van Gogh Museum
F1541r Landscape with Cypresses, Van Gogh Museum
Despite the large number of letters Van Gogh wrote, he said very little about The Starry Night. After reporting that he had painted a starry sky in June, Van Gogh next mentioned the painting in a letter to Theo on or about 20 September 1889, when he included it in a list of paintings he was sending to his brother in Paris, referring to it as a "night study. '' Of this list of paintings, he wrote, "All in all the only things I consider a little good in it are the Wheatfield, the Mountain, the Orchard, the Olive trees with the blue hills and the Portrait and the Entrance to the quarry, and the rest says nothing to me ''; "the rest '' would include The Starry Night. When he decided to hold back three paintings from this batch in order to save money on postage, The Starry Night was one of the paintings he did n't send. Finally, in a letter to painter Émile Bernard from late November 1889, Van Gogh referred to the painting as a "failure. ''
Van Gogh argued with Bernard and, especially, Paul Gauguin as to whether one should paint from nature, as Van Gogh preferred, or paint what Gauguin called "abstractions '': paintings conceived in the imagination, or de tête. In the letter to Bernard, Van Gogh recounted his experiences when Gauguin lived with him for nine weeks in the fall and winter of 1888: "When Gauguin was in Arles, I once or twice allowed myself to be led astray into abstraction, as you know... But that was delusion, dear friend, and one soon comes up against a brick wall... And yet, once again I allowed myself to be led astray into reaching for stars that are too big -- another failure -- and I have had my fill of that. '' Van Gogh here is referring to the expressionistic swirls which dominate the upper center portion of The Starry Night.
Theo referred to these pictorial elements in a letter to Vincent dated 22 October 1889: "I clearly sense what preoccupies you in the new canvases like the village in the moonlight (The Starry Night) or the mountains, but I feel that the search for style takes away the real sentiment of things. '' Vincent responded in early November, "Despite what you say in your previous letter, that the search for style often harms other qualities, the fact is that I feel myself greatly driven to seek style, if you like, but I mean by that a more manly and more deliberate drawing. If that will make me more like Bernard or Gauguin, I ca n't do anything about it. But am inclined to believe that in the long run you 'd get used to it. '' And later in the same letter, he wrote, "I know very well that the studies drawn with long, sinuous lines from the last consignment were n't what they ought to become, however I dare urge you to believe that in landscapes one will continue to mass things by means of a drawing style that seeks to express the entanglement of the masses. ''
But although Van Gogh periodically defended the practices of Gauguin and Bernard, each time he inevitably repudiated them and continued with his preferred method of painting from nature. Like the impressionists he had met in Paris, especially Claude Monet, Van Gogh also favored working in series. He had painted his series of sunflowers in Arles, and he painted the series of cypresses and wheat fields at Saint - Rémy. The Starry Night belongs to this latter series, as well as to a small series of nocturnes he initiated in Arles.
The nocturne series was limited by the difficulties posed by painting such scenes from nature, i.e., at night. The first painting in the series was Café Terrace at Night, painted in Arles in early September 1888, followed by Starry Night Over the Rhone later that same month. Van Gogh 's written statements concerning these paintings provide further insight into his intentions for painting night studies in general and The Starry Night in particular.
Soon after his arrival in Arles in February 1888, Van Gogh wrote to Theo, "I... need a starry night with cypresses or -- perhaps above a field of ripe wheat; there are some really beautiful nights here. '' That same week, he wrote to Bernard, "A starry sky is something I should like to try to do, just as in the daytime I am going to try to paint a green meadow spangled with dandelions. '' He compared the stars to dots on a map and mused that, as one takes a train to travel on earth, "we take death to reach a star. '' Although at this point in his life Van Gogh was disillusioned by religion, he appears not to have lost his belief in an afterlife. He voiced this ambivalence in a letter to Theo after having painted Starry Night Over the Rhone, confessing to a "tremendous need for, shall I say the word -- for religion -- so I go outside at night to paint the stars. ''
He wrote about existing in another dimension after death and associated this dimension with the night sky. "It would be so simple and would account so much for the terrible things in life, which now amaze and wound us so, if life had yet another hemisphere, invisible it is true, but where one lands when one dies. '' "Hope is in the stars, '' he wrote, but he was quick to point out that "earth is a planet too, and consequently a star, or celestial orb. '' And he stated flatly that The Starry Night was "not a return to the romantic or to religious ideas. ''
Noted art historian Meyer Schapiro highlights the expressionistic aspects of The Starry Night, saying it was created under the "pressure of feeling '' and that it is a "visionary (painting) inspired by a religious mood. '' Schapiro theorizes that the "hidden content '' of the work makes reference to the New Testament book of Revelation, revealing an "apocalyptic theme of the woman in pain of birth, girded with the sun and moon and crowned with stars, whose newborn child is threatened by the dragon. '' (Schapiro, in the same volume, also professes to see an image of a mother and child in the clouds in Landscape with Olive Trees, painted at the same time and often regarded as a pendant to The Starry Night.)
Art historian Sven Loevgren expands on Schapiro 's approach, again calling The Starry Night a "visionary painting '' which "was conceived in a state of great agitation. '' He writes of the "hallucinatory character of the painting and its violently expressive form, '' although he takes pains to note that the painting was not executed during one of Van Gogh 's incapacitating breakdowns. Loevgren compares Van Gogh 's "religiously inclined longing for the beyond '' to the poetry of Walt Whitman. He calls The Starry Night "an infinitely expressive picture which symbolizes the final absorption of the artist by the cosmos '' and which "gives a never - to - be-forgotten sensation of standing on the threshold of eternity. '' Loevgren praises Schapiro 's "eloquent interpretation '' of the painting as an apocalyptic vision and advances his own symbolist theory with reference to the eleven stars in one of Joseph 's dreams in the Old Testament book of Genesis. Loevgren asserts that the pictorial elements of The Starry Night "are visualized in purely symbolic terms '' and notes that "the cypress is the tree of death in the Mediterranean countries. ''
Art historian Lauren Soth also finds a symbolist subtext in The Starry Night, saying that the painting is a "traditional religious subject in disguise '' and a "sublimated image of (Van Gogh 's) deepest religious feelings. '' Citing Van Gogh 's avowed admiration for the paintings of Eugène Delacroix, and especially the earlier painter 's use of Prussian blue and citron yellow in paintings of Christ, Soth theorizes that Van Gogh used these colors to represent Christ in The Starry Night. He criticizes Schapiro 's and Loevgren 's biblical interpretations, dependent as they are on a reading of the crescent moon as incorporating elements of the sun. He says it is merely a crescent moon, which, he writes, also had symbolic meaning for Van Gogh, representing "consolation. ''
It is in light of such symbolist interpretations of The Starry Night that art historian Albert Boime presents his study of the painting. As noted above, Boime has proven that the painting depicts not only the topographical elements of Van Gogh 's view from his asylum window but also the celestial elements, identifying not only Venus but also the constellation Aries. He suggests that Van Gogh originally intended to paint a gibbous moon but "reverted to a more traditional image '' of the crescent moon, and theorizes that the bright aureole around the resulting crescent is a remnant of the original gibbous version. He recounts Van Gogh 's interest in the writings of Victor Hugo and Jules Verne as possible inspiration for his belief in an afterlife on stars or planets. And he provides a detailed discussion of the well - publicized advances in astronomy that took place during Van Gogh 's lifetime.
Boime asserts that while Van Gogh never mentioned astronomer Camille Flammarion in his letters, he believes that Van Gogh must have been aware of Flammarion 's popular illustrated publications, which included drawings of spiral nebulae (as galaxies were then called) as seen and photographed through telescopes. Boime interprets the swirling figure in the central portion of the sky in The Starry Night to represent either a spiral galaxy or a comet, photographs of which had also been published in popular media. He asserts that the only non-realistic elements of the painting are the village and the swirls in the sky. These swirls represent Van Gogh 's understanding of the cosmos as a living, dynamic place.
Harvard astronomer Charles A. Whitney conducted his own astronomical study of The Starry Night contemporaneously with but independent of Boime (who spent almost his entire career at U.C.L.A.). While Whitney does not share Boime 's certainty with regard to the constellation Aries, he concurs with Boime on the visibility of Venus in Provence at the time the painting was executed. He also sees the depiction of a spiral galaxy in the sky, although he gives credit for the original to Anglo - Irish astronomer William Parsons, Lord Rosse, whose work Flammarion reproduced.
Whitney also theorizes that the swirls in the sky could represent wind, evoking the mistral that had such a profound effect on Van Gogh during the twenty - seven months he spent in Provence. (It was the mistral which triggered his first breakdown after entering the asylum, in July 1889, less than a month after painting The Starry Night.) Boime theorizes that the lighter shades of blue just above the horizon show the first light of morning.
The village has been variously identified as either a recollection of Van Gogh 's Dutch homeland or as based on a sketch he made of the town of Saint - Rémy. In either case, it is an imaginary component of the picture, not visible from the window of the asylum bedroom.
Cypress trees have long been associated with death in European culture, though the question of whether Van Gogh intended for them to have such a symbolic meaning in The Starry Night is the subject of an open debate. In an April 1888 letter to Bernard, Van Gogh referred to "funereal cypresses, '' though this is possibly similar to saying "stately oaks '' or "weeping willows. '' One week after painting The Starry Night, he wrote to his brother Theo, "The cypresses are always occupying my thoughts. I should like to make something of them like the canvases of the sunflowers, because it astonishes me that they have not yet been done as I see them. '' In the same letter he mentioned "two studies of cypresses of that difficult shade of bottle green. '' These statements suggest that Van Gogh was interested in the trees more for their formal qualities than for their symbolic connotation.
Schapiro refers to the cypress in the painting as a "vague symbol of a human striving. '' Boime calls it the "symbolic counterpart of Van Gogh 's own striving for the Infinite through non-orthodox channels. '' Art historian Vojtech Jirat - Wasiutynski says that for Van Gogh the cypresses "function as rustic and natural obelisks '' providing a "link between the heavens and the earth. '' (Some commentators see one tree, others see two or more.) Loevgren reminds the reader that "the cypress is the tree of death in the Mediterranean countries. ''
Art historian Ronald Pickvance says that with "its arbitrary collage of separate motifs, '' The Starry Night "is overtly stamped as an ' abstraction '. '' Pickvance claims that cypress trees were not visible facing east from Van Gogh 's room, and he includes them with the village and the swirls in the sky as products of Van Gogh 's imagination. Boime asserts that the cypresses were visible in the east, as does Jirat - Wasiutyński. Van Gogh biographers Steven Naifeh and Gregory White Smith concur, saying that Van Gogh "telescoped '' the view in certain of the pictures of the view from his window, and it stands to reason that Van Gogh would do this in a painting featuring the morning star. Such a compression of depth serves to enhance the brightness of the planet.
Soth uses Van Gogh 's statement to his brother, that The Starry Night is "an exaggeration from the point of view of arrangement '' to further his argument that the painting is "an amalgam of images. '' However, it is by no means certain that Van Gogh was using "arrangement '' as a synonym for "composition. '' Van Gogh was, in fact, speaking of three paintings, one of which was The Starry Night, when he made this comment: "The olive trees with white cloud and background of mountains, as well as the Moonrise and the Night effect, '' as he called it, "these are exaggerations from the point of view of the arrangement, their lines are contorted like those of the ancient woodcuts. '' The first two pictures are universally acknowledged to be realistic, non-composite views of their subjects. What the three pictures do have in common is exaggerated color and brushwork of the type that Theo referred to when he criticized Van Gogh for his "search for style (that) takes away the real sentiment of things '' in The Starry Night.
On two other occasions around this time, Van Gogh used the word "arrangement '' to refer to color, similar to the way James Abbott McNeill Whistler used the term. In a letter to Gauguin in January 1889, he wrote, "As an arrangement of colours: the reds moving through to pure oranges, intensifying even more in the flesh tones up to the chromes, passing into the pinks and marrying with the olive and Veronese greens. As an impressionist arrangement of colours, I 've never devised anything better. '' (The painting he is referring to is La Berceuse, which is a realistic portrait of Augustine Roulin with an imaginative floral background.) And to Bernard in late November 1889: "But this is enough for you to understand that I would long to see things of yours again, like the painting of yours that Gauguin has, those Breton women walking in a meadow, the arrangement of which is so beautiful, the colour so naively distinguished. Ah, you 're exchanging that for something -- must one say the word -- something artificial -- something affected. ''
When Van Gogh calls The Starry Night a failure for being an "abstraction, '' he places the blame on his having painted "stars that are too big. ''
While stopping short of calling the painting a hallucinatory vision, Naifeh and Smith discuss The Starry Night in the context of Van Gogh 's mental illness, which they identify as temporal lobe epilepsy, or latent epilepsy. "Not the kind, '' they write, "known since antiquity, that caused the limbs to jerk and the body to collapse (' the falling sickness ', as it was sometimes called), but a mental epilepsy -- a seizing up of the mind: a collapse of thought, perception, reason, and emotion that manifested itself entirely in the brain and often prompted bizarre, dramatic behavior. '' Symptoms of the seizures "resembled fireworks of electrical impulses in the brain. ''
Van Gogh experienced his second breakdown in seven months in July 1889. Naifeh and Smith theorize that the seeds of this breakdown were present when Van Gogh painted The Starry Night, that in giving himself over to his imagination "his defenses had been breached. '' On that day in mid-June, in a "state of heightened reality, '' with all the other elements of the painting in place, Van Gogh threw himself into the painting of the stars, producing, they write, "a night sky unlike any other the world had ever seen with ordinary eyes. ''
After having initially held it back, Van Gogh sent The Starry Night to Theo in Paris on 28 September 1889, along with nine or ten other paintings. Theo died less than six months after Vincent, in January 1891. Theo 's widow, Jo, then became the caretaker of Van Gogh 's legacy. She sold the painting to poet Julien Leclercq in Paris in 1900, who turned around and sold it to Émile Schuffenecker, Gauguin 's old friend, in 1901. Jo then bought the painting back from Schuffenecker before selling it to the Oldenzeel Gallery in Rotterdam in 1906. From 1906 to 1938 it was owned by Georgette P. van Stolk, of Rotterdam, who sold it to Paul Rosenberg, of Paris and New York. It was through Rosenberg that the Museum of Modern Art acquired the painting in 1941.
The painting was investigated by scientists at the Rochester Institute of Technology and the Museum of Modern Art in New York. The pigment analysis showed that the sky was painted with ultramarine and cobalt blue, and that for the stars and the moon Van Gogh employed the rare pigment Indian yellow together with zinc yellow.
Details of the canvas: the moon; Venus; hills and sky; the left part of the canvas and frame; stars in the sky.
|
iglesia de la luz del mundo guadalajara jalisco mexico | La Luz del Mundo - wikipedia
The Iglesia del Dios Vivo, Columna y Apoyo de la Verdad, La Luz del Mundo, (English: "Church of the Living God, Column and Ground of the Truth, The Light of the World '') -- or simply La Luz del Mundo -- is a Christian denomination with international headquarters in Guadalajara, Jalisco, Mexico. La Luz del Mundo (abbreviated LLDM, or sometimes The LDM) practices a form of restorationist theology centered on three leaders: founder Aarón -- born Eusebio -- Joaquín González (1896 -- 1964), Samuel Joaquín Flores (1937 -- 2014), and Naasón Joaquin García (born 1969). These three men are regarded by the Church as modern day Apostles of Jesus Christ and Servants of God.
The Church had its beginnings in 1926, just as Mexico plunged into a violent struggle between the anti-clerical government and Catholic rebels. The conflict centered in west - central states like Jalisco, where González focused his missionary efforts. Given the environment of the time, La Luz del Mundo remained a small missionary endeavor until 1934, when it built its first temple. Thereafter the Church continued to grow and expand, interrupted only by an internal schism in 1942. Aarón Joaquín is believed by members to have been designated to the ministry of the Lord by revelation of God, followed by his son Samuel and thereafter by Samuel 's son Naasón. The Church is now present in 50 countries and has between 1 and 7 million adherents worldwide.
La Luz del Mundo describes itself as the restoration of primitive Christianity. It does not use crosses or images in its worship services and its members do not observe Christmas or Holy Week. Female members follow a dress code that includes long skirts and head coverings during religious services.
The full name of the Church is Iglesia del Dios Vivo Columna y Apoyo de la Verdad, La Luz del Mundo ("Church of the Living God, Column and Support of The Truth, The Light of The World '') which is derived from two passages in the Bible, Matthew 5: 14 and 1 Timothy 3: 15.
Eusebio Joaquín González was born on August 14, 1896 in Colotlán, Jalisco. At a young age, he joined the Constitutional Army during the Mexican Revolution. While he was on leave in 1920, he met Elisa Flores, whom he later married. While stationed in the state of Coahuila in 1926, he came into contact with Saulo and Silas, two ascetic preachers from the Iglesia Cristiana Espiritual, whose teachings forbade their followers from keeping good hygiene and wearing regular clothes. After being baptized by the two itinerant preachers, González resigned from the army, and he and his wife became domestic workers for the two preachers.
During the 1920s, Mexico underwent a period of instability under the administration of Plutarco Elías Calles, which sought to limit the influence of the Catholic Church in order to modernize and centralize the state within the religious sphere of Mexican society. To protest the policies, the Catholic Church suspended all religious services, bringing about an uprising in Mexico. This uprising, or Cristero War, lasted from 1926 to 1929 and reemerged in the 1930s. On April 6, 1926, González had a vision in which God changed his name from Eusebio to Aarón, and he was later told to leave Monterrey, where he and his wife served Saulo and Silas. On his journey, he preached near the entrances of Catholic churches -- often facing religious persecution -- until he arrived at Guadalajara on December 12, 1926. The Cristero War impacted both Catholic and non-Catholic congregations and preachers, especially evangelical movements. Small movements were attacked by the government and the Cristeros, resulting in a hostile environment for González 's work.
Working as a shoe vendor, Joaquín formed a group of ten worshipers who met at his wife 's apartment. He began constructing the Church 's hierarchy by instituting the first two deacons, Elisa Flores and Francisca Cuevas. Later he charged the first minister to oversee 14 congregations in Ameca, Jalisco. During these early years (late 1920s), Joaquín traveled to the states of Michoacán, Nayarit, and Sinaloa to preach. In 1931, the first Santa Cena (Holy Supper) was held to commemorate the crucifixion of Jesus. The Church met in rural areas, fearing complaints from Catholic neighbors. Urbanization contributed migrants from the countryside who added a significant number of members to the Church.
In 1934, a temple was built in Sector Libertad of Guadalajara 's urban zone and members were encouraged to buy homes in the same neighborhood thereby establishing a community. The temple was registered as Iglesia Cristiana Espiritual (Spiritual Christian Church) but Joaquín claimed to have received God 's word in the dedication of the temple, saying that it was "light of the world '' and that they were the Iglesia del Dios Vivo, Columna y Apoyo de la Verdad (Church of the Living God, Column and Ground of the Truth). The Church used the latter name to identify itself. In 1939, it moved to a new meeting place at 12 de Octubre street in San Antonio in southeast Guadalajara, forming its second small community which was populated mainly by its members. This community was an attempt to escape the hostile environment, not to create an egalitarian society.
In 1938 Joaquín returned to Monterrey to preach to his former associates. There he learned that he had been baptized using the Trinitarian formula and not in the name of Jesus Christ as he preached. His re-baptism in the name of Jesus Christ by his collaborator Lino Figueroa marked Joaquín 's separation from the rest of the Pentecostal community.
In 1942, in its most significant schism, at least 250 members left the Church. Tensions began to build after Joaquín 's birthday, when the congregation gave him gifts of flowers and sang hymns celebrating his birthday. This celebration generated a heated debate that culminated with the defection of several church members, including some pastors. Anthropologist Renée de la Torre described this schism as a power struggle in which Joaquín was accused of having enriched himself at the expense of the faithful. Church dissidents took to El Occidental to accuse members of La Luz del Mundo of committing immoralities with young women. Some of the accusations were aimed at closing down a temple that LLDM used with government permission. Members of La Luz del Mundo attribute this episode to the envy and ambition of the dissidents, who formed their own group called El Buen Pastor (The Good Shepherd) under the leadership of José María González, with doctrines and practices similar to those of La Luz del Mundo; its leader is considered a prophet of God. As of 2010, El Buen Pastor Church has a membership of 17,700 in Mexico.
Among those who defected to El Buen Pastor Church was Lino Figueroa, the pastor who had re-baptized González in 1938. González had a vision in July 1943 where the baptism by Figueroa was invalidated and he was ordered to re-baptize himself invoking Jesus ' name. The whole congregation was re-baptized as well, as now Joaquín was the source of baptismal legitimacy and authenticity. With all those who had challenged him gone, Joaquín was able to consolidate leadership of the Church.
With the growth of the Church in the city, issues of safety developed in the 12 de Octubre street meeting place in the late 1940s and early 1950s. In 1952, Joaquín purchased a plot of land outside the city and called it Hermosa Provincia (Beautiful Province).
In 1952, Joaquín purchased land on the outskirts of Guadalajara with the intent of forming a small community made up exclusively of members of LLDM. The land was then sold at reduced prices to church members. The community included most necessities; services provided in Hermosa Provincia included health, education, and other urban services, which were provided in full after six years, partly with help that the Church received from municipal and non-municipal authorities. This dependency upon outside assistance for public services ended by 1959, when residents formed the Association of Colonists of Hermosa Provincia, which was used to petition the government directly. Hermosa Provincia received a white flag from the city for being the only neighborhood in the city to have eliminated illiteracy by the early 1970s. Joaquín started missionary efforts in Central America, and by the early 1960s La Luz del Mundo had 64 congregations and 35 missions. By 1964, after his death, the Church had between 20,000 and 30,000 members spread through five countries, including Mexico. The neighborhood became a standard model for the Church, which has replicated it in many cities in Mexico and other countries.
Samuel Joaquín Flores was born on February 14, 1937 and became the leader of the Church at the age of 27, after the death of his father, Aarón Joaquín González. He continued his father 's desire for international expansion by traveling extensively outside of Mexico. He first visited members of the Church in the Mexican state of Michoacán in August 1964, and later that year he went to Los Angeles on a missionary trip. By 1970, the Church had expanded to Costa Rica, Colombia, and Guatemala. The first small temple in the Hermosa Provincia was demolished and replaced by a larger one in 1967. With Samuel Joaquín 's work, La Luz del Mundo became integrated into Guadalajara, and the Church replicated the model of Hermosa Provincia in many cities in Mexico and abroad. By 1972, there were approximately 72,000 members of the Church, a number that increased to 1.5 million by 1986 and to 4 million by 1993. Anthropologist Patricia Fortuny says that the Church 's growth can be attributed to several factors, including its social benefits, which "improves the living conditions of believers. '' Samuel oversaw the construction of schools, hospitals and other social services provided by the Church. It also expanded to countries including the UK, the Netherlands, Switzerland, Ethiopia and Israel between 1990 and 2010. After fifty years at the head of La Luz del Mundo church, Flores died in his home on December 8, 2014.
On December 14, 2014, Naasón Joaquín García, the fifth - born child of the Joaquín family, became the spiritual leader and international director of the La Luz del Mundo church. Naasón was born on May 7, 1969 and previously served as a pastor in Santa Ana, California. After almost three years of work, Naasón continues to lead the church with the ambition of fulfilling a promise he received from God in a vision: that La Luz del Mundo Church will grow to a number he could never imagine or perceive. He also urges the youth and members of the church to join him in spreading their doctrine to the ends of the earth.
During La Luz del Mundo 's religious services, male and female members are separated; from the preacher 's perspective, women sit on the right side of the temple and men on the left. The Church does not use musical instruments during its services. There is no dancing or clapping, and women cover their heads with a veil during worship services. Hymns are sung a cappella; despite this, members listen to instrumental music and some compose their own music. When singing, all congregants sing at the same time, to maintain uniformity during their religious meetings. The Church believes that worship should be done "spiritually '' and only to God, and thus temples are devoid of images, saints, crosses, and anything that might be considered idolatry. The places of worship have plain walls and wide, clear windows.
The Church holds three daily prayer meetings during the week, with two meetings on Sundays and one regular consecration. On Sunday mornings, congregants meet at the temple for Sunday School, which begins with prayers and hymns. After that, the preacher -- usually a minister -- presides over a talk during which he reads from the Bible and presents the material to be covered throughout the week. During the talk, it is common for members of either sex to read a cited verse from the Bible. At the end of the talk, more hymns and prayers are recited and voluntary offerings are given. Sunday evening services begin with hymns and prayers, after which members of the congregation of both sexes recite from the Bible or sing hymns. A shorter talk is held with the aim of deepening the Sunday School 's talk.
The Church holds three scheduled prayer meetings each day. The first daily prayer meeting is at 5: 00 a.m. and usually lasts one hour. The service includes a talk that is meant to recordar (remember) the material covered in the Sunday School. The 9: 00 a.m. prayer was originally started by González 's wife, Elisa Flores. A female church member presides over the prayer meeting, which includes a talk. The evening prayer has the same structure as the 5: 00 a.m. meeting. In each prayer meeting members are expected to be prepared with their Bibles, hymn books and notebooks and to be consecrated.
Members of La Luz del Mundo believe that the Bible, along with the teachings of their Apostle, is the only source of Christian doctrine. It is used as the main source of ministers ' and lay persons ' talks during prayer meetings. Through organizational arrangements, such as Sunday school, church authorities attempt to maintain uniformity of teachings and beliefs throughout all congregations. The Bible is the only historical reference used by La Luz del Mundo during religious services. Members can find cited Bible verses quickly, regardless of their level of education. It is also seen as one of two "sufficient rules of faith for salvation ''.
La Luz del Mundo teaches that there was no salvation on Earth between the death of the last Apostle (Apostle John) around 96 AD and the calling of González in 1926. Members believe that the Church itself was founded by Jesus Christ approximately two thousand years ago and that after the deaths of the Apostles of God, the church became corrupt and was lost. The Church claims that through González, it is the restoration of the Primitive Christian church that was lost during the formation of the Roman Catholic Church. After those times passed, the beginning of González 's ministry is seen as the restoration of the original Christian Church. Salvation can be attained in the Church by following the Bible - based teachings of their leader.
The Church states that its members believe in "the calling of the Servants of God, sent to express the will of God and Salvation ''. It believes that González was called by God to restore the Primitive Christian Church. González was succeeded by Flores upon his death in 1964; Flores was in turn succeeded by García upon his death in 2014. La Luz del Mundo teaches that it is the only true Christian church founded by Jesus Christ because it is led by García, who it considers the only true servant of God and Apostle of Jesus Christ in this era. Members believe that this Apostolic Authority allows them to find peace, feel close to God and attain meaning in their lives from the hopes of joining with Christ to reign with him for eternity.
La Luz del Mundo rejects the doctrine of the Trinity as a later addition to Christian theology. It believes in a "one and universal '' God and in Jesus Christ who is the "Son of God and Savior of the world '', rather than part of a trinity. God is worshiped "by essence '', whereas Jesus is worshiped "by commandment. '' Moreover, by worshiping Jesus Christ they are also worshiping God through him according to their teachings. The Church also preaches baptism in the name of Jesus Christ for forgiveness of sins, and baptism with the Holy Spirit as confirmation from God for entrance into heaven.
There is disagreement among external sources regarding the christology of La Luz del Mundo. According to theologian Roger S. Greenway, the Church is trinitarian but baptizes in the name of Jesus to conform to apostolic practice. Theological librarian Kenneth D. Gill agrees with Greenway that the Church is trinitarian, and says it refuses to "use the term ' person ' to describe the three modes of God. '' Sociocultural anthropologist Hugo G. Nutini also says the Church is trinitarian but that its worship focuses exclusively on Jesus Christ as the source of salvation.
Female members of La Luz del Mundo do not wear jewelry or makeup and are instructed to wear full, long skirts. Women can have their hair as short as their shoulder blades. These restrictions do not apply to recreational activities, where wearing bathing suits is permitted. Women wear a head covering during religious services. According to an interview with one adherent, women in the Church are considered equal to men in social spheres and have equal capacities for obtaining higher education, social careers, and other goals that may interest them. González established the 9 a.m. prayer after hearing about one of his followers who was being abused by her Catholic husband. This prayer became one led by women and is seen as a religious activity equal to all other activities. It provides a space for empowerment in which women can express themselves and develop a status within the congregation 's membership. Anthropologist Patricia Fortuny said, concerning the 9 a.m. prayer, that, "I infer from this that, if the membership considers this as (a) female (gathering), they would be giving authority to women in the religious or ecclesiastical framework of the ritual, and this then (would) put (them) on a plane of equality or (in) absence of subordination to men. '' She said that women of the Church may be playing with their subordinate roles in the Church to acquire certain benefits.
Church women personalize their attire, according to Patricia Fortuny. Rebozos are worn by indigenous members and specially designed veils by other female members. Fortuny says that, "... wearing long skirts does not negate the meaning of being a woman and, although it underlines the difference between men and women, (the Church 's female members) say that it does not make them feel like inferior human beings ''. Fortuny says women describe their attire as part of obeying biblical commands found in 1 Timothy 2: 9, and 1 Corinthians 11: 15 for long hair. Female members say the Church 's dress code makes them feel they are honoring God and that it is part of their "essence ''.
Fortuny also states that the dress codes are a sign of a patriarchal organization because men are forbidden only from growing their hair long or wearing shorts in public. Women, at times, can be more autonomous than those in the general population in Mexico. Fortuny says that the growing trend of educated women having husbands in supporting roles is also seen within the Church, both in the Guadalajara (Mexico) and Houston (Texas) congregations. Many young female members said they want to pursue post-secondary education, and some told Fortuny they were degree students. Both young men and women are equally encouraged to enter post-compulsory education. Fathers who are members of La Luz del Mundo are more likely than mothers to direct their daughters towards attending university.
La Luz del Mundo Church does not practice ordination of women. According to Fortuny, women can become missionaries or evangelizers; the lowest tier of the Church 's hierarchy. She states that "the rank of deaconess is not a position which common women could aspire to ''. Dormady states that the first two deaconesses were Elisa Flores and Francisca Cuevas. Wives of important members of the Church usually get the rank of deaconess, according to Dormady.
Women are active in the Church and play key roles in organizing and administering its activities. Female office holders always head groups of women and not groups of men. A deaconess can help pastors and deacons, but can not herself administer the sacrament. All members of the ministerial hierarchy are paid for their services out of the tithes given by congregation members.
At the turn of the century, La Luz del Mundo Church began promoting women to public relations positions that were previously held by men only. As of December 2014, two women (and three men) serve as legal representatives of the church in Northern Mexico. Public relations positions that have been held by women include spokesperson, director of social communication, and assistant director of international affairs. Within church operated civil organizations women also occupy executive positions such as director of La Luz del Mundo Family Services, a violence prevention and intervention center in South Side, Milwaukee; Director of Social Work and Psychology within the Ministry of Social Welfare; director of the Samuel Joaquín Flores Foundation; president of Recab de México A.C.; and director of the Association of Students and Professionals in the USA.
The Church teaches moral and civil principles such as community service and that science is a gift from God. Members of La Luz del Mundo do not celebrate Christmas or Holy Week. The most important annual rituals are the Holy Supper (Santa Cena in Spanish, or "Santa Convocation ''), held each year on August 14, and the anniversary of García 's birth, held on May 7 at its international headquarters in Guadalajara.
The organization of La Luz del Mundo is apostolic. The head of the Church is Jesus Christ, represented by Naason Joaquin García, who is the "Apostle and Servant of God '' and the organizational authority as General Director of the Church. Below him are the ranks of pastors, who are expected to develop one or more of the qualities of doctor, prophet, and evangelist. All pastors are evangelists and are expected to undertake missionary tasks. As doctors, pastors explain the word of God, and as prophets they interpret it. Below them are the deacons, who administer the sacraments to the congregational members. Below the deacons are the encargados (managers or overseers), who have responsibility for the moral conduct and well - being of certain groups within the congregation. Overseers grant permits to members who wish to leave their congregations for vacations or to take jobs outside of the church district. At the lowest echelon of the hierarchy are the obreros (laborers), who mainly assist their higher - ups with missionary work.
A church, or group, that is unable to fully provide for the religious needs of its members is called a mission. Missions are dependent on a congregation which is administered by a minister. A group of several congregations with their missions form a district. The Church in each nation is divided into multiple districts. In Mexico, several districts form together into five jurisdictions that act as legal entities.
La Luz del Mundo uses the architecture of its temples to express its faith through symbolism and to attract potential converts. Among the Church 's buildings are a replica of a Mayan pyramid in Honduras, a mock Taj Mahal in Chiapas, Mexico, and a Greco - Roman - inspired temple in California. Its flagship temple is located in its headquarters in Hermosa Provincia. Two smaller replicas of this temple are being built in Anchorage, Alaska, and in Chile to symbolize "the northern and southern-most reach of the Church 's missionary efforts. ''
The flagship temple in Guadalajara is pyramidal and has an innovative structure. The project began in 1983, when the Church 's former temple, built to accommodate 8,000 people, was deemed insufficient for the growing number of people who attended various annual celebrations. Construction began on July 3, 1983, when Flores laid the cornerstone, and lasted until August 1, 1992. The temple was completed largely by members of the Church. It is a notable architectural feature of Guadalajara, standing in a working - class district on the outskirts of the city. Dozens of institutions, architects, and engineers were invited to submit proposals for a new temple, and the pyramidal design submitted by Leopoldo Fernández Font was chosen from the final shortlist of four proposals. Fernández Font was later awarded an honorary degree for this and other structures. He said that one of his favorite works is the Temple of the Resurrection, but that the temple of La Luz del Mundo seemed difficult to him. The temple was built to accommodate 12,000 worshipers and is used for annual ceremonies.
The building 's design represents the infinite power and existence of God. It consists of seven levels over a base menorah, each of which symbolizes a step toward the human spirit 's perfection. In February 1991, a laser beacon was installed to commemorate the 449th anniversary of the founding of Guadalajara. In July 1999 the pinnacle of the temple, "La Flama '', was replaced by Aaron 's rod, a twenty - ton bronze sculpture by artist Jorge de la Peña. The installation of the 23 - metre (75 ft) long structure required a special crane.
The main temple in Houston, Texas, was inspired by Greco - Roman architecture. It is the largest temple constructed by La Luz del Mundo in the United States as of 2011. The temple 's pillars resemble the Parthenon, according to religious historian Timothy Wyatt. The front of the building is decorated with carved scenes from the Bible and three panes of stained glass also depict biblical scenes. The temple can hold 4,500 people. The interior has marble floors, glass chandeliers, and wood paneling.
The structure is worth US $18 million and consists of the temple, classrooms, offices, and a parsonage. Next to the temple there is a sitting area with 14 free - standing columns in a circle. Each column represents one of the Apostles -- including Aarón and Samuel Joaquín. On top of the temple, under Aaron 's rod -- the Church 's symbol, which represents God 's power to bring spiritual life to believers -- is a large, golden dome. The symbol is also a reference to the Church 's founder.
Construction of the temple began in 2000 and it was finished in 2005. Most of the construction was done by Church volunteers, who provided funding and a skilled workforce. The structure was designed by Church members and the design was revised by architects to ensure compliance with building codes. The decorations and ornaments were also designed and installed by Church members. The temple serves as a central congregation for southeastern Texas.
There are no definitive statistics for the total membership of La Luz del Mundo. The Church reported having over five million members worldwide in 2000, with 1.5 million in Mexico. It does not appear in the 1990 Mexican census or any earlier census.
The 2000 Mexican census reported about 70,000 members nationwide, and the 2010 census reported 188,326 members. The Church of Jesus Christ of Latter - day Saints, whose numbers also differ significantly from those of the census -- 1,234,545 compared to the census figure of 314,932 -- said ambiguity in the census questionnaire was the source of the disparity. The World Christian Encyclopedia reports 430,000 adherents in Mexico in 2000 and 488,000 in Mexico in 2010. Anthropologist Hugo G. Nutini estimated that the Church had around 1,125,000 members in 2000 in Mexico. In 2008, Fortuny and Williams estimated the membership at 7,000,000. Anthropologist Ávila Meléndez says that the membership numbers reported by La Luz del Mundo are plausible given the great interest it has generated among "religious authorities '' and the following it receives in Mexico.
In El Salvador, as of 2009, there are an estimated 70,000 members of La Luz del Mundo, which had 140 congregations with a minister and 160 other congregations with between 13 and 80 members. As of 2008, there were around 60,000 members of the Church in the United States.
In the wake of the Heaven 's Gate mass suicide, La Luz del Mundo found itself embroiled in a controversy that would play out in some of Mexico 's major media outlets. Former members and anti-cult groups leveled accusations against the church and its leadership. Church members and sympathizers defended the integrity of the church. Academics, meanwhile, denounced a climate of intolerance toward religious minorities in Mexico.
Following the Heaven 's Gate mass suicide on March 26, 1997, Mexican journalists asked which religious group at home could stage similar acts. The following day, in TV Azteca 's flagship program, Jorge Erdely Graham, leader of the obscure anti-cult group Instituto Cristiano de Mexico (Christian Institute of Mexico), pointed to La Luz del Mundo as a group with the potential to commit mass suicide. Erdely 's claims were based on an interview with three church members conducted two years earlier. La Luz del Mundo denied the claims, arguing that the answers of three individuals can not be used to make generalizations about an entire community. Religious scholars Gordon Melton and David Bromley characterized the accusations as "fraudulent reports by ideological enemies. '' Anthropologist Elio Masferrer Kan criticized the methodology employed in the interview, noting that the interviewer cornered the subjects to obtain the desired response.
This incident focused media attention on the church, and on May 4, 1997, the accusations were broadcast on Mexico 's most - watched newscast on Televisa.
On May 18, 1997 (a day after Flores ' 35th wedding anniversary), in a follow - up report on Televisa, a handful of women claimed to have been sexually abused by Flores approximately twenty years earlier. In a third report on August 17, shortly after the church 's most significant holiday, former member Moisés Padilla Íñiguez also accused Flores of sexually abusing him when he was a teenager. These accusations were spearheaded by Erdely 's anti-cult group, which demanded that La Luz del Mundo be stripped of its legal recognition as a religious organization. Four people later filed formal complaints with the state prosecutor, but the statute of limitations for the alleged crimes had passed.
The issue came back to life in February of the following year when, two days before Flores ' birthday, Padilla reported being kidnapped and stabbed by two gunmen. Padilla received 57 shallow slashes from a dagger, which did not put his life in danger, although he could have died from blood loss. Padilla blamed Flores for the stabbing and for an earlier attack in which he was allegedly beaten by men who warned him against criticizing the church leader. A church spokesman denied that the church or Flores had any involvement in the attack and suggested that Padilla may have orchestrated the attack in a desperate attempt to authenticate his previous charges against the church.
Judicial authorities investigating the charges said the alleged victims were not being fully cooperative, whereas former members were suspicious of the Mexican legal system, arguing that it favored the church. Ten years later a spokesman for the state prosecutor said the criminal complaints were unsuccessful because, in addition to the statute of limitations, the accusations were incomplete.
Sociologist Roberto Blancarte called the controversy a "persecution '' fueled by "obscure interests. '' Journalist Carlos Monsiváis described the issue as a defamation campaign. Sociologist Bernardo Barranco described it as a dirty war that was well exploited by the media. Anthropologist Carlos Garma Navarro criticized the fact that the accusations were first brought before the mass media, and thought it very likely that the accusations were an attempt to give the church a bad image. Journalist Gastón Pardo called it a smear campaign characterized by the systematic use of defamation and slander.
In 1995, La Luz del Mundo acquired a vacant nursery building in a commercial zone in Ontario, California. The Church planned to use it for religious activities and was assured that it could as long as building requirements were met. The city then passed a law requiring all new religious organizations to obtain a conditional use permit to operate a church in the commercial zone. In 1998, the Church petitioned for such a permit but concerned residents objected to its plans. María de Lourdes Argüelles, professor at Claremont Graduate University and board member of the Instituto Cristiano de México, led the opposition against the Church, which she called a "destructive sect ''. She said she had seen children and teenagers working overnight on the site under precarious conditions.
Ontario officials met with objecting residents and began researching the Church and checking with cities where Luz del Mundo had temples, but found no problems. After considering zoning questions and citing traffic, parking and disruption of economic plans for that area, the city denied the permit to the Church. La Luz del Mundo then sued the city for denying it use of its own building for services and for allegedly violating its civil rights. The case was settled out of court in 2004, and the Church was allowed to build the temple. The city agreed to pay about US $150,000 in cash and fee credits to the Church. The case was not taken to court because city officials and attorneys concluded the city would most likely lose the case and spend more money than the settlement.
According to Patricia Fortuny, members of La Luz del Mundo, along with members of other Protestant denominations, are treated as "second class citizens ''. She says the church is called a "sect '' in an offensive manner in Mexico. Sociologist Rodolfo Morán Quiroz said that the discrimination, which began with the Catholic Church and which in the past caused La Luz del Mundo to seek help from authorities who promoted religious freedom in establishing its community in Hermosa Provincia, continues in Mexico. Church founder González was beaten by Cristeros and was jailed by the government for preaching in the open air.
In 1995, as thousands of members of the church traveled to the Holy Supper celebration in Guadalajara, several members of a neighboring community supported by Cardinal Juan Sandoval Íñiguez protested the use of schools to provide temporary shelters for the Luz del Mundo pilgrims. The protesters said that after the ceremony the schools were left in disarray; however church authorities presented photographic evidence to newspapers to refute these claims.
According to Armando Maya Castro, many students who are members of the church have been discriminated against for refusing to partake in celebrations and customs concerning the Day of the Dead in their schools, and some have been punished for it. In one case reported by a Mexican newspaper, La Gaceta, a female member of the church was pushed by a fellow bus passenger, who then crossed herself because the member was wearing a long skirt. On July 25, 2008, a public official sealed the entrance to a La Luz del Mundo temple in Puerto Vallarta, Jalisco, trapping the congregation inside until other officials removed the seals. This incident occurred because of complaints from individuals who did not like the presence of the church in the area. Reporter Rodolfo Chávez Calderón stated that the church was in compliance with local laws.
Many female church members have faced discrimination and verbal abuse on buses, in schools, and in hospitals. Church members who were patients in a Mexican hospital were denied access to their ministers in 2011. The hospital required permission from Catholic clergy so that LLDM ministers could visit patients.
Ministers of the church reported that, because of religious intolerance, members at the site of a newly constructed temple in Silao were subject to harassment, vandalism, and physical threats, which caused the ministers to request increased police protection. In February 2012, seventy ministers of La Luz del Mundo from different nations appeared before Mexican authorities in Guadalajara to denounce the lack of police protection for the church 's residents in the city after a series of attacks left several members hospitalized.
|
where is the tree of life mentioned in the bible | Tree of life (biblical) - wikipedia
The tree of life (Hebrew: עֵץ הַחַיִּים , Standard: Etz haChayim) is a term used in the Hebrew Bible that is a component of the world tree motif.
In the Book of Genesis, the tree of life is first described in chapter 2, verse 9 as being planted with the tree of the knowledge of good and evil (Hebrew: עֵץ הַדַּעַת ) "in the midst of the Garden of Eden '' by God. In Genesis 3: 24 cherubim guard the way to the tree of life at the east end of the Garden. There has been some debate as to whether the tree of life and the tree of the knowledge of good and evil are the same tree.
In the Bible outside of Genesis, the term "tree of life '' appears in Proverbs (3: 18; 11: 30; 13: 12; 15: 4) and Revelation (2: 7; 22: 2, 14, 19). It also appears in 2 Esdras (2: 12; 8: 52) and 4 Maccabees (18: 16), which are included among the Jewish apocrypha.
According to the one - tree theory proposed by Karl Budde in his critical research of 1883, there was only one tree in the body of the Genesis narrative, and it was qualified in two ways: first as the tree in the middle of the Garden, and second as the forbidden tree. Claus Westermann gave recognition to Budde 's theory in 1976.
Ellen van Wolde noted in her 1994 survey that among Bible scholars "the trees are almost always dealt with separately and not related to each other '' and that "attention is almost exclusively directed to the tree of knowledge of good and evil, whereas the tree of life is paid hardly any attention. ''
The tree of life is represented in several examples of sacred geometry and is central in particular to the Kabbalah (the mystic study of the Torah), where it is represented as a diagram of ten points.
The Eastern Orthodox Church has traditionally understood the tree of life in Genesis as a prefiguration of the Cross, which humanity could not partake of until after the incarnation, death and resurrection of Jesus.
One of the hymns chanted during the forefeast of the nativity of Christ says:
Make ready, O Bethlehem, for Eden hath been opened for all. Prepare, O Ephratha, for the tree of life hath blossomed forth in the cave from the Virgin; for her womb did appear as a spiritual paradise in which is planted the divine Plant, whereof eating we shall live and not die as did Adam. Christ shall be born, raising the image that fell of old.
The cross of Christ is also referred to as the tree of life, and in the service books, Jesus is sometimes likened to a "divine cluster '' of grapes hanging on the "Tree of the Cross '' from which all partake in Holy Communion.
This theme is also found in Western Christianity. By way of an archetypal example, consider Bonaventure 's "biography '' of the second person of the Trinity, entitled "The Tree of Life '' (see Cousins, The Classics of Western Spirituality Series).
Until the Enlightenment, the Christian church generally gave the biblical narratives of early Genesis the weight of historical narratives. In the City of God (xiii. 20 - 21), Augustine of Hippo offers great allowance for "spiritual '' interpretations of the events in the garden, so long as such allegories do not rob the narrative of its historical reality. However, the allegorical meanings of the early and medieval church were of a different kind than those posed by Kant and the Enlightenment. Precritical theologians allegorized the Genesis events in the service of pastoral devotion. Enlightenment theologians (culminating perhaps in Brunner and Niebuhr in the twentieth century) sought figurative interpretations because they had already dismissed the historical possibility of the story.
Others sought very pragmatic understandings of the tree. In the Summa Theologica (Q97), Thomas Aquinas argued that the tree served to maintain Adam 's biological processes for an extended earthly animal life. It did not provide immortality as such, for the tree, being finite, could not grant infinite life. Hence after a period of time, the man and woman would need to eat again from the tree or else be "transported to the spiritual life. '' The common fruit trees of the garden were given to offset the effects of "loss of moisture '' (note the doctrine of the humors at work), while the tree of life was intended to offset the inefficiencies of the body. Following Augustine in the City of God (xiv. 26), "man was furnished with food against hunger, with drink against thirst, and with the tree of life against the ravages of old age. ''
John Calvin (Commentary on Genesis 2: 8), following a different thread in Augustine (City of God, xiii. 20), understood the tree in sacramental language. Given that humanity can not exist except within a covenantal relationship with God, and all covenants use symbols to give us "the attestation of his grace '', he gives the tree, "not because it could confer on man that life with which he had been previously endued, but in order that it might be a symbol and memorial of the life which he had received from God. '' God often uses symbols - He does n't transfer his power into these outward signs, but "by them He stretches out His hand to us, because, without assistance, we can not ascend to Him. '' Thus he intends man, as often as he eats the fruit, to remember the source of his life, and acknowledge that he lives not by his own power, but by God 's kindness. Calvin denies (contra Aquinas and without mentioning his name) that the tree served as a biological defense against physical aging. This is the standing interpretation in modern Reformed theology as well.
|
who served as pm of india for longest duration | List of Prime Ministers of India - Wikipedia
The Prime Minister of India is the chief executive of the Government of India. In India 's parliamentary system, the Constitution names the President as head of state de jure, but his or her de facto executive powers are vested in the Prime Minister and his Council of Ministers. Appointed and sworn - in by the President, the Prime Minister is usually the leader of the party or alliance that has a majority in the Lok Sabha, the lower house of India 's Parliament.
Since 1947, India has had fourteen Prime Ministers, fifteen including Gulzarilal Nanda, who twice acted in the role. The first was Jawaharlal Nehru of the Indian National Congress party, who was sworn - in on 15 August 1947, when India gained independence from the British. Serving until his death in May 1964, Nehru remains India 's longest - serving prime minister. He was succeeded by fellow Congressman Lal Bahadur Shastri, whose 19 - month term also ended in death. Indira Gandhi, Nehru 's daughter, succeeded Shastri in 1966 to become the country 's first woman premier. Eleven years later, she was voted out of power in favour of the Janata Party, whose leader Morarji Desai became the first non-Congress prime minister. After he resigned in 1979, his former deputy Charan Singh briefly held office until Indira Gandhi was voted back six months later. Indira Gandhi 's second stint as Prime Minister ended five years later on the morning of 31 October 1984, when she was gunned down by her own bodyguards. That evening, her son Rajiv Gandhi was sworn - in as India 's youngest premier, and the third from his family. Thus far, members of the Nehru -- Gandhi dynasty have been Prime Minister for a total of 37 years and 303 days.
Rajiv 's five - year term ended with his former cabinet colleague, V.P. Singh of the Janata Dal, forming the year - long National Front coalition government in 1989. A six - month interlude under Prime Minister Chandra Shekhar followed, after which the Congress party returned to power, forming the government under P.V. Narasimha Rao in June 1991. Rao 's five - year term was succeeded by four short - lived governments -- the Bharatiya Janata Party 's Atal Bihari Vajpayee for 13 days in 1996, a year each under United Front prime ministers H.D. Deve Gowda and I.K. Gujral, and Vajpayee again for 19 months in 1998 -- 99. After Vajpayee was sworn - in for the third time, in 1999, he managed to lead his National Democratic Alliance (NDA) government to a full five - year term, the first non-Congressman to do so. Vajpayee was succeeded by Congressman Manmohan Singh, the first Sikh premier, whose United Progressive Alliance government was in office for 10 years between 2004 and 2014. The incumbent Prime Minister of India is Narendra Modi, who has headed the BJP - led NDA government since 26 May 2014, India 's first non-Congress single - party majority government.
|
what happens in the last episode of stargate universe | Gauntlet (Stargate Universe) - wikipedia
"Gauntlet '' is the twentieth episode of the second season and series finale of the military science fiction television series Stargate Universe. The episode originally aired on May 9, 2011 on Syfy in the United States. The episode was directed by longtime director and producer of the Stargate franchise Andy Mikita. It was written by executive producers Joseph Mallozzi and Paul Mullie.
In this episode, the drones are now monitoring every Stargate from Destiny 's position to the edge of the galaxy, instead of just the stars. With their main supply line blocked, the crew manage to destroy a Command Ship to resupply, only to take considerable damage in the process. Realizing they can not continue fighting, Eli (David Blue) comes up with an idea to put Destiny in one continuous FTL jump until they reach the next galaxy. This journey will take at least three years, and the crew are to use the stasis chambers aboard to keep themselves alive.
This episode represented the final foray into the Stargate franchise until Stargate Origins in 2018.
As Lisa Park is still recovering from T.J. 's treatment to try to restore her vision, Dr. Rush and Eli inform Colonel Young that they have been able to improve Destiny 's sensors, but show that Command Ships await them at every Stargate from where they are to the edge of the galaxy, leaving the crew unable to obtain supplies. Young reports this to Colonel Telford on Earth, but unfortunately, the only known Stargate capable of reaching Destiny remains under the control of the Langarans, who refuse to allow its use.
Dr. Rush proposes a means of tuning the shields to improve their efficiency against the Command Ships ' drone attacks; the crew use this maneuver to buy time to resupply on a nearby planet through the Stargate while resisting a Command Ship 's attack. The drones, finding their weapons ineffective, begin to perform kamikaze attacks on Destiny, causing minor damage. On returning to faster - than - light speed, the crew agree that while the idea worked, they would not survive many more attacks.
Eli offers the idea of keeping Destiny in FTL and traveling through the rest of the galaxy and the void beyond in order to reach Stargates in the next galaxy. The plan would require the crew to use the recently discovered stasis pods, both to reduce the load of life support on the ship 's power supply and to stretch their meager supplies. Though there is a risk the ship would drop out of FTL before then, stranding them on what would then be a thousand - year journey, they all agree they have a better chance at survival than facing the Command Ships.
While the science team programs Destiny 's course and revival systems, the rest of the crew are each given the opportunity to use the communication stones to return to Earth and say their goodbyes to loved ones before being placed in stasis. Eventually, all but Col. Young, Dr. Rush, and Eli have been safely placed in stasis. As they are about to initiate the long jump, they find that one of the last empty pods is not working, and one of the three will have to stay outside; life support could be sustained for two weeks before it would need to be shut off, giving the remaining person that time to try to fix the last chamber. Dr. Rush offers to be the one, but Col. Young confides in Eli that he would not trust Dr. Rush to do what is right if he can not fix the chamber, and believes he himself should stay. Eli refuses to accept this, having been in Dr. Rush 's shadow since they arrived on Destiny, and believes he would have the best chance of survival as he is smarter than Dr. Rush. Col. Young accepts the decision, and Eli helps to put them in stasis. As power to the rest of the ship is shut down, Eli goes to the observation deck and silently watches the passing starscape.
Prior to the script being written, the original pitch called for Colonel Young and Dr. Rush to be the last two people remaining with only one working stasis pod. The decision about who would be left out was decided by a coin flip, but the result would remain unknown to the audience, essentially setting up the season three premiere. The pitch for the season three premiere had Dr. Rush maintaining Destiny 's systems in his solitude until Earth eventually found a way to dial the ship. This would then only prove to be a dream that Dr. Rush has whilst he is in stasis, as Colonel Young is revealed to be the one left out. However, because of the lack of action, the "it was all a dream '' scenario, and Dr. Rush 's character development, it was deemed unusable and, therefore, the original season two finale pitch would not work either.
Joseph Mallozzi had the task of writing the first draft for "Gauntlet. '' At the time, Mallozzi was under the impression that the show would continue into a third season, so he wrote the episode as if it were a season finale. The supper scene was originally planned to be only a montage of the crew having their last meal. However, SyFy 's Erika Kennair requested a change: after Mallozzi wrote Colonel Young 's speech, it was tweaked by Paul Mullie to include a reference to "three years, '' which alluded to the time it would take for Destiny to reach its destination and the show 's expected run.
"Gauntlet '' was viewed by 1.134 million live viewers, resulting in a 0.8 Household rating, a 0.2 among adults 18 -- 49.
Meredith Woerner from io9 called the finale "more like a bittersweet goodbye than an action - packed cliffhanger, even though it ended with a much more frightening conclusion than the first season. '' She was reminded of the failures of the first season, saying that "Perhaps Eli said it best in this episode: What 's the point of having tremendous potential if you 're not going to step up when you 're really needed? Stargate Universe started off with mounds of potential... Sadly, it was just too little too late. '' However, once again she praised the dynamics between Eli and Dr. Rush, remarking that "Even when Rush finally admits to Eli that he 's full of potential and gives him the pat on the head he 's been shirking the whole series... This was n't about Eli seeking acceptance, it was about Rush humbling himself for the first time in a long time. The mentor steps down, and Eli takes up the mantle of hero. '' Mike Moody from TVSquad declared "Gauntlet '' "a strong, densely plotted, emotional hour. '' He enjoyed the overall theme of the episode, which spoke volumes about the characters working as a family rather than trying to focus on "the search for the God signal that was said to be Destiny 's true purpose. '' Like Woerner, Moody praised the dynamics between Eli and Dr. Rush, adding that "These two shared a satisfying scene where the elder scientist finally admitted to the slacker genius that he had tremendous potential. '' Moody had particular praise for Eli 's character development, saying that "Leaving the fates of the ship and its crew in Eli 's hands felt appropriate... Eli, like many of us at some point in our lives, was faced with the difficult task of believing in himself. He still looked like the videogame - obsessed slacker we met in the pilot, but the heart of a true hero. '' Ramsey Isler from IGN remarked that "All things considered, this was as good of a finale as we could expect. '' Isler had particular praise for the writing, expressing that "We should never take for granted how hard it is to come up with plausible scenarios for sci - fi shows... This story works as well as it does because these are all smart ideas, and the creative team deserves credit for all it. '' Like Moody, Isler also praised Eli, saying that "The greatest achievement of SGU is that it told a grand and poignant tale of a wide - eyed, lonely, insecure boy growing into a brilliant man, and leading a life of adventure that few could ever imagine. '' Isler was also reminded of the many failures that led to the show 's demise, but claimed that "this particular episode showed few flaws, and gave us reasons to recall just how much potential this series had. '' Overall, Isler said "As sweet as the final moments of this series are, there is a certain bitter aftertaste too. ''
|
what kind of crops are grown in north korea | Agriculture in North Korea - wikipedia
Farming in North Korea is concentrated in the flatlands of the four west coast provinces, where a longer growing season, level land, adequate rainfall, and good irrigated soil permit the most intensive cultivation of crops. A narrow strip of similarly fertile land runs through the eastern seaboard Hamgyŏng provinces and Kangwŏn Province.
The interior provinces of Chagang and Ryanggang are too mountainous, cold, and dry to allow much farming. The mountains contain the bulk of North Korea 's forest reserves while the foothills within and between the major agricultural regions provide lands for livestock grazing and fruit tree cultivation.
Major crops include rice and potatoes. 23.4 % of North Korea 's labor force worked in agriculture in 2012.
North Korea 's sparse agricultural resources limit agricultural production. Climate, terrain, and soil conditions are not particularly favorable for farming, with a relatively short cropping season. Only about 17 % of the total landmass, or approximately 20,000 km², is arable, of which 14,000 km² is well suited for cereal cultivation; the major portion of the country is rugged mountain terrain.
The weather varies markedly according to elevation, and lack of precipitation, along with infertile soil, makes land at elevations higher than 400 meters unsuitable for purposes other than grazing. Precipitation is geographically and seasonally irregular, and in most parts of the country as much as half the annual rainfall occurs in the three summer months. This pattern favors the cultivation of paddy rice in warmer regions that are outfitted with irrigation and flood control networks. Rice yields are 5.3 tonnes per hectare, close to international norms.
Rice is North Korea 's primary farm product.
Potatoes have become an important food source in North Korea. After the 1990s famine, a "potato revolution '' took place. Between 1998 and 2008 the area of potato cultivation in North Korea quadrupled to 200,000 ha, and per capita consumption increased from 16 to 60 kilograms (35 to 132 lb) per year.
The potato was considered a second grade food item, but has become the main staple in rural areas, replacing rice.
Since 2014 many greenhouses have been built, funded by the new semi-private traders in co-operation with farmers, growing soft fruits such as strawberries and melons. The traders arrange distribution and sale in the Jangmadang markets in cities.
Since the 1950s, a majority of North Koreans have received their food through the Public Distribution System (PDS). The PDS requires farmers in agricultural regions to hand over a portion of their production to the government and then reallocates the surplus to urban regions, which can not grow their own foods. About 70 % of the North Korean population, including the entire urban population, receives food through this government - run system.
Before the floods, recipients were generally allotted 600 - 700 grams per day while high officials, military men, heavy laborers, and public security personnel were allotted slightly larger portions of 700 - 800 grams per day. As of 2013, the target average distribution was 573 grams of cereal equivalent per person per day, but varied according to age, occupation, and whether rations are received elsewhere (such as school meals).
Decreases in production affected the quantity of food available through the public distribution system. Shortages were compounded when the North Korean government imposed further restrictions on collective farmers. When farmers, who had never been covered by the PDS, were mandated by the government to reduce their own food allotments from 167 kilograms to 107 kilograms of grain per person each year, they responded by withholding portions of the required amount of grain. Famine refugees reported that the government decreased PDS rations to 150 grams in 1994 and to as low as 30 grams by 1997.
The PDS failed to provide any food from April to August 1998 (the "lean '' season) as well as from March to June 1999. In January 1998, the North Korean government publicly announced that the PDS would no longer distribute rations and that families needed to somehow procure their own food supplies. By 2005 the PDS was only supplying households with approximately one half of an absolute minimum caloric need. By 2008 the system had significantly recovered, and from 2009 to 2013 daily per person rations averaged at 400 grams per day for much of the year, though in 2011 it dropped to 200 grams per day from May to September.
It is estimated that in the early 2000s, the average North Korean family drew some 80 % of its income from small businesses that were technically illegal (though unenforced) in North Korea. In 2002, and in 2010, private markets were progressively legalized. As of 2013, urban and farmer markets were held every 10 days, and most urban residents lived within 2 km of a market, with markets having an increasing role in obtaining food.
Since self - sufficiency remains an important pillar of North Korean ideology, self - sufficiency in food production is deemed a worthy goal. Another aim of government policies -- to reduce the "gap '' between urban and rural living standards -- requires continued investment in the agricultural sector. Finally, as in most countries, changes in the supply or prices of foodstuffs probably are the most conspicuous and sensitive economic concerns for the average citizen. The stability of the country depends on steady, if not rapid, increases in the availability of food items at reasonable prices. In the early 1990s, there were severe food shortages.
The most far - reaching statement on agricultural policy is embodied in Kim Il - sung 's 1964 Theses on the Socialist Agrarian Question in Our Country, which underscores the government 's concern for agricultural development. Kim emphasized technological and educational progress in the countryside as well as collective forms of ownership and management.
As industrialization progressed, the share of agriculture, forestry, and fisheries in the total national output declined from 63.5 % and 31.4 %, respectively, in 1945 and 1946, to a low of 26.8 % in 1990. Their share in the labor force also declined from 57.6 % in 1960 to 34.4 % in 1989.
In the 1990s decreasing ability to carry out mechanized operations (including the pumping of water for irrigation), as well as lack of chemical inputs, was clearly contributing to reduced yields and increased harvesting and post-harvest losses.
Incremental improvements in agricultural production have been made since the late 1990s, bringing North Korea close to self - sufficiency in staple foods by 2013. In particular rice yields have steadily improved, though yields on other crops have generally not improved. The production of protein foods remains inadequate. Access to chemical fertilizer has declined, but the use of compost and other organic fertilizer has been encouraged.
From 1994 to 1998 North Korea suffered a famine. Since 1998 there has been a gradual recovery in agriculture production, which by 2013 brought North Korea back close to self - sufficiency in staple foods. However, as of 2013, most households have borderline or poor food consumption, and consumption of protein remains inadequate.
In the 1990s the North Korean economy saw stagnation turning into crisis. Economic assistance received from the USSR and China was an important factor of its economic growth. In 1991 USSR collapsed, withdrew its support and demanded payment in hard currency for imports. China stepped in to provide some assistance and supplied food and oil, most of it reportedly at concessionary prices. But in 1994 China reduced its exports to North Korea. The rigidity in the political and economic systems of North Korea left the country ill - prepared for a changing world. The North Korean economy was undermined and its industrial output began to decline in 1990.
Deprived of industrial inputs, including fertilizers, pesticides, and electricity for irrigation, agricultural output also started to decrease even before North Korea had a series of natural disasters in the mid-1990s. This evolution, combined with a series of natural disasters including record floods in 1995, caused one of the worst economic crises in North Korea 's history. Other causes of this crisis were high defense spending (about 25 % of GDP) and bad governance. It is estimated that between 1992 and 1998 North Korea 's economy contracted by 50 % and several hundred thousand (possibly up to 3 million) people died of starvation.
North Korea announced in December 1993 a 3 - year transitional economic policy placing primary emphasis on agriculture, light industry, and foreign trade. A lack of fertilizer, natural disasters, and poor storage and transportation practices have left the country more than a million tons per year short of grain self - sufficiency. Moreover, lack of foreign exchange to purchase spare parts and oil for electricity generation left many factories idle.
The 1990s famine paralyzed many of the Marxist -- Leninist economic institutions. The government pursued Kim Jong Il 's Songun policy, under which the military is deployed to direct production and infrastructure projects. As a consequence of the government 's policy of establishing economic self - sufficiency, the North Korean economy has become increasingly isolated from that of the rest of the world, and its industrial development and structure do not reflect its international competitiveness.
The food shortage was a direct result of the massive flooding combined with political failure and the country 's limited arable land. In 2004, more than half (57 %) of the population did not have enough food to stay healthy; 37 % of children had stunted growth and a third of mothers were severely undernourished.
In 2006, the World Food Program (WFP) and FAO estimated a requirement of 5.3 to 6.5 million tons of grain, while domestic production fulfilled only 3.825 million tons. The country also faces land degradation, as forests stripped for agriculture have led to soil erosion. Harsh weather conditions that dented agricultural output (wheat and barley production dropped 50 % and 80 % respectively in 2011) and rising global food prices worsened the food shortage, putting 6 million North Koreans at risk.
With a dramatic increase in the reliance on private sales of goods, as well as increased international aid, the situation has improved somewhat, and undernourishment was no longer a major concern for most North Koreans as of 2014, although the Public Distribution System still continues.
|
who is the minister of trade and industry in ghana | Minister for Trade and Industry (Ghana) - wikipedia
The Minister for Trade and Industry is the Ghana government official responsible for both internal and external trade as well as the promotion of Ghanaian industries.
The Minister for Trade and Industry since 16 July 2014 is Ekwow Spio - Garbrah.
|
tell me that you love me song download | Tell Me You Love Me (song) - wikipedia
"Tell Me You Love Me '' is a song by American singer - songwriter Demi Lovato. It was written by Kirby Lauryen, Stint and John Hill, with production handled by the latter two. It was released through Hollywood, Island and Safehouse Records on August 23, 2017, as the title track from Lovato 's sixth studio album, Tell Me You Love Me (2017).
Lovato published a black - and - white teaser on August 23, 2017, on social media, announcing the new album, with "Tell Me You Love Me '' playing in the background. The clip shows Lovato singing the song in a studio, as it fades into the album artwork.
"Tell Me You Love Me '' is written in the key of E ♭ minor with a tempo of 72 beats per minute in common time. The song follows a chord progression of E ♭ m -- D ♭ -- C ♭, and Lovato 's vocals span from A ♭ to G ♯.
Elias Leight of Rolling Stone called the song "a swelling ballad full of horns and handclaps ''. Mike Wass of Idolator described the song as "a fiery mid-tempo anthem ''. Jeff Benjamin of Fuse called the song a "booming ballad ''. Deepa Lakshmin of MTV News described Lovato 's vocals as "sweeping ''. Raisa Bruner of Time described the song as a "lush and dramatic love song '' that had "powerful hand - clap chorus and big horns ''.
|
when did the chicago bulls became a team | Chicago Bulls - wikipedia
The Chicago Bulls are an American professional basketball team based in Chicago, Illinois. The Bulls compete in the National Basketball Association (NBA) as a member of the league 's Eastern Conference Central Division. The team was founded on January 16, 1966. The team plays its home games at the United Center, an arena shared with the Chicago Blackhawks of the National Hockey League (NHL).
The Bulls saw their greatest success during the 1990s, when they were responsible for popularizing the NBA worldwide. They are known for having one of the NBA 's greatest dynasties, winning six NBA championships between 1991 and 1998 with two three - peats. All six championship teams were led by Hall of Famers Michael Jordan, Scottie Pippen and coach Phil Jackson. The Bulls are the only NBA franchise to win multiple championships and never lose an NBA Finals series in their history.
The Bulls won 72 games during the 1995 -- 96 NBA season, setting an NBA record that stood until the Golden State Warriors won 73 games during the 2015 -- 16 NBA season. The Bulls were the first team in NBA history to win 70 games or more in a single season, and the only NBA franchise to do so until the 2015 -- 16 Warriors. Many experts and analysts consider the 1996 Bulls to be one of the greatest teams in NBA history.
As of 2017, the Bulls are the fourth most valuable NBA franchise according to Forbes. Michael Jordan and Derrick Rose have both won the NBA Most Valuable Player Award while playing for the Bulls, for a total of six MVP awards.
The Bulls share rivalries with the Detroit Pistons, New York Knicks, and the Miami Heat. The Bulls ' rivalry with the Pistons was highlighted heavily during the late 1980s and early 1990s.
On January 16, 1966 Chicago was granted an NBA franchise to be called the Bulls. The Chicago Bulls became the third NBA franchise in the city, after the Chicago Stags (1946 -- 1950) and the Chicago Packers / Zephyrs (now the Washington Wizards). The Bulls ' founder, Dick Klein, was the Bulls ' only owner to ever play professional basketball (for the Chicago American Gears). He served as the Bulls ' president and general manager in their initial years.
After the 1966 NBA Expansion Draft, the newly founded Chicago Bulls were allowed to acquire players from the previously established teams in the league for the upcoming 1966 -- 67 season. The team started in the 1966 -- 67 NBA season, and posted the best record by an expansion team in NBA history. Coached by Chicagoan and former NBA star Johnny "Red '' Kerr, and led by former NBA assist leader Guy Rodgers, guard Jerry Sloan and forward Bob Boozer, the Bulls qualified for the playoffs, the only NBA team to do so in their inaugural season.
In their first two seasons, the Bulls played most of their home games at the International Amphitheatre, before moving to Chicago Stadium.
Fan interest was diminishing after four seasons, with one game in the 1968 season having an official attendance of 891 and some games being played in Kansas City. In 1969, Klein stepped down from the general manager job and hired Pat Williams, who, as the Philadelphia 76ers ' business manager, had created promotions that helped that team finish third in attendance the previous season. Williams revamped the team roster, acquiring Chet Walker from his old team in exchange for Jim Washington and drafting Norm Van Lier -- who was traded to the Cincinnati Royals and only joined the Bulls in 1971 -- while also investing in promotion, with actions such as creating mascot Benny the Bull. The Bulls under Williams and head coach Dick Motta qualified for four straight playoffs and saw attendance grow to over 10,000. In 1972, the Bulls set a franchise record with 57 wins and 25 losses. During the 1970s, the Bulls relied on Jerry Sloan, forwards Bob Love and Chet Walker, point guard Norm Van Lier, and centers Clifford Ray and Tom Boerwinkle. The team made the conference finals in 1975 but lost to the eventual champions, the Golden State Warriors, 4 games to 3.
After four 50 - win seasons, Williams returned to Philadelphia, and Motta decided to take on the role of GM as well. The Bulls ended up declining, winning only 24 games in the 1975 -- 1976 season. Motta was fired and replaced by Ed Badger.
Artis Gilmore, acquired in the ABA dispersal draft in 1976, led a Bulls squad which included guard Reggie Theus, forward David Greenwood and forward Orlando Woolridge.
In 1979, the Bulls lost a coin flip for the right to select first in the NBA draft (Rod Thorn, the Bulls ' General Manager, called "heads ''). Had the Bulls won the toss, they would have selected Magic Johnson; instead, they selected David Greenwood with the second pick. The Los Angeles Lakers selected Johnson with the pick acquired from the New Orleans Jazz, who traded the selection for Gail Goodrich.
After Gilmore was traded to the San Antonio Spurs for center Dave Corzine, the Bulls employed a high - powered offense centered on Theus, which soon included guards Quintin Dailey and Ennis Whatley. However, with continued dismal results, the Bulls decided to change direction, trading Theus to the Kansas City Kings during the 1983 -- 84 season.
In the summer of 1984, the Bulls had the third pick of the 1984 NBA draft, after Houston and Portland. The Rockets selected Hakeem Olajuwon, the Blazers picked Sam Bowie and the Bulls chose shooting guard Michael Jordan. The team, with new management in owner Jerry Reinsdorf and general manager Jerry Krause, decided to rebuild around Jordan. Jordan set franchise records during his rookie campaign for scoring (third in the league) and steals (fourth), and led the Bulls back to the playoffs, where they lost in four games to the Milwaukee Bucks. For his efforts, he was rewarded with a selection to the All - NBA Second Team and the NBA Rookie of the Year Award.
In the following off - season, the team acquired point guard John Paxson and on draft day traded with the Cavaliers for the rights to power forward Charles Oakley. Along with Jordan and center Dave Corzine, they provided much of the Bulls ' offense for the next two years. After suffering a broken foot early in the 1985 -- 86 season, Jordan finished second on the team to Woolridge in scoring. Jordan returned for the playoffs, and led the eighth - place Bulls against the 67 -- 15 Boston Celtics, led by Larry Bird. At the time, the Bulls had the fifth worst record of any team to qualify for the playoffs in NBA history. Though the Bulls were swept, Jordan recorded a playoff single - game record 63 points in Game 2 (which still stands to this day), prompting Bird to call him ' God disguised as Michael Jordan. '
In the 1986 -- 87 NBA season, Jordan continued his assault on the record books, leading the league in scoring with 37.1 points per game and becoming the first Bull named to the All - NBA First Team. The Bulls finished 40 -- 42, which was good enough to qualify them for the playoffs. However, they were again swept by the Celtics in the playoffs. In the 1987 draft, to address their lack of depth, Krause selected center Olden Polynice eighth overall and power forward Horace Grant 10th overall, then sent Polynice to Seattle in a draft - day trade for the fifth selection, small forward Scottie Pippen. With Paxson and Jordan in the backcourt, Brad Sellers and Oakley at the forward spots, Corzine anchoring center, and rookies Pippen and Grant coming off the bench, the Bulls won 50 games and advanced to the Eastern Conference semifinals, where they were beaten by the eventual Eastern Conference Champions Detroit Pistons in five games. For his efforts, Jordan was named NBA Most Valuable Player, an award he would win four more times over his career. The 1987 -- 88 season would also mark the start of the Pistons - Bulls rivalry which was formed from 1988 to 1991.
The 1988 -- 89 season marked a second straight year of major off - season moves. Power forward Charles Oakley, who had led the league in total rebounds in both 1987 and 1988, was traded on the eve of the 1988 NBA draft to the New York Knicks, along with a first round draft pick the Knicks used to select Rod Strickland, in exchange for center Bill Cartwright and a first round pick, which the Bulls used to obtain center Will Perdue. In addition, the Bulls acquired three - point shooter Craig Hodges from Phoenix. The new starting lineup of Paxson, Jordan, Pippen, Grant, and Cartwright took some time to mesh, winning fewer games than the previous season, but made it all the way to the Eastern Conference Finals, where they were defeated in six games by the NBA champion Pistons.
In 1989 -- 90, Jordan led the league in scoring for the fourth straight season, and was joined on the all - star squad for the first time by Pippen. There was also a major change during the off - season, when head coach Doug Collins was replaced by assistant coach Phil Jackson. The Bulls also picked up rookie center Stacey King and rookie point guard B.J. Armstrong in the 1989 draft. With these additional players and the previous year 's starting five, the Bulls again made it to the Conference Finals, and pushed the Pistons to seven games before being eliminated for the third straight year, the Pistons going on to repeat as NBA champions.
In the 1990 -- 91 season, the Bulls recorded a then - franchise record 61 wins, and romped through the playoffs, where they swept the Knicks in the first round, defeated the Philadelphia 76ers in the semifinals, then eliminated defending champion Pistons in the Conference Finals and won the NBA Finals in five games over the Magic Johnson - led Los Angeles Lakers.
The Bulls won their second straight title in 1992 after racking up another franchise record for wins with 67. They swept the Miami Heat in the first round, defeated the Knicks in seven hard - fought games in the second round, then beat the Cleveland Cavaliers in six games in the Eastern Conference Finals to reach the NBA Finals for the second year in a row, where they defeated the Clyde Drexler - led Portland Trail Blazers in six games.
In 1993, the Bulls won their third consecutive championship by defeating the Atlanta Hawks, Cleveland Cavaliers and New York Knicks in the first three rounds and then defeating regular season MVP Charles Barkley and the Phoenix Suns, with John Paxson 's three - pointer with 3.9 seconds left giving them a 99 -- 98 victory in Game 6 in Phoenix, Arizona.
On October 6, 1993, Michael Jordan shocked the basketball community by announcing his retirement, three months after his father 's murder. The Bulls were then led by Scottie Pippen, who established himself as one of the top players in the league by winning the 1994 All - Star MVP. He received help from Horace Grant and B.J. Armstrong, who were named to their first all - star games. The three were assisted by Cartwright, Perdue, shooting guard Pete Myers, and Croatian rookie forward Toni Kukoč. Despite the Bulls winning 55 games during the 1993 -- 94 season, they were beaten in seven games by the Knicks in the second round of the playoffs, after a controversial foul call by referee Hue Hollins in game 5 of that series. The Knicks eventually reached the finals that year, but lost to the Houston Rockets. The Bulls opened the 1994 -- 95 season by leaving their home of 27 years, Chicago Stadium, and moving into their current home, the United Center.
In 1994, the Bulls lost Grant, Cartwright and Scott Williams to free agency, and John Paxson to retirement, but picked up shooting guard Ron Harper, the seeming heir apparent to Jordan in assistant coach Tex Winter 's triple - post offense, and small forward Jud Buechler. The Bulls started Armstrong and Harper in the backcourt, Pippen and Kukoc at the forward spots, and Perdue at center. They also had sharpshooter Steve Kerr, whom they acquired via free agency before the 1993 -- 94 season, Myers, and centers Luc Longley (acquired via trade in 1994 from the Minnesota Timberwolves) and Bill Wennington. The team struggled during the season, but on March 18, 1995, came the news that Michael Jordan was coming out of retirement. He scored 55 points against the Knicks in only his fifth game back, and led the Bulls to the fifth seed in the playoffs, where they defeated the Charlotte Hornets. However, Jordan and the Bulls were unable to overcome the eventual Eastern Conference champion Orlando Magic, which included Horace Grant, Anfernee Hardaway, and Shaquille O'Neal. When Jordan returned to the Bulls, he initially wore No. 45, the number he had worn while playing for the Birmingham Barons, a minor - league affiliate of the Chicago White Sox. He had chosen No. 45 because his older brother Larry wore that number in high school; Michael, wanting to be half as good as his brother, had originally picked 23, which is half of 45 (22.5) rounded up. Jordan switched back to the familiar 23 before Game 2 of the Orlando Magic series.
In the off - season, the Bulls lost Armstrong in the expansion draft, and Krause traded Perdue to the San Antonio Spurs for rebounding specialist Dennis Rodman, who had won the past four rebounding titles, and who had also been a member of the Detroit Pistons ' "Bad Boys '' squad that served as the Bulls ' chief nemesis in the late 1980s.
With a lineup of Harper, Jordan, Pippen, Rodman and Longley, and perhaps the league 's best bench in Steve Kerr, Kukoc, Wennington, Buechler, and guard Randy Brown, the Bulls posted one of the best single - season improvements in league history and the best single - season record at that time, moving from 47 -- 35 to 72 -- 10, becoming the first NBA team to win 70 or more games. Jordan won his eighth scoring title, and Rodman his fifth straight rebounding title, while Kerr finished second in the league in three - point shooting percentage. Jordan garnered the elusive triple crown with the NBA MVP, NBA All - Star Game MVP, and NBA Finals MVP. Krause was named NBA Executive of the Year, Jackson Coach of the Year, and Kukoc the Sixth Man of the Year. Both Pippen and Jordan made the All - NBA First Team, and Jordan, Pippen, and Rodman made the All - Defensive First Team, making the Bulls the only team in NBA history with three players on the All - Defensive First Team.
In addition, the 1995 -- 96 team holds several other records, including the best road record in a standard 41 - road - game season (33 -- 8), the all - time best start by a team (41 -- 3), and the best start at home (37 -- 0). The Bulls also posted the second - best home record in history (39 -- 2), behind only the 1985 -- 86 Celtics 40 -- 1 home mark. The team triumphed over the Miami Heat in the first round, the New York Knicks in the second round, the Orlando Magic in the Eastern Conference Finals and finally Gary Payton, Shawn Kemp and the Seattle SuperSonics for their fourth title. The 1995 -- 96 Chicago Bulls are widely regarded as one of the greatest teams in the history of basketball.
In the 1996 -- 97 season, the Bulls narrowly missed out on a second consecutive 70 - win season by losing their final two games to finish 69 -- 13. They repeated their home dominance, going 39 -- 2 at the United Center. The Bulls capped the season by defeating the Bullets, Hawks and Heat in the first three rounds of the playoffs en route to winning their fifth NBA championship over John Stockton, Karl Malone and the Utah Jazz. Jordan earned his second straight and ninth career scoring title, while Rodman earned his sixth straight rebounding title. Jordan and Pippen, along with Robert Parish, then a member of the Bulls, were also named among the 50 greatest players of all time as the NBA celebrated its 50th season. Parish, whose single season with the Bulls would be his last in the league, was honored for his stellar career with the Boston Celtics.
The 1997 -- 98 season was one of turmoil for the NBA champion Bulls. Many speculated this would be Michael Jordan 's final season with the team. Phil Jackson 's future with the team was also questionable, as his relationship with team general manager Jerry Krause was one of growing tension. Scottie Pippen was looking for a significant contract extension that he thought he deserved, but was not getting from the organization. In spite of the turmoil that surrounded the Bulls, they still had a remarkable season, with a final regular - season record of 62 -- 20. Michael Jordan would be named the league MVP for the fifth and final time, and the Bulls went into the playoffs as the number one seed in the Eastern Conference.
The first round of the playoffs for the Bulls was against the New Jersey Nets, a team led by Keith Van Horn, Kendall Gill and Sam Cassell. The Bulls swept the Nets 3 -- 0 in the best - of - five series. The conference semifinals were more challenging, with the Charlotte Hornets stealing Game 2 from the Bulls at the United Center and tying the series 1 -- 1, but the Bulls easily defeated the Hornets in the next three games. The Conference Finals were a challenge for the Bulls as they went up against the Reggie Miller - led Indiana Pacers, whom many experts considered to have the best chance of defeating the Bulls. The Pacers denied the Bulls any road wins, taking Games 3, 4 and 6 and sending the series to a deciding Game 7 at the United Center. The Bulls prevailed, beating the Pacers 88 -- 83 to win their 6th Eastern Conference Championship.
In a much anticipated Finals, The Bulls faced the team they beat the previous year, the Utah Jazz. Led by Karl Malone and John Stockton, the Jazz felt confident that they could defeat the Bulls, winning game one at Utah 's Delta Center. Facing a potential two to nothing deficit, the Bulls won Game 2 at the Delta Center and tied the series. The Bulls returned to the United Center and, by winning the next two games, took a 3 -- 1 series lead. The Jazz won Game 5 by two points, 83 -- 81. Game 6 was a tough battle for both teams, as the Jazz had a lead late in the game. Down by three points to the Jazz, Michael Jordan led the Bulls to one final win. Jordan hit a shot to bring the Bulls within 1, then stole the ball from Karl Malone and hit the game winning shot with 5.2 seconds remaining on the clock. With a score of 87 -- 86, John Stockton put up a three - pointer, but missed, giving the Bulls their sixth championship in eight years. Jordan would be named the Finals MVP for the sixth time in his career. He retired for the second time on January 13, 1999.
The summer of 1998 brought an abrupt end to the championship era. Krause felt that the Bulls were on the verge of being too old and unable to compete. He decided that the team 's only choices were to rebuild or endure a slow decline. His plan was to trade away the aging talent and acquire high draft picks while clearing salary cap space to make a run at several promising free agents in two years ' time. After having been vetoed in a previous attempt by owner Jerry Reinsdorf, Krause traded Scottie Pippen for Roy Rogers (who was released in February 1999) and a conditional second - round draft pick from the Houston Rockets. He also decided not to re-sign Dennis Rodman, and traded Luc Longley and Steve Kerr for other draft picks. He hired a new coach, Tim Floyd, who had run a successful program at Iowa State University. Upon Phil Jackson 's departure, Michael Jordan made his second retirement official. With a new starting lineup of point guard Randy Brown, shooting guard Ron Harper, newcomer Brent Barry at small forward, power forward Toni Kukoč, and center Bill Wennington, the team began the lockout - shortened 1998 -- 99 season. Kukoc led the team in scoring, rebounding, and assists, but the team won only 13 of 50 games. The lowest point of the season came on April 10 in a game against the Miami Heat. In that game, the Bulls scored 49 points to set an NBA record for the fewest points in a game in the shot clock era.
The previous year 's dismal finish came with one highlight: the team won the draft lottery and the rights to power forward Elton Brand. Since the team lost Harper, Wennington and Barry in the offseason, Brand and fellow rookie Ron Artest led the team throughout the year, especially after Kukoc missed most of the season due to injury and was then dealt for a draft pick at the trading deadline. Brand recorded the first 20 -- 10 average for the Bulls since the days of Artis Gilmore. He led all rookies in scoring, rebounds, blocks, field goal percentage and minutes, while Artest led all rookies in steals and finished second on the team in scoring. For his efforts Brand was named 1999 -- 2000 co-Rookie of the Year with Houston 's Steve Francis, and to the all - rookie first team, while Artest was named to the all - rookie second team. However, the team established a franchise low at 17 -- 65, second worst in the league.
After a summer in which major and minor free agents alike (Tim Duncan, Grant Hill, Tracy McGrady, Eddie Jones and even Tim Thomas) chose to stay with their teams or sign elsewhere rather than join the Bulls, Krause signed free agent center Brad Miller and shooting guard Ron Mercer, drafted power forward Marcus Fizer, and traded draft pick Chris Mihm to Cleveland for the rights to guard Jamal Crawford. Brand again led the team in scoring and rebounds with another 20 -- 10 season, but the new acquisitions failed to make a major impact, and the Bulls finished with the worst record in team history and the league 's worst for the season at 15 -- 67.
Krause shocked Bulls fans on draft day in 2001 when he traded franchise player Brand to the Los Angeles Clippers for the second pick in the draft, Tyson Chandler. He also selected Eddy Curry with the fourth pick. Since both Chandler and Curry came straight out of high school, neither was expected to make much of a contribution for several years, but they were seen as potential franchise players. The team floundered without veteran leadership. At mid-season, the Bulls traded their top three scorers -- Mercer, Artest, and Miller along with Kevin Ollie -- to the Indiana Pacers for veteran guard Jalen Rose, Travis Best and Norman Richardson. There was also a change in coaching, with Floyd being dismissed in favor of assistant coach and former Bulls co-captain Bill Cartwright, following a series of arguments with players and management. The Bulls improved from 15 to 21 wins, although they were still tied for last in the league.
For the 2002 -- 03 season, the Bulls entered play with much optimism. They picked up college phenom Jay Williams with the second pick in the draft. Williams teamed with Jalen Rose, Crawford, Fizer, newcomer Donyell Marshall, Curry, Chandler, and guard Trenton Hassell to form a young and exciting nucleus that improved to 30 -- 52 in Bill Cartwright 's first full season as head coach. Curry led the league in field goal percentage, becoming the first Bull since Jordan to lead the league in a major statistical category.
During the summer of 2003, long - time GM Jerry Krause retired, and former player and color commentator John Paxson was tapped as his successor. Jay Williams, coming off a promising rookie campaign, was seriously injured in a motorcycle accident. His contract was bought out by the Bulls in February 2004 and he has yet to return to the game. Paxson selected point guard Kirk Hinrich with the seventh pick in the draft, and signed veteran free agent and former franchise player Scottie Pippen. With Pippen playing, Cartwright at the sidelines, and Paxson in the front office, the Bulls hoped that some of the championship magic from before would return.
However, the 2003 -- 04 season was a resounding disappointment. Eddy Curry regressed, leading to questions about his conditioning and commitment. Tyson Chandler was plagued by a chronic back injury, missing more than thirty games. Pippen 's ability to influence games was impaired by knee problems, and he openly contemplated retirement. Jamal Crawford remained inconsistent. Bill Cartwright was fired as head coach in December and replaced with former Phoenix coach Scott Skiles. A trade with the Toronto Raptors brought Antonio Davis and Jerome Williams in exchange for Rose and Marshall in what was seen as a major shift in team strategy from winning with athleticism to winning with hard work and defense. After struggling throughout the season, the Bulls finished with 23 wins and 59 losses, the second - worst record in the league. Fizer was not re-signed, and Crawford was re-signed and traded to the Knicks for expiring contracts. Hinrich provided the lone bright spot, becoming a fan favorite for his gritty determination and tenacious defense. He won a place on the All - Rookie first team.
During the 2004 off - season, Paxson traded a 2005 draft pick to the Phoenix Suns in return for an additional pick in the 2004 NBA draft. He used the picks to select Connecticut guard Ben Gordon and Duke small forward Luol Deng in the first round, and Duke point guard Chris Duhon in the second. Paxson also signed free agent small forward Andrés Nocioni, who had recently won an Olympic gold medal as a member of the Argentine national team. After losing the first nine games of the season, the Bulls began to show signs of improvement behind stronger team defense and clutch fourth - quarter play from Gordon, and they finished the regular season 47 -- 35, with the third - best record in the Eastern Conference, to advance to the NBA playoffs for the first time since Jordan 's departure. In the first round, the 4th - seeded Bulls played the Washington Wizards. Despite an injury to Deng and a heart issue with Curry, the Bulls opened the series with two wins at home, but lost the next four games and the series. After the season, Ben Gordon became the first rookie to win the NBA Sixth Man Award and the first Bull to win the award since Toni Kukoč in 1996.
During the 2005 off - season, the Bulls re-signed free agent Tyson Chandler. However, Curry showed possible symptoms of a heart condition stemming from a heart murmur detected during checkups, and Paxson would not clear him to play without extensive DNA testing. Ultimately, Curry refused to participate in the tests, and he was traded along with Antonio Davis to the New York Knicks for Michael Sweetney, Tim Thomas, and what became the second pick of the 2006 NBA draft, as well as the right to swap picks with New York in the 2007 NBA draft.
Without a significant post presence, the Bulls struggled for most of the 2005 -- 06 season. However, a late - season 12 -- 2 surge allowed them to finish 41 -- 41 and qualify for the 2006 playoffs as the seventh seed. There, the Bulls faced the Miami Heat. After two close losses in Miami, the Bulls broke through with a blowout win in Game 3, and another win in Game 4. However, the Heat took the next two games to win the series and went on to win that year 's championship. The Bulls ' several young players nevertheless earned additional postseason experience, and Nocioni turned in a remarkable series of performances that far exceeded his season averages.
In the 2006 NBA Draft, the Bulls selected forward - center LaMarcus Aldridge and immediately traded him to the Portland Trail Blazers for forward Tyrus Thomas and forward Viktor Khryapa. In a second draft - day trade, the Bulls selected Rodney Carney and traded him to the Philadelphia 76ers for guard Thabo Sefolosha. Later that summer, four - time Defensive Player of the Year Ben Wallace signed with the Bulls for a reported four - year, $60 million contract. Following the signing of Wallace, the Bulls traded Tyson Chandler, the last remaining player of the Krause era, to the (then) New Orleans / Oklahoma City Hornets for veteran power forward P.J. Brown and J.R. Smith, along with salary cap space that was used to sign former Chicago co-captain Adrian Griffin.
In 2006 -- 07, the Bulls overcame a 3 -- 9 season start to finish 49 -- 33, the third - best record in the Eastern Conference. In the first round, the Bulls again faced Miami, the defending NBA champions. The Bulls narrowly won Game 1 at home, then followed it with a blowout victory in Game 2. In Miami, the Bulls rallied from a 12 - point second - half deficit to win Game 3 and then posted another comeback win in Game 4. The Bulls ' four - game sweep of the defending champions stunned many NBA observers. It was Chicago 's first playoff series victory since 1998, Jordan 's last season with the team.
The Bulls then advanced to face the Detroit Pistons, marking the first time the Central Division rivals had met in the playoffs since 1991. The Pistons won the first three games including a big comeback in Game 3. No NBA team had ever come back from a 0 -- 3 deficit to win the series, but the Bulls avoided a sweep by winning Game 4 by 10 points. The Bulls then easily won Game 5 in Detroit, and had a chance to make NBA history. But they lost at home in game 6 by 10, and the Pistons won the series 4 -- 2 on May 17.
During the off season, the Bulls signed forward Joe Smith and guard Adrian Griffin, and drafted center Joakim Noah. However, distractions began when Luol Deng and Ben Gordon turned down contract extensions, never citing reasons. Then rumors surfaced that the Bulls were pursuing stars like Kevin Garnett, Pau Gasol, and most notably, Kobe Bryant. None of these deals happened, and general manager John Paxson denied a deal was ever imminent.
The Bulls started the 2007 -- 08 NBA season by losing 10 of their first 12 games and on December 24, 2007, after a 9 -- 16 start, the Bulls fired head coach Scott Skiles. Jim Boylan was named the interim head coach on December 27, 2007.
On February 21, 2008, Ben Wallace, Joe Smith, Adrian Griffin and the Bulls ' 2009 second - round draft pick were exchanged for Drew Gooden, Cedric Simmons, Larry Hughes and Shannon Brown in a three - team trade involving the Cleveland Cavaliers and the Seattle SuperSonics. Boylan was not retained at the conclusion of the 2007 -- 08 season on April 17, having compiled a 24 -- 32 record with the Bulls. The Bulls ended the 2007 -- 08 campaign with a 33 -- 49 record, a complete reversal of the previous season 's record.
After Jim Boylan 's interim tenure expired, the Bulls began the process of selecting a new head coach. They were in talks with former Phoenix head coach Mike D'Antoni, but on May 10, 2008, he signed with the New York Knicks. Other possible options included former Dallas head coach Avery Johnson and former Bulls head coach Doug Collins. Collins withdrew from consideration on June 4, 2008, saying that he did not want to ruin his friendship with Jerry Reinsdorf.
On June 10, 2008, Bulls general manager John Paxson hired Vinny Del Negro, who had no prior coaching experience, to coach the young Bulls. On July 3, 2008, the Chicago Tribune reported that Del Harris had agreed to become an assistant coach for the Chicago Bulls, along with former Charlotte Bobcats head coach Bernie Bickerstaff and longtime NBA assistant Bob Ociepka. Harris, Bickerstaff and Ociepka established a veteran presence on the coaching staff and helped rookie head coach Del Negro.
Despite a slim 1.7 % chance of winning the rights to the first pick, the Bulls won the 2008 NBA Draft Lottery and selected first overall, making them the team with the lowest odds to win the lottery since it was modified for the 1994 NBA draft, and the second lowest ever. On June 26, 2008, the Bulls drafted Chicago native Derrick Rose from the University of Memphis with the number 1 pick. At pick number 39 they selected Sonny Weems, whom they later traded to the Denver Nuggets for Denver 's 2009 regular second - round draft pick. The Bulls then acquired Ömer Aşık (selected with the 36th pick) from the Portland Trail Blazers for Denver 's 2009 regular second - round draft pick, New York 's 2009 regular second - round draft pick, and the Bulls ' 2010 regular second - round draft pick. The Bulls re-signed Luol Deng to a six - year, $71 million contract on July 30, 2008; he was later plagued by an injury that kept him out of action for most of the 2008 -- 09 season. Ben Gordon signed a one - year contract on October 2, 2008.
On February 18, 2009, the Bulls made the first of several trades, sending Andrés Nocioni, Drew Gooden, Cedric Simmons, and Michael Ruffin to the Sacramento Kings for Brad Miller and John Salmons. Then on February 19, 2009, the NBA trade deadline, the Bulls traded Larry Hughes to the New York Knicks for Tim Thomas, Jerome James, and Anthony Roberson. Later that day the Bulls made their third trade in a span of less than 24 hours, sending swingman Thabo Sefolosha to the Oklahoma City Thunder for a 2009 first - round pick. The trades fueled a late - season push, and the Bulls clinched a playoff berth on April 10, 2009, their fourth in the last five years. They finished the season with a 41 -- 41 record, good enough to secure the No. 7 seed in the 2009 NBA Playoffs, where they played a tough first - round series against the Boston Celtics. In Game 1, Derrick Rose scored 36 points, along with 11 assists, tying Kareem Abdul - Jabbar 's record for most points scored by a rookie in a playoff debut. The series set a record for most overtimes played in an NBA playoff series, with the Celtics finally overcoming the Bulls in seven games that featured seven overtime periods.
The Bulls had two first round picks in the 2009 NBA draft and used them to select Wake Forest standout forward James Johnson and athletic USC forward Taj Gibson. In the 2009 NBA off - season the Bulls lost their leading scorer, Ben Gordon, when he signed with their divisional rival, the Detroit Pistons.
On February 18, 2010, John Salmons was traded to the Milwaukee Bucks for Joe Alexander and Hakim Warrick. Meanwhile, Tyrus Thomas was traded to the Charlotte Bobcats for Acie Law, Flip Murray and a future protected first round pick. On April 14, 2010, the Bulls clinched the playoffs with the number 8 seed. Unlike the previous year, however, the Bulls ' playoff run was shorter and less dramatic as they were eliminated by the Cleveland Cavaliers in five games. On May 4, 2010, the Bulls officially fired head coach Vinny Del Negro.
In early June 2010, Boston Celtics assistant Tom Thibodeau accepted a three - year contract to fill the Bulls ' head coaching vacancy; he was officially introduced on June 23. On July 7, it was revealed that Carlos Boozer of the Utah Jazz had verbally agreed to an $80 million, five - year contract. Afterwards, the Bulls traded veteran point guard Kirk Hinrich to the Washington Wizards to create more cap space. The Bulls also signed former 76er and Jazz sharpshooter Kyle Korver to a three - year, $15 million contract, and on the same day signed Turkish center Ömer Aşık. After the Orlando Magic matched their offer sheet for J.J. Redick, they signed their third free agent from the Jazz that off - season, shooting guard Ronnie Brewer, traded for former Warriors point guard C.J. Watson, and signed former Bucks power forward Kurt Thomas as well as former Spurs player Keith Bogans and former Celtic Brian Scalabrine.
Rose earned the 2011 NBA MVP Award, thereby becoming the youngest player in NBA history to win it. He became the first Bulls player since Michael Jordan to win the award. As a team, Chicago finished the regular season with a league - best 62 -- 20 record and clinched the first seed in the Eastern Conference for the first time since 1998. The Bulls defeated the Indiana Pacers and the Atlanta Hawks in five and six games, respectively, thereby reaching the Eastern Conference finals for the first time since 1998, and faced the Miami Heat. After winning the first game of the series, they lost the next four games, ending their season.
During the off - season, the Bulls drafted Jimmy Butler 30th overall in the 2011 NBA draft. After the NBA lockout ended, the Bulls lost Kurt Thomas to free agency, and released Keith Bogans. The Bulls signed veteran shooting guard Richard "Rip '' Hamilton to a three - year deal, after he was waived by the Detroit Pistons. The Bulls also gave MVP Derrick Rose a 5 - year contract extension worth $94.8 million.
Derrick Rose was voted as an NBA All - Star starter for the second consecutive year, and was the third leading voted player overall behind Dwight Howard and Kobe Bryant. Luol Deng was also selected as a reserve for the Eastern Conference. This was the first time that the Bulls had two all stars since 1997, when Michael Jordan and Scottie Pippen were the duo. Derrick Rose was injured for most of the 2011 -- 12 NBA season; however, the team was still able to finish with a 50 -- 16 record and clinched the first seed in the Eastern Conference for the second straight year and the best overall record in the NBA (tied with the San Antonio Spurs). Rose suffered a new injury when he tore his ACL during the 4th quarter of the first playoff game on April 28, 2012, against the Philadelphia 76ers and missed the rest of the series. Head coach Tom Thibodeau was criticized for keeping Rose in the game even though the Bulls were essentially minutes away from their victory over the 76ers. The Bulls lost the next three games, and also lost Noah to a foot injury after he severely rolled his ankle stepping on Andre Iguodala 's foot in Game 3; he briefly returned for part of the fourth quarter of that game, but missed the following games in the series. After winning Game 5 at home, Bulls were eliminated by the 76ers in Game 6 in Philadelphia, becoming the fifth team in NBA history to be eliminated as a first seed by an eighth seed. In Game 6, Andre Iguodala sank two free throws with 2.2 seconds left to put the 76ers up 79 -- 78 after getting fouled by Ömer Aşık, who had missed two free throws five seconds earlier. At the end of the season, Boozer and Aşık were the only members on the Bulls ' roster to have played in every game, with Korver and Brewer missing one game apiece. In the offseason, the Bulls gave up Lucas to the Toronto Raptors, Brewer to the New York Knicks, Korver to the Atlanta Hawks, Watson to the Brooklyn Nets and Aşık to the Houston Rockets, but brought back Kirk Hinrich. In addition, they added Marco Belinelli, Vladimir Radmanovic, Nazr Mohammed and Nate Robinson to the roster via free agency.
Rose missed the entire 2012 -- 13 season, but despite his absence, the Bulls finished 45 -- 37, second in the Central Division (behind the Indiana Pacers) and 5th in their conference. They defeated the Brooklyn Nets 4 -- 3 (after leading 3 -- 1) in the first round of the playoffs and lost to the Miami Heat 4 -- 1 in the next round.
During the season, the Bulls snapped both Miami 's 27 - game winning streak and the New York Knicks ' 13 - game winning streak, becoming the second team in NBA history to snap two winning streaks of 13 games or more in a season.
Just 10 games into the 2013 -- 14 season, Derrick Rose tore his medial meniscus on a non-contact play and was ruled out for the remainder of the season. On January 7, 2014, veteran forward Luol Deng was traded to the Cleveland Cavaliers for center Andrew Bynum, who was immediately waived, and a set of draft picks. The Bulls finished second in the Central Division with 48 wins and earned home - court advantage in the first round, but, lacking a strong offensive option, they failed to win a single home game en route to losing to the Washington Wizards in five games.
In the 2014 NBA draft, the Bulls traded their # 16 and # 19 picks for Doug McDermott, the former Creighton star and fifth - leading scorer in NCAA history, who was selected with the 11th pick, and in the second round took Cameron Bairstow with the 49th pick. That offseason, they signed Pau Gasol, re-signed Kirk Hinrich and brought over European star Nikola Mirotić, whose rights had been acquired via a draft - day trade in 2011 but who could not come over sooner due to salary cap constraints.
The second return of Derrick Rose gave the Bulls and their fans optimism for the 2014 -- 15 season. With two - time NBA champion Pau Gasol and a deep bench consisting of Taj Gibson, Nikola Mirotić, Tony Snell, Aaron Brooks, Doug McDermott and Kirk Hinrich, among others, the Bulls were one of the two favorites to come out of the Eastern Conference along with the Cleveland Cavaliers. The Bulls started the season in style with a blowout win over the New York Knicks and won 7 of their first 9 games, with losses coming to the Cleveland Cavaliers and Boston Celtics. The emergence of Jimmy Butler as a primary scorer was a major surprise, and he surged to the forefront of the Most Improved Player award race. Butler 's statistical jump was noted by many as one of the greatest in NBA history, going from 13 points per game in 2013 -- 14 to 20 points per game in 2014 -- 15. Pau Gasol was a huge asset for the Bulls and averaged a double - double throughout the season, and both Butler and Gasol made the Eastern Conference All - Star team. The Bulls ' second half of the season was marred by inconsistency, and frustration set in, with Derrick Rose blasting the team for not being on the same page. Tension between management and Tom Thibodeau continued to be a dark cloud hanging over the organization. The Bulls finished with a 50 -- 32 record and the 3rd seed in the Eastern Conference. They faced the Milwaukee Bucks in the first round and took advantage of the young and inexperienced Bucks by jumping out to a quick 3 -- 0 series lead. However, inconsistency again plagued the Bulls as the Bucks won the next two games, sending a scare to Chicago. The Bulls bounced back with fury in Game 6, beating the Bucks by a playoff - record 54 points to win the series 4 -- 2. The next round saw the Bulls facing their arch - rival Cleveland Cavaliers and their biggest nemesis, LeBron James, who had beaten the Bulls in all three of their previous playoff meetings. The Bulls shocked the Cavs in Game 1, dominating them and never trailing. The Cavs answered back in Game 2 in the same fashion, never trailing the entire game. In a pivotal Game 3 in Chicago, the Bulls and Cavs battled closely all the way through, but the Bulls prevailed on a last - second buzzer - beating 3 - pointer by Derrick Rose. In Game 4, the Cavs answered once again, with LeBron James hitting the buzzer - beating shot to win the game. The Bulls ' lack of consistency and poor offensive showing doomed them once again as the Cavs won the next two games handily and closed out the series 4 -- 2. After the series, speculation erupted about Tom Thibodeau 's job security due to an escalating feud between Thibodeau and Bulls front office executives Gar Forman and John Paxson.
On May 28, 2015, the Bulls fired Tom Thibodeau to seek a "change in approach '', and named Fred Hoiberg as their head coach on June 2, 2015. The Bulls had only one pick in the 2015 NBA draft and selected center Bobby Portis from the University of Arkansas. Bulls forward Mike Dunleavy Jr. was ruled out for at least the first four months of the season after undergoing back surgery, and with him out indefinitely, the Bulls promoted Doug McDermott to the starting lineup in his place at small forward. Before the season started, coach Fred Hoiberg made a controversial move by naming Nikola Mirotić his starting power forward alongside center Pau Gasol, meaning that Joakim Noah, a long - time Bulls veteran and fan favorite, would come off the bench. Hoiberg told the media that the move was suggested by Noah himself, but Noah denied having made any such suggestion, which sparked distrust between the two before the season even began.
The Bulls started the 2015 -- 16 season well, with an impressive season - opening 97 -- 95 victory over archrival and defending Eastern Conference champion Cleveland Cavaliers, and jumped to an 8 -- 3 record in the first month. The Bulls went 10 -- 9 through late November and December, then won six straight games. Soon afterwards, however, they lost 12 of their next 17 games, and Butler missed four weeks after injuring his knee. The Bulls were eliminated from playoff contention after a loss to the Miami Heat on April 7, 2016, despite finishing the season with a winning record of 42 -- 40. It was the first time in nearly eight years that the Bulls had missed the playoffs.
On June 22, 2016, Derrick Rose and Justin Holiday, along with a 2017 second - round draft pick, were traded to the New York Knicks for center Robin Lopez, and point guards Jerian Grant and José Calderón, who was soon traded to the Los Angeles Lakers. On July 7, the Bulls announced the signing of Rose 's replacement, guard Rajon Rondo. On July 15, the Bulls signed Chicago native Dwyane Wade. On October 17, 2016, the Bulls acquired 2014 Rookie of the Year Michael Carter - Williams in exchange for Tony Snell. The Bulls clinched the eighth seed in 2017 NBA Playoffs after winning seven of their final ten games and finishing the season with a 41 -- 41 record. The team struck an early 2 -- 0 lead against the top - seeded Boston Celtics in the first round of the playoffs, but ultimately lost the series after dropping the next four games.
On June 22, 2017, Jimmy Butler, along with Chicago 's 2017 first - round pick, was traded to the Minnesota Timberwolves for Zach LaVine, Kris Dunn, and Minnesota 's 2017 first - round pick, which the Bulls used to select Lauri Markkanen. Additionally, on June 27, the Bulls did not give a qualifying offer to Michael Carter - Williams, allowing him to enter unrestricted free agency. On June 30, Rajon Rondo and Isaiah Canaan were waived by the Bulls. On September 24, 2017, Dwyane Wade and the Bulls reportedly agreed to a buyout of the remaining year on his contract. Adrian Wojnarowski reported that Wade gave back $8 million of his $23.2 million contract as part of the agreement.
The Bulls ' main division rivals have been the Detroit Pistons ever since the Jordan - led Bulls met the "Bad Boy '' Pistons in the 1988 Eastern Conference semifinals. The two teams met in the playoffs four consecutive years, with the Pistons winning each time until 1991. The Eastern Conference Finals in 1991 ended with a four - game sweep of the Pistons, who walked off the floor with time still on the game clock. The rivalry was renewed in the 2007 Eastern Conference Semifinals, in which former Detroit cornerstone Ben Wallace met his former team (the Pistons won in 6 games). The geographic proximity and membership in the Central Division further intensify the rivalry, which has been characterized by intense, physical play ever since the teams met in the late 1980s. Chicago fans have been known to dislike Detroit 's professional teams, as Detroit shared a division with Chicago in all four major North American sports until the Red Wings moved to the Atlantic Division of the Eastern Conference for the 2013 -- 14 season.
The Bulls and the Miami Heat rivalry began once the Heat became contenders during the 1990s, a decade dominated by the Bulls. The Heat were eliminated three times by Chicago, which went on to win the title each time. The rivalry returned with the Bulls ' resurgence in the post-Michael Jordan era and the emergence of Dwyane Wade and Derrick Rose. The revived rivalry has been very physical, involving rough plays and hard fouls between players, most notably the actions of former Heat player James Posey. The Bulls and Heat met in the 2011 Eastern Conference Finals, with the Heat winning in 5 games. On March 27, 2013, Chicago snapped Miami 's 27 - game winning streak. The Bulls and Heat met later that year in the 2013 Eastern Conference Semifinals, with Miami winning the series 4 -- 1. Since LeBron James 's departure from Miami, the rivalry has cooled compared with earlier years, though the Bulls dealt a blow to the Heat 's playoff hopes during the 2016 -- 17 regular season.
Another franchise with which the Bulls have competed fiercely is the New York Knicks. The two teams met in the playoffs in four consecutive years (1991 -- 94) and again in 1996, with the series twice (1992 and 1994) going the full seven games. Their first playoff confrontation, however, came in 1989, when both teams were called "teams on the rise '' under Michael Jordan and Patrick Ewing, respectively, a rivalry that had started in their freshman year, when Jordan hit the deciding jumper in the 1982 NCAA Men 's Division I Basketball Championship Game. Chicago won that first confrontation in six games in the Eastern Conference Semifinals. The Bulls triumphed in the first three years (1991 -- 93), narrowly lost in 1994, then exacted revenge in 1996. As with Detroit, the historic rivalry between the cities has led to animosity between the teams and occasionally their fans.
During the Bulls ' run of dominance, the player introductions became world - famous. Longtime announcer Tommy Edwards was the first to use "Sirius '', "On The Run '' and other songs in game presentation in the NBA. When Edwards moved to Boston for employment with CBS Radio, he was replaced by Ray Clay in 1990, and Clay continued many of the traditional aspects of the Bulls introductions, including the music, The Alan Parsons Project 's "Sirius '', for all six championship runs. The lights are first dimmed during the visiting team introduction, accompanied by "The Imperial March '' from Star Wars composed by John Williams or "On the Run '' by Pink Floyd, or "Tick of the Clock '' by Chromatics. Virtually all lights in the stadium are then shut off for the Bulls introduction, and a spotlight illuminates each player as he is introduced and runs onto the court; the spotlight is also focused on the Bulls logo prior to the introductions. Since the move to the United Center, lasers and fireworks have been added, and with improvements to the arena 's White Way video screen, computer graphics on the stadium monitors have been added. These graphics feature the 3D - animated ' Running of the Bulls ' en route to the United Center, along the way smashing a bus featuring the opposing team 's logo. Coincidentally, Alan Parsons wrote "Sirius '' for his own band and was the sound engineer for "On the Run '' from Pink Floyd 's album The Dark Side of the Moon.
Traditionally, the players have been introduced in the following order: small forward, power forward, center, point guard, shooting guard. Thus during the championship era, Scottie Pippen was usually the first (or second) Bulls player introduced, and Michael Jordan the last. (Pippen and Jordan are the only players to play on all six Bulls championship teams.) More recently with Derrick Rose 's arrival, the guards have been reversed in order, making the Chicago - bred point guard the last player introduced. Although internal disputes eventually led to the dismissal of Clay, the Bulls in 2006 announced the return of Tommy Edwards as the announcer.
As part of Edwards ' return, the introductions changed as a new introduction was developed by Andy and Larry Wachowski, Ethan Stoller and Jamie Poindexter, all from Chicago. The introduction also included a newly composed remix of the traditional Sirius theme.
The Bulls have an unofficial tradition of wearing black shoes (regardless of being home or away) during the playoffs, which dates back to 1989. Then - Bulls backup center Brad Sellers suggested wearing black shoes as a way to show unity within the team. For the 1996 playoffs, they became the first team to wear black socks with the black shoes, similar to the University of Michigan and the Fab Five, which started the trend in college earlier in the decade. Since then, many teams have used this look in both the regular season and playoffs. When the Bulls returned to the playoffs during the 2004 -- 05 season after a six - year hiatus, they continued the tradition and wore black shoes.
Although the Bulls have generally worn black footwear in the playoffs since 1989, there have been some notable exceptions. In the 1995 playoffs against the Magic, when Michael Jordan debuted his Air Jordan XI shoe, he wore the white colorway during the Bulls ' playoff games in Orlando and was fined by the Bulls for not complying with their colorway policy. During the 2009 playoffs, the Bulls broke the tradition when all of their players wore white shoes and socks in Game 3 of the first round against the Boston Celtics. More recently, since the NBA relaxed its sneaker color rules, some Bulls players have worn either red or white sneakers in defiance of the tradition.
The Bulls and their arena mates, the Chicago Blackhawks, shared an odd tradition dating to the opening of Chicago Stadium. Every fall, Feld Entertainment 's Ringling Bros. and Barnum & Bailey Circus came to Chicago on its nationwide tour. Since it used large indoor venues rather than tents, it took over the United Center for its entire run, forcing the Bulls and the Blackhawks to take an extended road trip that lasted around two weeks. The start of the "circus trip '', as many sportscasters dubbed it, was noted in local newspaper, television and radio sports reports, as well as on national programs like SportsCenter. The annual circus trip will no longer take place starting with the 2017 -- 18 NBA season, as Blackhawks chairman Rocky Wirtz, who co-owns the United Center with Bulls chairman Jerry Reinsdorf, chose to let the contract lapse after the circus 's 2016 run and condensed the formerly two - week local run of Feld 's Disney on Ice to a week - long period in February (which had also caused minor scheduling complications). Feld and Ringling Bros. effectively made the circus issue moot with their January 2017 announcement that the circus would end all operations nationwide in May 2017 due to costs and other factors.
Dick Klein wanted a name that evoked Chicago 's traditional meat packing industry and the Chicago Stadium 's proximity to the Union Stock Yards. Klein considered names like Matadors or Toreadors, but dismissed them, saying, "If you think about it, no team with as many as three syllables in its nickname has ever had much success except for the (Montreal Canadiens). '' After discussing possible names with his family, Klein settled on Bulls when his son Mark said, "Dad, that 's a bunch of bull! ''
The iconic Bulls ' logo is a charging red bull 's face. The logo was designed by noted American graphic designer Dean P. Wessel and was adopted in 1966. At one point, the Bulls also had an alternate logo during the early 1970s, featuring the same Bulls logo, but with a cloud that says "Windy City '' below the bull 's nose.
The Bulls currently wear three different uniforms: a white uniform, a red uniform, and a black alternate uniform. The original uniforms were aesthetically close to what the Bulls wear today, featuring the iconic diamond surrounding the Bulls logo on the shorts and block lettering. What distinguished the original uniforms were the black drop shadows, red or white side stripes with black borders, and white lettering on the red uniforms. For the 1969 -- 70 season, the red uniforms were tweaked to include the city name.
For the 1973 -- 74 season, the Bulls drastically changed their look, removing the side stripes and drop shadows while moving the front numbers to the left chest. While the white uniforms saw the "Bulls '' wordmark go from a vertically arched to radially arched arrangement, the red uniforms saw a more significant makeover, featuring black lettering and a script "Chicago '' wordmark. With a few tweaks in the lettering, these uniforms were used until 1985.
This uniform set was later revived as a throwback uniform during the 2003 -- 04 and 2015 -- 16 seasons.
Starting with the 1985 -- 86 season, the Bulls updated their uniform. Among the more notable changes in the look were centered uniform numbers and a vertically arched "Bulls '' wordmark in both the red and white uniforms. Like the previous set, this uniform saw a few tweaks particularly in the treatment of the player 's name.
When Nike became the NBA 's uniform provider in 2017, the Bulls kept much of the same look save for the truncated shoulder striping and the addition of the Chicago four stars on the waistline. With Nike and the NBA eliminating any designations on home and away uniforms, the Bulls also announced that their red "Icon '' uniforms would become their primary home uniforms while the white "Association '' uniforms would become their primary away uniforms.
In the 1995 -- 96 season, the Bulls added a black uniform to their set. The initial look featured red pinstripes and lacked the classic diamond on the shorts. This set was revived as throwback uniforms in the 2007 -- 08 and 2012 -- 13 seasons.
From the 1997 -- 98 to the 2005 -- 06 seasons, the Bulls wore slightly modified black uniforms without pinstripes. This set, with a few slight changes in the template, also marked the return of the city name in front of the uniform during the 1999 -- 2000 season.
The 2006 -- 07 season saw another change in the Bulls ' black alternate uniform, now resembling the red and white uniform with the addition of a red diamond in the shorts. For the 2014 -- 15 season, the uniforms were tweaked a bit to include sleeves and a modernized diamond treatment in black with red and white borders.
Since the 2017 -- 18 season, the Bulls ' black uniforms remained mostly untouched save for the aforementioned switch to the new Nike template that affected the treatment towards the shoulder piping. Nike also dubbed this uniform as the "Statement '' uniform in reference to its third jerseys.
During the 2005 -- 06 season, the Bulls honored the defunct Chicago Stags by wearing the team 's red and blue throwback uniforms. The set featured red tops and blue shorts.
From 2006 to 2017, the Bulls wore a green version of their red uniforms during the week of St. Patrick 's Day in March. The only red elements visible were those found on the team logo. For 2015 the Bulls wore sleeved versions of the green uniform that featured white lettering with gold and black trim and the "Chicago '' wordmark replacing "Bulls '' in front. In 2016 and 2017, they wore the same uniforms minus the sleeves.
Between 2009 and 2017, the Bulls wore a variation of their red uniforms as part of the NBA 's "Noche Latina '' festivities every March. The only notable change in this uniform was the "Los Bulls '' wordmark in front. For 2014, the Bulls briefly retired the look in favor of a black sleeved uniform featuring "Los Bulls '' in white with red trim.
During the NBA 's "Green Week '' celebrations, the Bulls also wore green uniforms, but with a slightly darker shade from their St. Patrick 's Day counterparts. They used their black alternate uniforms as its template. They donned the uniforms in a game against the Philadelphia 76ers on April 9, 2009.
The Bulls also wore special edition Christmas uniforms as part of the NBA 's Christmas Day games.
From 2015 to 2017, the Bulls wore a grey "Pride '' sleeved uniform, featuring the team name and other lettering in red with white trim. The shorts featured a more modernized version of the diamond, along with four six - pointed stars on either side.
In the 2017 -- 18 season, the Bulls will wear special "City '' uniforms designed by Nike. The uniforms, designed to pay homage to Chicago 's flag, are in white and feature the classic "Chicago '' script and numbers in red with light blue trim along with four six - pointed stars on each side.
Benny the Bull is the main mascot of the Chicago Bulls. He was first introduced in 1969. Benny is a red bull who wears number 1, and he is one of the oldest and best known mascots in all of professional sports. The Bulls also had another mascot named Da Bull. Introduced in 1995, he was described on the team website as the high - flying cousin of Benny, known for his dunking skills. The man who portrayed Da Bull was arrested in 2004 for possessing and selling marijuana from his car, and Da Bull was retired soon after the incident. While Benny has a family - friendly design, Da Bull was designed as a more realistic bull. Unlike Benny, Da Bull was brown. He also had a meaner facial expression and wore number 95.
In 1992, the team began training at the Berto Center, located at 550 Lake Cook Rd., Deerfield, Illinois. However, on June 13, 2012, the team announced that it would move its practice facility to a downtown location closer to the United Center to reduce game day commutes. On September 12, 2014, the Bulls officially opened their new training facility, the Advocate Center.
The Bulls hold the draft rights to the following unsigned draft picks who have been playing outside the NBA. A drafted player, either an international draftee or a college draftee who is n't signed by the team that drafted him, is allowed to sign with any non-NBA team. In this case, the team retains the player 's draft rights in the NBA until one year after the player 's contract with the non-NBA team ends. This list includes draft rights that were acquired from trades with other teams.
The flagship station for the Bulls is WLS AM 890. The Bulls signed a multiyear deal with Cumulus Media to broadcast regular and postseason games on WLS through 2021. Chuck Swirsky and Bill Wennington are the announcers. Spanish - language station WRTO also airs all Bulls games since 2009, with Omar Ramos as play - by - play announcer and Matt Moreno as color analyst.
The Bulls ' television broadcasts are split among NBC Sports Chicago, which broadcasts most of the games, WGN - TV, and WCIU - TV. The announcers are Neil Funk and Stacey King. WGN - TV did not air all of its Chicago Bulls games nationwide: only a select few, usually Saturday games, were nationally televised on WGN America from 1999 to 2014; the rest were available only within the Chicago area.
|
who played casey jones on rizzoli and isles | Chris Vance (actor) - wikipedia
George Christopher Vance (born 30 December 1971) is an English actor. Vance is known for his roles as Jack Gallagher in the Fox series Mental and James Whistler in Prison Break. He is the second actor after Jason Statham to play Frank Martin (in TNT 's Transporter: The Series) and has guest - starred on Burn Notice (as Mason Gilroy) and Dexter. He had a recurring role as the love interest of Angie Harmon 's character on Rizzoli & Isles. He also appeared as Non on the CBS show Supergirl.
Vance attended St Bede 's Secondary School in Lawrence Weston, Bristol. He then attended Newcastle University, graduating with a degree in civil engineering.
Vance appeared in the British television series, The Bill, and in the Australian television series Stingers and Blue Heelers, before achieving success in the role of Sean Everleigh in All Saints. He appeared in the third and fourth seasons of Prison Break, as James Whistler.
In June 2008, Vance moved to Colombia in order to film the Fox series Mental, but the show was cancelled after only 13 episodes. He guest - starred in an antagonistic role as Mason Gilroy on the USA series Burn Notice for five episodes toward the end of season 3. Vance also appeared in the fifth season of the Showtime series Dexter as Cole, chief of security for a popular motivational speaker who becomes implicated in a series of heinous female torture - murders in Southern Florida. From 2012, he has starred in Transporter: The Series, a French - Canadian production.
|
does snow white have a star on the hollywood walk of fame | List of fictional characters with stars on the Hollywood Walk of Fame - wikipedia
The Hollywood Walk of Fame awards people of notability in the film, television, music, radio and theatre industry. However, stars have also been awarded to fictional characters. The following is a list of fictional characters with stars on the Hollywood Walk of Fame, including the category and location of each honoree.
The following fictional characters have received stars on the Hollywood Walk of Fame; 13 in motion pictures and four in television.
Three live - action canines have stars on the Walk.
Several corporate entities are on the Walk. All are nominated in the "Special Recognition Category '', and as with Tom Bradley and Johnny Grant, many have special logos other than the five normal categories.
|
when does new season of jersey shore start | Jersey Shore: Family Vacation - wikipedia
Jersey Shore: Family Vacation is an American reality television series that premiered on MTV globally on April 5, 2018. The series follows seven housemates from the original Jersey Shore spending a month living together in Miami, Florida. On February 28, 2018, a second season had been ordered ahead of the series premiere, with the season set to be filmed in Las Vegas.
On November 27, 2017, MTV announced that the cast (with the exception of Sammi) would be reuniting in Miami, Florida for a new reunion season titled Jersey Shore: Family Vacation. The series premiered globally on April 5, 2018. According to MTV, it is considered a new series and not the seventh season of the original show.
|
who plays roy good in the netflix series godless | Godless (TV series) - wikipedia
Godless is an American television western drama miniseries created by Scott Frank for Netflix. The seven - episode limited series began production in Santa Fe, New Mexico in September 2016, and was released on Netflix globally on November 22, 2017. The series received positive reviews, and was named one of the year 's 10 best by The Washington Post and Vanity Fair.
The series received positive reviews. Review aggregator website Rotten Tomatoes gave it an 88 % "fresh '' rating and an average rating of 7.79 out of 10, based on 60 reviews, with the critics ' consensus reading, "Vistas and violence root Godless firmly in traditional Western territory, but its female - driven ensemble sets it apart in a male - dominated genre. '' On Metacritic, it has a score of 76 out of 100, based on 20 reviews, indicating "generally favorable reviews ''.
Alan Sepinwall from Uproxx reviewed it positively, saying, "Godless does n't quite find that happy middle, but the storytelling excesses created by this format make it more fun than the traditional movie version probably would have been. ''
Vanity Fair and The Washington Post included Godless on their "best shows of 2017 '' lists.
|
in kohlberg's theory of moral development which is the highest stage | Lawrence Kohlberg 's stages of moral Development - wikipedia
Lawrence Kohlberg 's stages of moral development constitute an adaptation of a psychological theory originally conceived by the Swiss psychologist Jean Piaget. Kohlberg began work on this topic while a psychology graduate student at the University of Chicago in 1958, and expanded upon the theory throughout his life.
The theory holds that moral reasoning, the basis for ethical behavior, has six identifiable developmental stages, each more adequate at responding to moral dilemmas than its predecessor. Kohlberg followed the development of moral judgment far beyond the ages studied earlier by Piaget, who also claimed that logic and morality develop through constructive stages. Expanding on Piaget 's work, Kohlberg determined that the process of moral development was principally concerned with justice, and that it continued throughout the individual 's lifetime, a notion that spawned dialogue on the philosophical implications of such research.
The six stages of moral development are grouped into three levels: pre-conventional morality, conventional morality, and post-conventional morality.
For his studies, Kohlberg relied on stories such as the Heinz dilemma, and was interested in how individuals would justify their actions if placed in similar moral dilemmas. He then analyzed the form of moral reasoning displayed, rather than its conclusion, and classified it as belonging to one of six distinct stages.
There have been critiques of the theory from several perspectives. Arguments include that it emphasizes justice to the exclusion of other moral values, such as caring; that there is such an overlap between stages that they should more properly be regarded as separate domains; or that evaluations of the reasons for moral choices are mostly post hoc rationalizations (by both decision makers and psychologists studying them) of essentially intuitive decisions.
Nevertheless, an entirely new field within psychology was created as a direct result of Kohlberg 's theory, and according to Haggbloom et al. 's study of the most eminent psychologists of the 20th century, Kohlberg was the 16th most frequently cited psychologist in introductory psychology textbooks throughout the century, as well as the 30th most eminent overall.
Kohlberg 's scale describes how people justify behaviors; his stages are not a method of ranking how moral someone 's behavior is. There should, however, be a correlation between how someone scores on the scale and how they behave, and the general hypothesis is that moral behavior is more responsible, consistent and predictable in people at higher levels.
Kohlberg 's six stages can be more generally grouped into three levels of two stages each: pre-conventional, conventional and post-conventional. Following Piaget 's constructivist requirements for a stage model, as described in his theory of cognitive development, it is extremely rare to regress in stages -- to lose the use of higher stage abilities. Stages can not be skipped; each provides a new and necessary perspective, more comprehensive and differentiated than its predecessors but integrated with them.
The understanding gained in each stage is retained in later stages, but may be regarded by those in later stages as simplistic, lacking in sufficient attention to detail.
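The overall architecture described above -- six ordered stages grouped two per level, passed through in sequence without skipping -- lends itself to a small data structure. The following Python sketch is purely illustrative: the enum names are shorthand of my own rather than Kohlberg 's labels, and the transition check simply encodes the no-skipping constraint stated above.

```python
from enum import IntEnum

class Stage(IntEnum):
    """Kohlberg's six stages, ordered from least to most developed."""
    OBEDIENCE_AND_PUNISHMENT = 1   # pre-conventional
    SELF_INTEREST = 2              # pre-conventional
    SOCIAL_CONSENSUS = 3           # conventional
    AUTHORITY_AND_ORDER = 4        # conventional
    SOCIAL_CONTRACT = 5            # post-conventional
    UNIVERSAL_PRINCIPLES = 6       # post-conventional

def level(stage: Stage) -> str:
    """Each level groups exactly two consecutive stages."""
    return ("pre-conventional", "conventional", "post-conventional")[(stage - 1) // 2]

def is_valid_progression(current: Stage, proposed: Stage) -> bool:
    """Stages cannot be skipped; staying put or moving up one stage is allowed,
    and regression is treated here as invalid (Kohlberg held it to be extremely
    rare rather than strictly impossible)."""
    return proposed in (current, current + 1)

# Stage 3 sits in the conventional level, and a jump straight from
# stage 3 to stage 5 is not a valid progression.
assert level(Stage.SOCIAL_CONSENSUS) == "conventional"
assert not is_valid_progression(Stage.SOCIAL_CONSENSUS, Stage.SOCIAL_CONTRACT)
```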
The pre-conventional level of moral reasoning is especially common in children, although adults can also exhibit this level of reasoning. Reasoners at this level judge the morality of an action by its direct consequences. The pre-conventional level consists of the first and second stages of moral development and is solely concerned with the self in an egocentric manner. A child with pre-conventional morality has not yet adopted or internalized society 's conventions regarding what is right or wrong but instead focuses largely on external consequences that certain actions may bring.
In Stage one (obedience and punishment driven), individuals focus on the direct consequences of their actions on themselves. For example, an action is perceived as morally wrong because the perpetrator is punished. "The last time I did that I got spanked, so I will not do it again. '' The worse the punishment for the act is, the more "bad '' the act is perceived to be. This can give rise to an inference that even innocent victims are guilty in proportion to their suffering. It is "egocentric '', lacking recognition that others ' points of view are different from one 's own. There is "deference to superior power or prestige ''.
An example of obedience and punishment driven morality would be a child refusing to do something because it could result in punishment. For example, a child 's classmate tries to dare the child to skip school. The child would apply obedience and punishment driven morality by refusing to skip school because he would get punished.
Stage two (self - interest driven) expresses the "what 's in it for me '' position, in which right behavior is defined by whatever the individual believes to be in their best interest but understood in a narrow way which does not consider one 's reputation or relationships to groups of people. Stage two reasoning shows a limited interest in the needs of others, but only to a point where it might further the individual 's own interests. As a result, concern for others is not based on loyalty or intrinsic respect, but rather a "You scratch my back, and I 'll scratch yours '' mentality. The lack of a societal perspective in the pre-conventional level is quite different from the social contract (stage five), as all actions at this stage have the purpose of serving the individual 's own needs or interests. For the stage two theorist, the world 's perspective is often seen as morally relative.
An example of self - interest driven reasoning is when a child is asked by his parents to do a chore. The child asks, "what 's in it for me? '' The parents offer an incentive by giving the child an allowance for doing the chores. The child is motivated by self - interest to do the chores.
The conventional level of moral reasoning is typical of adolescents and adults. To reason in a conventional way is to judge the morality of actions by comparing them to society 's views and expectations. The conventional level consists of the third and fourth stages of moral development. Conventional morality is characterized by an acceptance of society 's conventions concerning right and wrong. At this level an individual obeys rules and follows society 's norms even when there are no consequences for obedience or disobedience. Adherence to rules and conventions is somewhat rigid, however, and a rule 's appropriateness or fairness is seldom questioned.
In Stage three (good intentions as determined by social consensus), the self enters society by conforming to social standards. Individuals are receptive to approval or disapproval from others as it reflects society 's views. They try to be a "good boy '' or "good girl '' to live up to these expectations, having learned that being regarded as good benefits the self. Stage three reasoning may judge the morality of an action by evaluating its consequences in terms of a person 's relationships, which now begin to include things like respect, gratitude, and the "golden rule ''. "I want to be liked and thought well of; apparently, not being naughty makes people like me. '' Conforming to the rules for one 's social role is not yet fully understood. The intentions of actors play a more significant role in reasoning at this stage; one may feel more forgiving if one thinks that "they mean well ''.
In Stage four (authority and social order obedience driven), it is important to obey laws, dictums, and social conventions because of their importance in maintaining a functioning society. Moral reasoning in stage four is thus beyond the need for individual approval exhibited in stage three. A central ideal or ideals often prescribe what is right and wrong. If one person violates a law, perhaps everyone would -- thus there is an obligation and a duty to uphold laws and rules. When someone does violate a law, it is morally wrong; culpability is thus a significant factor in this stage as it separates the bad domains from the good ones. Most active members of society remain at stage four, where morality is still predominantly dictated by an outside force.
The post-conventional level, also known as the principled level, is marked by a growing realization that individuals are separate entities from society, and that the individual 's own perspective may take precedence over society 's view; individuals may disobey rules inconsistent with their own principles. Post-conventional moralists live by their own ethical principles -- principles that typically include such basic human rights as life, liberty, and justice. People who exhibit post-conventional morality view rules as useful but changeable mechanisms -- ideally rules can maintain the general social order and protect human rights. Rules are not absolute dictates that must be obeyed without question. Because post-conventional individuals elevate their own moral evaluation of a situation over social conventions, their behavior, especially at stage six, can be confused with that of those at the pre-conventional level.
Some theorists have speculated that many people may never reach this level of abstract moral reasoning.
In Stage five (social contract driven), the world is viewed as holding different opinions, rights, and values. Such perspectives should be mutually respected as unique to each person or community. Laws are regarded as social contracts rather than rigid edicts. Those that do not promote the general welfare should be changed when necessary to meet "the greatest good for the greatest number of people ''. This is achieved through majority decision and inevitable compromise. Democratic government is ostensibly based on stage five reasoning.
In Stage six (universal ethical principles driven), moral reasoning is based on abstract reasoning using universal ethical principles. Laws are valid only insofar as they are grounded in justice, and a commitment to justice carries with it an obligation to disobey unjust laws. Legal rights are unnecessary, as social contracts are not essential for deontic moral action. Decisions are not reached hypothetically in a conditional way but rather categorically in an absolute way, as in the philosophy of Immanuel Kant. This involves an individual imagining what they would do in another 's shoes, if they believed what that other person imagines to be true. The resulting consensus is the action taken. In this way action is never a means but always an end in itself; the individual acts because it is right, and not because it avoids punishment, is in their best interest, expected, legal, or previously agreed upon. Although Kohlberg insisted that stage six exists, he found it difficult to identify individuals who consistently operated at that level.
In his empirical studies of individuals throughout their lives, Kohlberg observed that some had apparently undergone moral stage regression. This could be resolved either by allowing for moral regression or by extending the theory. Kohlberg chose the latter, postulating the existence of sub-stages in which the emerging stage has not yet been fully integrated into the personality. In particular Kohlberg noted a stage 4½ or 4 +, a transition from stage four to stage five, that shared characteristics of both. In this stage the individual is disaffected with the arbitrary nature of law and order reasoning; culpability is frequently turned from being defined by society to viewing society itself as culpable. This stage is often mistaken for the moral relativism of stage two, as the individual views those interests of society that conflict with their own as being relatively and morally wrong. Kohlberg noted that this was often observed in students entering college.
Kohlberg suggested that there may be a seventh stage -- Transcendental Morality, or Morality of Cosmic Orientation -- which linked religion with moral reasoning. Kohlberg 's difficulties in obtaining empirical evidence for even a sixth stage, however, led him to emphasize the speculative nature of his seventh stage.
Kohlberg 's stages of moral development are based on the assumption that humans are inherently communicative, capable of reason, and possess a desire to understand others and the world around them. The stages of this model relate to the qualitative moral reasonings adopted by individuals, and so do not translate directly into praise or blame of any individual 's actions or character. Arguing that his theory measures moral reasoning and not particular moral conclusions, Kohlberg insists that the form and structure of moral arguments is independent of the content of those arguments, a position he calls "formalism ''.
Kohlberg 's theory centers on the notion that justice is the essential characteristic of moral reasoning. Justice itself relies heavily upon the notion of sound reasoning based on principles. Despite being a justice - centered theory of morality, Kohlberg considered it to be compatible with plausible formulations of deontology and eudaimonia.
Kohlberg 's theory understands values as a critical component of the right. Whatever the right is, for Kohlberg, it must be universally valid across societies (a position known as "moral universalism ''): there can be no relativism. Moreover, morals are not natural features of the world; they are prescriptive. Nevertheless, moral judgments can be evaluated in logical terms of truth and falsity.
According to Kohlberg, someone progressing to a higher stage of moral reasoning can not skip stages. For example, an individual can not jump from being concerned mostly with peer judgments (stage three) to being a proponent of social contracts (stage five). On encountering a moral dilemma and finding their current level of moral reasoning unsatisfactory, however, an individual will look to the next level. Realizing the limitations of the current stage of thinking is the driving force behind moral development, as each progressive stage is more adequate than the last. The process is therefore considered to be constructive, as it is initiated by the conscious construction of the individual, and is not in any meaningful sense a component of the individual 's innate dispositions, or a result of past inductions.
Progress through Kohlberg 's stages happens as a result of the individual 's increasing competence, both psychologically and in balancing conflicting social - value claims. The process of resolving conflicting claims to reach an equilibrium is called "justice operation ''. Kohlberg identifies two of these justice operations: "equality '', which involves an impartial regard for persons, and "reciprocity '', which means a regard for the role of personal merit. For Kohlberg, the most adequate result of both operations is "reversibility '', in which a moral or dutiful act within a particular situation is evaluated in terms of whether or not the act would be satisfactory even if particular persons were to switch roles within that situation (also known colloquially as "moral musical chairs '').
Knowledge and learning contribute to moral development. Specifically important are the individual 's "view of persons '' and their "social perspective level '', each of which becomes more complex and mature with each advancing stage. The "view of persons '' can be understood as the individual 's grasp of the psychology of other persons; it may be pictured as a spectrum, with stage one having no view of other persons at all, and stage six being entirely socio - centric. Similarly, the social perspective level involves the understanding of the social universe, differing from the view of persons in that it involves an appreciation of social norms.
Kohlberg established the Moral Judgement Interview in his original 1958 dissertation. During the roughly 45 - minute tape recorded semi-structured interview, the interviewer uses moral dilemmas to determine which stage of moral reasoning a person uses. The dilemmas are fictional short stories that describe situations in which a person has to make a moral decision. The participant is asked a systemic series of open - ended questions, like what they think the right course of action is, as well as justifications as to why certain actions are right or wrong. The form and structure of these replies are scored and not the content; over a set of multiple moral dilemmas an overall score is derived.
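Because only the form of reasoning is scored, the aggregation step can be pictured very simply: each justification a participant gives is classified to a stage, and the per-response classifications from several dilemmas are combined into a summary. The sketch below is a loose illustration of that idea under the assumption that the stage assignments have already been made by a trained rater; the modal-stage summary and the scaling by 100 are my own simplifications, not the official Kohlberg scoring procedure.

```python
from collections import Counter
from statistics import mean

def summarize_stage_scores(stage_assignments: list[int]) -> dict:
    """Aggregate per-response stage classifications (1-6) gathered across
    several moral dilemmas into a modal (most frequent) stage and a
    weighted-average score, scaled by 100 for readability (3.4 -> 340)."""
    counts = Counter(stage_assignments)
    modal_stage = counts.most_common(1)[0][0]
    weighted_average = round(mean(stage_assignments) * 100, 1)
    return {"modal_stage": modal_stage, "weighted_average": weighted_average}

# Hypothetical participant whose responses across three dilemmas were
# classified mostly at stage 3, with some stage 4 reasoning.
print(summarize_stage_scores([3, 3, 4, 3, 4, 3]))
# {'modal_stage': 3, 'weighted_average': 333.3}
```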
A dilemma that Kohlberg used in his original research was the druggist 's dilemma, "Heinz Steals the Drug in Europe '', in which a man must decide whether to steal an unaffordable drug to save his dying wife.
One criticism of Kohlberg 's theory is that it emphasizes justice to the exclusion of other values, and so may not adequately address the arguments of those who value other moral aspects of actions. Carol Gilligan has argued that Kohlberg 's theory is overly androcentric. Kohlberg 's theory was initially developed based on empirical research using only male participants; Gilligan argued that it did not adequately describe the concerns of women. Kohlberg stated that women tend to get stuck at level 3, focusing on details of how to maintain relationships and promote the welfare of family and friends. Men are likely to move on to the abstract principles, and thus have less concern with the particulars of who is involved. Consistent with this observation, Gilligan 's theory of moral development does not focus on the value of justice. She developed an alternative theory of moral reasoning based on the ethics of caring. Critics such as Christina Hoff Sommers, however, argued that Gilligan 's research is ill - founded, and that no evidence exists to support her conclusion.
Kohlberg 's stages are not culturally neutral, as demonstrated by its application to a number of different cultures. Although they progress through the stages in the same order, individuals in different cultures seem to do so at different rates. Kohlberg has responded by saying that although different cultures do indeed inculcate different beliefs, his stages correspond to underlying modes of reasoning, rather than to those beliefs.
Another criticism of Kohlberg 's theory is that people frequently demonstrate significant inconsistency in their moral judgements. This often occurs in moral dilemmas involving drinking and driving and business situations where participants have been shown to reason at a subpar stage, typically using more self - interest driven reasoning (i.e., stage two) than authority and social order obedience driven reasoning (i.e., stage four). Kohlberg 's theory is generally considered to be incompatible with inconsistencies in moral reasoning. Carpendale has argued that Kohlberg 's theory should be modified to focus on the view that the process of moral reasoning involves integrating varying perspectives of a moral dilemma rather than simply fixating on applying rules. This view would allow for inconsistency in moral reasoning since individuals may be hampered by their inability to consider different perspectives. Krebs and Denton have also attempted to modify Kohlberg 's theory to account for a multitude of conflicting findings, but eventually concluded that the theory is not equipped to take into consideration how most individuals make moral decisions in their everyday lives.
Other psychologists have questioned the assumption that moral action is primarily a result of formal reasoning. Social intuitionists such as Jonathan Haidt, for example, argue that individuals often make moral judgments without weighing concerns such as fairness, law, human rights, or abstract ethical values. Thus the arguments analyzed by Kohlberg and other rationalist psychologists could be considered post hoc rationalizations of intuitive decisions; moral reasoning may be less relevant to moral action than Kohlberg 's theory suggests.
Kohlberg 's body of work on the stages of moral development has been utilized by others working in the field. One example is the Defining Issues Test (DIT) created in 1979 by James Rest, originally as a pencil - and - paper alternative to the Moral Judgement Interview. Heavily influenced by the six - stage model, it made efforts to improve the validity criteria by using a quantitative test, the Likert scale, to rate moral dilemmas similar to Kohlberg 's. It also used a large body of Kohlbergian theory such as the idea of "post-conventional thinking ''. In 1999 the DIT was revised as the DIT - 2; the test continues to be used in many areas where moral testing is required, such as divinity, politics, and medicine.
|
how many electrons are in a single double and triple bond | Carbon -- carbon bond - wikipedia
A carbon -- carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon -- carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp³ - hybridized orbitals, but single bonds formed between carbon atoms with other hybridisations do occur (e.g. sp² to sp³). In fact, the carbon atoms in the single bond need not be of the same hybridisation. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp² - hybridized orbital and a p - orbital that is n't involved in the hybridization. A triple bond is formed with an sp - hybridized orbital and two p - orbitals from each atom. The use of the p - orbitals forms a pi bond.
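Because every bonding component is one shared electron pair, the electron count follows directly from the bond order: a single bond (one sigma pair) shares two electrons, a double bond (one sigma plus one pi) shares four, and a triple bond (one sigma plus two pi) shares six. A minimal sketch of that bookkeeping, with names of my own choosing:

```python
def bond_composition(order: int) -> dict:
    """Electron bookkeeping for a carbon-carbon bond of a given order.
    Every bond has exactly one sigma component; any additional bonding
    comes from pi components, and each component is one shared pair."""
    if order not in (1, 2, 3):
        raise ValueError("carbon-carbon bonds are single, double or triple")
    return {"sigma": 1, "pi": order - 1, "shared_electrons": 2 * order}

for order, name in [(1, "single"), (2, "double"), (3, "triple")]:
    print(name, bond_composition(order))
# single {'sigma': 1, 'pi': 0, 'shared_electrons': 2}
# double {'sigma': 1, 'pi': 1, 'shared_electrons': 4}
# triple {'sigma': 1, 'pi': 2, 'shared_electrons': 6}
```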
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon -- carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C − C skeletons. Different carbon atoms can be described by the number of carbon neighbors they each have: a primary carbon has one carbon neighbor, a secondary carbon two, a tertiary carbon three, and a quaternary carbon four.
In "structurally complex organic molecules '', it is the three - dimensional orientation of the carbon -- carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Carbon -- carbon bond - forming reactions are organic reactions in which a new carbon -- carbon bond is formed. They are important in the production of many man - made chemicals such as pharmaceuticals and plastics.
Some examples of reactions which form carbon -- carbon bonds are aldol reactions, Diels -- Alder reaction, the addition of a Grignard reagent to a carbonyl group, a Heck reaction, a Michael reaction and a Wittig reaction.
The directed synthesis of desired three - dimensional structures for tertiary carbons was largely solved during the late 20th century, but the same ability to direct quaternary carbon synthesis did not start to emerge until the first decade of the 21st century.
Relative to most bonds, a carbon -- carbon bond is very strong.
|
when did the us military first began hiring civilian employees | United States federal civil Service - Wikipedia
The United States federal civil service is the civilian workforce (i.e., non-elected and non-military, public sector employees) of the United States federal government 's departments and agencies. The federal civil service was established in 1871 (5 U.S.C. § 2101). U.S. state and local government entities often have competitive civil service systems that are modeled on the national system, in varying degrees.
According to the Office of Personnel Management, as of December 2011, there were approximately 2.79 million civil servants employed by the U.S. government. This includes employees in the departments and agencies run by any of the three branches of government (the executive branch, legislative branch, and judicial branch), such as over 600,000 employees in the U.S. Postal Service.
The majority of civil service positions are classified as competitive service, meaning employees are selected based on merit after a competitive hiring process for positions that are open to all applicants. The Senior Executive Service (SES) is the classification for non-competitive, senior leadership positions filled by career employees or political appointments (e.g., Cabinet members, ambassadors, etc.). Excepted service positions (also known as unclassified service) are non-competitive jobs in certain federal agencies with security and intelligence functions (e.g., the CIA, FBI, State Department, etc.) that are authorized to create their own hiring policies and are not subject to most appointment, pay, and classification laws.
In the early 19th century, positions in the federal government were held at the pleasure of the president -- a person could be fired at any time. The spoils system meant that jobs were used to support the American political parties, though this was gradually changed by the Pendleton Civil Service Reform Act of 1883 and subsequent laws. By 1909, almost two - thirds of the U.S. federal workforce was appointed based on merit, that is, qualifications measured by tests. Certain senior civil service positions, including some heads of diplomatic missions and executive agencies, are filled by political appointees. Under the Hatch Act of 1939, civil servants are not allowed to engage in political activities while performing their duties. In some cases, an outgoing administration will give its political appointees positions with civil service protection in order to prevent them from being fired by the new administration; this is called "burrowing '' in civil service jargon.
Employees in the civil service work under one of the independent agencies or one of the 15 executive departments.
In addition to departments, there are a number of staff organizations grouped into the Executive Office of the President. These include the White House staff, the National Security Council, the Office of Management and Budget, the Council of Economic Advisers, the Office of the U.S. Trade Representative, the Office of National Drug Control Policy and the Office of Science and Technology Policy.
There are also independent agencies such as the United States Postal Service, the National Aeronautics and Space Administration (NASA), the Central Intelligence Agency (CIA), the Environmental Protection Agency (EPA), and the United States Agency for International Development (USAID). In addition, there are government - owned corporations such as the Federal Deposit Insurance Corporation (FDIC) and the National Railroad Passenger Corporation.
There were 456 federal agencies in 2009.
The pay structure of the United States civil service has evolved into a complex set of pay systems. These include principally the General Schedule (GS) for white - collar employees, the Federal Wage System (FWS) for blue - collar employees, the Senior Executive Service (SES) for executive - level employees, the Foreign Service Schedule (FS) for members of the Foreign Service, and more than a dozen alternate or experimental pay systems, the first of which was the China Lake Demonstration Project. The current system began with the Classification Act of 1923 and was refined by the Classification Act of 1949. These foundational acts have since been amended through executive orders and through published amendments in the Federal Register that set forth approved changes to the regulatory structure of the federal pay system. The common goal among all pay systems is to pay equitable salaries to all workers regardless of system, group or classification; this is referred to as pay equity ("equal pay for equal work ''). Select careers in high demand may be subject to a special rate table, which can pay above the standard GS tables. These careers include certain engineering disciplines and patent examiners.
The General Schedule (GS) covers white - collar workers at grades 1 through 15 and includes most professional, technical, administrative, and clerical positions in the federal civil service. The Federal Wage System or Wage Grade (WG) schedule includes most federal blue - collar workers. As of September 2004, 71 % of federal civilian employees were paid under the GS; the remaining 29 % were paid under other systems such as the Federal Wage System for federal blue - collar civilian employees, the Senior Executive Service / Senior Level and the Executive Schedule for high - ranking federal employees, and the pay schedules for the United States Postal Service and the Foreign Service. In addition, some federal agencies -- such as the United States Securities and Exchange Commission, the Federal Reserve System, and the Federal Deposit Insurance Corporation -- have their own unique pay schedules.
All federal employees in the GS system receive a base pay that is adjusted for locality. Locality pay varies, but is at least 10 % of base salary in all parts of the United States. Each grade carries a fixed range of base salaries, the lowest and highest amounts a person can earn without over-time pay or a merit - based bonus. Actual salaries differ once locality pay is added (for instance, a GS - 9, step 1 in rural Arkansas may start at $50,598 versus $61,084 in San Jose, California), but all base salaries fall within the statutory ranges in effect as of January 2018.
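The locality adjustment itself is just a percentage multiplier applied to the GS base rate. The sketch below illustrates only that arithmetic; the base rate and locality percentages used here are hypothetical stand-ins chosen to roughly reproduce the rural Arkansas versus San Jose example quoted above, not figures taken from official OPM tables.

```python
def locality_adjusted_pay(base_salary: float, locality_rate: float) -> float:
    """Apply a locality percentage on top of the GS base rate."""
    return round(base_salary * (1 + locality_rate), 2)

# Hypothetical GS-9 step 1 base rate and locality percentages, chosen only
# to illustrate the arithmetic behind the example quoted in the text.
gs9_step1_base = 43_251.00
localities = {"Rest of U.S. (e.g. rural Arkansas)": 0.17, "San Jose, CA": 0.41}

for name, rate in localities.items():
    print(name, locality_adjusted_pay(gs9_step1_base, rate))
# Rest of U.S. (e.g. rural Arkansas) 50603.67
# San Jose, CA 60983.91
```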
Nineteen percent of federal employees earned salaries of $100,000 or more in 2009. The average federal worker 's pay was $71,208 compared with $40,331 in the private sector, although under Office of Management and Budget Circular A-76, most menial or lower paying jobs have been outsourced to private contractors. In 2010, there were 82,034 workers, 3.9 % of the federal workforce, making more than $150,000 annually, compared to 7,240 in 2005. GS salaries are capped by law so that they do not exceed the salary for Executive Schedule IV positions. The increase in civil servants making more than $150,000 resulted mainly from an increase in Executive Schedule salary approved during the Administration of George W. Bush, which raised the salary cap for senior GS employees slightly above the $150,000 threshold.
Basic pay rates for the Senior Executive Service (i.e. non-Presidentially appointed civil servants above GS - 15) ranged from $119,554 to $179,700 in 2012.
As of January 2009, the Federal Government, excluding the Postal Service and soldiers, employed about 2 million civilian workers.
The Federal Government is the nation 's single largest employer. Although most federal agencies are based in the Washington, D.C. region, only about 16 % (or about 288,000) of the federal government workforce is employed in this region.
Public support in the United States for civil service reform strengthened following the assassination of President James Garfield. The United States Civil Service Commission was created by the Pendleton Civil Service Reform Act, which was passed into law on January 16, 1883. The commission was created to administer the civil service of the United States federal government. The law required federal government employees to be selected through competitive exams on the basis of merit; it also prevented elected officials and political appointees from firing civil servants, removing civil servants from the influences of political patronage and partisan behavior. However, the law did not apply to state and municipal governments.
Effective January 1, 1978, the commission was renamed the Office of Personnel Management under the provisions of Reorganization Plan No. 2 of 1978 (43 F.R. 36037, 92 Stat. 3783) and the Civil Service Reform Act of 1978.
United States civil service exams have since been abolished for many positions, because statistics showed that they did not allow hiring of minorities in accordance with affirmative action guidelines.
This act abolished the United States Civil Service Commission and created the U.S. Office of Personnel Management (OPM), the Federal Labor Relations Authority (FLRA) and the U.S. Merit Systems Protection Board (MSPB). OPM primarily provides management guidance to the various agencies of the executive branch and issues regulations that control federal human resources. FLRA oversees the rights of federal employees to form collective bargaining units (unions) and to engage in collective bargaining with agencies. MSPB conducts studies of the federal civil service and mainly hears the appeals of federal employees who are disciplined or otherwise separated from their positions. This act was an effort to replace incompetent officials.
President Donald Trump signed three executive orders designed to enforce merit - system principles in the civil service and to improve efficiency, transparency, and accountability in the federal government. According to reports, the executive orders are expected to make civil service more accountable and are supposed to benefit taxpayers and federal workers alike.
|
how many times did the us women's soccer team win the world cup | United States Women 's national Soccer team - Wikipedia
The United States women 's national soccer team (USWNT) represents the United States in international women 's soccer. The team is the most successful in international women 's soccer, winning three Women 's World Cup titles (including the first ever Women 's World Cup in 1991), four Olympic women 's gold medals (including the first ever Olympic Women 's soccer tournament in 1996), seven CONCACAF Gold Cup wins, and ten Algarve Cups. It medaled in every single World Cup and Olympic tournament in women 's soccer history from 1991 to 2015, before being knocked out in the quarterfinal of the 2016 Summer Olympics. The team is governed by United States Soccer Federation and competes in CONCACAF (the Confederation of North, Central American and Caribbean Association Football).
After being ranked No. 2 on average from 2003 to 2008 in the FIFA Women 's World Rankings, the team was ranked No. 1 continuously from March 2008 to November 2014, falling back behind Germany, the only other team to occupy the No. 1 position in the ranking 's history. The team dropped to 2nd on March 24, 2017, due to its last - place finish in the 2017 SheBelieves Cup, then returned to 1st on June 23, 2017, after victories in friendlies against Russia, Sweden, and Norway. The team was selected as the U.S. Olympic Committee 's Team of the Year in 1997 and 1999, and Sports Illustrated chose the entire team as 1999 Sportswomen of the Year for its usual Sportsman of the Year honor. On April 5, 2017, U.S. Women 's Soccer and U.S. Soccer reached a deal on a new collective bargaining agreement that would, among other things, lead to a pay increase.
The team played its first match at the Mundialito tournament on August 18, 1985, coached by Mike Ryan, in which they lost 1 -- 0 to Italy.
The U.S. team 's first major victory came at the 1991 World Championship (retroactively named the 1991 Women 's World Cup). The U.S. cruised to lopsided victories in the quarterfinals and semifinals, before defeating Norway 2 -- 1 in the final. Michelle Akers was the team 's leading scorer with 10 goals, including both of the team 's goals in the final, and Carin Jennings won the Golden Ball as the tournament 's best player.
Julie Foudy, Kristine Lilly, and the rest of the 1999 team sparked a revolution in women 's team sports in America. Arguably their most influential and memorable victory came in the 1999 World Cup, when they defeated China 5 -- 4 in a penalty shoot - out following a 0 -- 0 draw after extra time. With this win they emerged onto the world stage and brought significant media attention to women 's soccer and athletics. On July 10, 1999, over 90,000 people (the largest crowd ever for a women 's sporting event and one of the largest in the world for a tournament final) filled the Rose Bowl to watch the United States play China in the final. After a back - and - forth game, the score was tied 0 -- 0 at full - time, and remained so after extra time, leading to a penalty kick shootout. With Briana Scurry 's save of China 's third kick, the score was 4 -- 4 with only Brandi Chastain left to shoot. She scored and won the game for the United States. Chastain famously dropped to her knees and whipped off her shirt, celebrating in her sports bra, an image that later made the cover of Sports Illustrated and the front pages of newspapers around the country and the world. The win inspired many girls to take up soccer.
In the 2003 FIFA Women 's World Cup, the U.S. defeated Norway 1 -- 0 in the quarterfinals, but lost 0 -- 3 to Germany in the semifinals. The team then defeated Canada 3 -- 1 to claim third place. Abby Wambach was the team 's top scorer with three goals; Joy Fawcett and Shannon Boxx made the tournament 's all - star team.
At the 2007 FIFA Women 's World Cup, the U.S. defeated England 3 -- 0 in the quarterfinals but then suffered its most lopsided loss in team history when it lost to Brazil 0 -- 4 in the semifinals. The U.S. recovered to defeat Norway to take third place. Abby Wambach was the team 's leading scorer with 6 goals, and Kristine Lilly was the only American named to the tournament 's all - star team.
In the quarterfinal of the 2011 Women 's World Cup in Germany, the U.S. defeated Brazil 5 -- 3 on penalty kicks. Abby Wambach 's goal in the 122nd minute to tie the game 2 -- 2 has been voted the greatest goal in U.S. soccer history and the greatest goal in Women 's World Cup history. The U.S. then beat France 3 -- 1 in the semifinal, but lost to Japan 3 -- 1 on penalty kicks in the Final after drawing 1 -- 1 in regulation and 2 -- 2 in overtime. Hope Solo was named the tournament 's best goalkeeper and Abby Wambach won the silver ball as the tournament 's second best player.
In the 2012 Summer Olympics, the U.S. won the gold medal for the fourth time in five Olympics by defeating Japan 2 -- 1 in front of 80,203 fans at Wembley Stadium, a record for a women 's soccer game at the Olympics. The United States advanced to face Japan for the gold medal by winning the semifinal against Canada, a 4 -- 3 victory at the end of extra time. The 2012 London Olympics marked the first time the USWNT won every game en route to the gold medal and set an Olympic women 's team record of 16 goals scored.
The National Women 's Soccer League started in 2013, and provided competitive games as well as opportunities to players on the fringes of the squad. The U.S. had a 43 - game unbeaten streak that spanned two years -- the streak began with a 4 -- 0 win over Sweden in the 2012 Algarve Cup, and came to an end after a 1 -- 0 loss against Sweden in the 2014 Algarve Cup.
The USA defeated Japan 5 -- 2 in the final of the 2015 World Cup, becoming the first team in history to win three Women 's World Cup titles. In the 16th minute, Carli Lloyd achieved the fastest hat - trick from kick - off in World Cup history, and Abby Wambach was greeted with a standing ovation for her last World Cup match. Following their 2015 World Cup win, the team was honored with a ticker tape parade in New York City, the first for a women 's sports team. Sports Illustrated celebrated them with 25 covers of the magazine. President Barack Obama welcomed them to the White House, stating, "This team taught all of America 's children that ' playing like a girl ' means you 're a badass, '' before going on to say, "' playing like a girl ' means being the best. ''
On December 16, 2015, however, a 0 -- 1 loss to China in Wambach 's last game meant the team 's first home loss since 2004, ending their 104 - game home unbeaten streak.
In the 2016 Summer Olympics, the U.S. drew against Sweden in the quarterfinal; in the following penalty kick phase, Sweden won the game 4 -- 3. The loss marked the first time that the USWNT did not advance to the gold medal game of the Olympics, and the first time that the USWNT failed to advance to the semifinal round of a major tournament.
After the defeat in the 2016 Olympics, the USWNT underwent a year of experimentation that saw them lose three home games. If not for a comeback win against Brazil, the team would have lost four home games in one year, a low never before seen in its history. In 2017 the USWNT played 12 games against teams ranked in the top 15 in the world. The team heads into World Cup qualifying in the fall of 2018.
U.S. TV coverage for the five Women 's World Cups from 1995 to 2011 was provided by ESPN / ABC and Univision, while coverage rights for the three Women 's World Cups from 2015 to 2023 were awarded to Fox Sports and Telemundo. In May 2014 a deal was signed to split TV coverage of other USWNT games between ESPN, Fox Sports, and Univision through the end of 2022. The USWNT games in the 2014 CONCACAF Women 's Championship and the 2015 Algarve Cup were broadcast by Fox Sports.
The 1999 World Cup final set the original record for largest US television audience for a women 's soccer match with 18 million viewers on average and was the most viewed English - language US broadcast of any soccer match until the 2015 FIFA Women 's World Cup Final between the United States and Japan.
The 2015 Women 's World Cup Final between the USA and Japan was the most watched soccer match -- men 's or women 's -- in American broadcast history. It averaged 23 million viewers and higher ratings than the NBA finals and the Stanley Cup finals. The final was also the most watched US - Spanish language broadcast of a FIFA Women 's World Cup match in history.
Overall, there were over 750 million viewers for the 2015 FIFA Women 's World Cup, making it the most watched Women 's World Cup in history. The FIFA Women 's World Cup is now the second most watched FIFA tournament, with only the men 's FIFA World Cup attracting more viewership.
The 1999 World Cup final, in which the USA defeated China, set a world attendance record for a women 's sporting event of 90,185 in a sellout at the Rose Bowl in Southern California. The record for Olympic women 's soccer attendance was set by the 2012 Olympic final between the USWNT and Japan, with 80,203 spectators at Wembley Stadium.
The following 23 players were named to the roster for the 2018 Tournament of Nations.
Caps and goals are current as of August 2, 2018, after match against Brazil.
The following players were also named to a squad in the last 12 months.
The following is a list of match results in the last 12 months, as well as any future matches that have been scheduled.
The two highest - profile tournaments the U.S. team participates in are the quadrennial FIFA Women 's World Cup and the Olympic Games.
The team has participated in every World Cup through 2015 and won a medal in each.
The US team directly qualified for the 1999 FIFA Women 's World Cup as hosts of the event. Because of this, they did not participate in the 1998 CONCACAF Championship, which was the qualification tournament for the World Cup.
The team has participated in every Olympic tournament through 2016 and reached the gold medal game in each until 2016, when they were eliminated in the quarterfinals on a penalty shootout loss to Sweden.
The Algarve Cup is a global invitational tournament for national teams in women 's soccer hosted by the Portuguese Football Federation (FPF). Held annually in the Algarve region of Portugal since 1994, it is one of the most prestigious women 's football events, alongside the Women 's World Cup and Women 's Olympic Football.
The women 's national team boasts the first six players in the history of the game to have earned 200 caps. These players have since been joined in the 200 - cap club by several players from other national teams, as well as by five more Americans: Kate Markgraf, Abby Wambach, Heather O'Reilly, Carli Lloyd and Hope Solo. Kristine Lilly and Christie Rampone are the only players to earn more than 300 caps.
In March 2004, Mia Hamm and Michelle Akers were the only two women and the only two Americans named to the FIFA 100, a list of the 125 greatest living soccer players chosen by Pelé as part of FIFA 's centenary observances.
The USWNT All - Time Best XI was chosen in December 2013 by the United States Soccer Federation.
The record for most goals scored in a match by a member of the USWNT is five, which has been accomplished by seven players.
|
when does chloe find out about clarks powers | Chloe Sullivan - wikipedia
Chloe Sullivan is a fictional character in the television series Smallville, which is based on the Superman and Superboy comics published by DC Comics. Portrayed by series regular Allison Mack, the character was created exclusively for Smallville by series developers Alfred Gough and Miles Millar. Other than main character Clark Kent, Chloe is the only main character to last the duration of the show, though Mack only signed on for five episodes in the tenth and final season. The character has also appeared in various literature based on Smallville, an internet series, and was then later adapted back into the original Superman comics which inspired Smallville.
In Smallville, Chloe is Clark Kent 's best friend, Lois Lane 's cousin, and the editor of the high school newspaper the Torch; she notices that the meteor rocks (kryptonite) are mutating the citizens of Smallville which she tracks on her "Wall of Weird ''. She generally teams up with Clark and Pete Ross in tracking and stopping meteor - infected people from harming other citizens. In the first five seasons, Chloe harbors an unrequited love for Clark, but eventually accepts her place as his best friend and nothing more. In later seasons, Chloe discovers she has a meteor rock power of her own, until she apparently loses them during an encounter with the alien supervillain Brainiac. In terms of romantic storylines, after Superman supporting character Jimmy Olsen is introduced to the show, he becomes Chloe 's boyfriend and later husband, but the pair later divorce. In the show 's final seasons, Chloe finds romance with Oliver Queen, otherwise known as the costumed vigilante - archer Green Arrow, whom she eventually marries and has a son with.
Chloe Sullivan has been characterized as independent, intelligent, curious and impulsive by both the writers and the actress that portrays her. The latter two characteristics often cause Chloe to get into trouble with both her friends and with local industrialists Lionel Luthor and his son Lex, two of the show 's primary antagonists. Mack has been recognized with multiple award nominations and wins for her portrayal of Chloe.
Introduced in the series pilot, Chloe spends much of season one helping her best friend Clark Kent (Tom Welling) stop citizens of Smallville -- people who have developed special abilities from genetic mutations caused by the meteor rocks that fell on Smallville in 1989 -- from committing crimes. It is established at the start of the first season that Chloe is the editor of the school newspaper, the Torch. Her journalistic curiosity -- always wanting to "expose falsehoods '' and "know the truth '' -- causes tension with her friends, especially when she is digging into Clark 's past in the season two episode "Lineage ''. In the early seasons, Chloe hides the fact that she is in love with Clark, although the feeling is not reciprocated; she confesses her true feelings to Clark in season two 's "Fever '' while he is sick, but he calls out Lana Lang 's name in his delirium. Her feelings for Clark get in the way of her better judgment as she betrays his trust in the season two finale, after witnessing him and Lana Lang (Kristin Kreuk) sharing a kiss in his barn, and agrees to uncover information on Clark for Lionel Luthor (John Glover) in exchange for a job at the Daily Planet.
Chloe and Clark patch their relationship in the season three episode "Whisper '', after Clark discovers that she has been helping Lionel. When Chloe stops her investigation, Lionel has her fired from the Daily Planet, and also fires her father from his job at LuthorCorp. In season three 's "Forsaken '', Chloe decides to assist Lex Luthor (Michael Rosenbaum), Lionel 's son, with getting Lionel arrested for the murder of Lex ' grandparents; Chloe 's hope is to get out from under Lionel 's control. In the season three finale, the F.B.I. place Chloe and her father in a safe - house until Lionel 's trial; unfortunately, the safe - house explodes once Chloe and her father enter and they are presumed dead. Chloe 's cousin, Lois Lane (Erica Durance), comes to Smallville to investigate Chloe 's death in the fourth season premiere. In season four 's "Gone '', Clark and Lois team - up and discover that Lex 's security team found the explosives in the safe - house, pulled Chloe and her father to safety before they detonated, and that he has been hiding her ever since. After Chloe 's testimony in the same episode, Lionel is convicted of murder and sentenced to life imprisonment. In the season four episode "Pariah '', Chloe discovers Clark 's secret when Alicia Baker (Sarah Carter), Clark 's girlfriend, decides that it needs to be exposed to the world in order for him to feel more comfortable about who he really is. Alicia hopes that Chloe will write a story exposing Clark, but Chloe decides that Clark kept his secret for a reason and decides not to write the story.
Chloe finally reveals to Clark in the season five premiere that she has known his secret, but that she wanted him to be comfortable enough to tell her on his own. At the same time, Clark reveals that he was not infected by the meteor rocks in Smallville, as Chloe initially suspected, but that he is in fact an alien who was sent to Earth as a baby during the meteor shower of 1989. In season five 's "Thirst '', Chloe earns her dream job at the Daily Planet, starting in the basement. In the season six episode "Justice '', Chloe begins assisting Green Arrow (Justin Hartley) and his team of superheroes under the codename "Watchtower ''. In "Freak '', she discovers she herself is meteor - infected, with an unknown ability, and begins to worry that she is a "time bomb '' heading towards insanity. She later discovers in "Progeny '' that her institutionalized mother, Moira Sullivan (Lynda Carter), is meteor - infected as well. In the season finale, Chloe learns that her special power lets her heal any wound and even reverse death, when it activates to save Lois. In season seven 's "Descent '', when Chloe attempts to keep information regarding "The Traveler '' a secret from Lex, who is unaware that "The Traveler '' is really Clark, he fires her from her job at the Daily Planet. When in "Sleeper '', Lana falls into a catatonic state having been attacked by the Kryptonian artificial intelligence known as Brainiac (James Marsters), Chloe takes over Lana 's Isis Foundation, a free clinic for individuals who have been infected by the meteor rocks. In the seventh season finale, Chloe is attacked by Brainiac, but her healing powers prevent him from harming her. When she returns home, Jimmy Olsen (Aaron Ashmore), her on - again - off - again boyfriend since season six, proposes marriage. Before Chloe can answer the Department of Domestic Security (DDS) appears and arrests her for hacking into the government database.
At the start of season eight, it is revealed that Chloe was not arrested by the DDS, but by Lex 's security personnel impersonating DDS agents. While subjected to their tests, Chloe discovers that her altercation with Brainiac has apparently caused her to lose her meteor - related powers, but instilled two new abilities: vast super intelligence and technopathy. Returning to Smallville, Chloe reopens the Isis Foundation. Though she loves Jimmy, she finds herself attracted to paramedic Davis Bloome (Samuel Witwer). In the episode "Abyss '', Brainiac 's infestation causes Chloe to lose her memories. Clark takes Chloe to his biological father Jor - El, who restores her memories. After Chloe marries Jimmy in "Bride '', she is kidnapped by Doomsday, a genetically engineered killing machine bent on destroying Earth, and becomes Brainiac 's vessel once again. Brainiac attempts to drain the world of all its human knowledge but is stopped and removed from Chloe 's body by the Legion, superheroes from the future, in "Legion ''. In "Hex '', Chloe assumes the codename Watchtower full time because she feels her life needs more meaning. Chloe discovers that Davis is Doomsday in "Eternal ''. She attempts to assist Davis ' suicide using kryptonite; when this fails, she stays by his side in order to keep Doomsday under control. In the episode "Beast '', she and Davis leave town together; Chloe reasons it will protect Clark. In the season eight finale, she uses black kryptonite to separate Davis from Doomsday; Clark buries Doomsday beneath Metropolis. However, when Davis discovers that Chloe is still in love with Jimmy, he stabs Jimmy and attempts to kill Chloe; Jimmy impales him on a metal rod, and they both die. Chloe vows to keep the Watchtower Jimmy gave her as a wedding gift open, in the hope that all lost heroes -- namely Oliver and his team -- will find their way home.
At the start of the ninth season, using Oliver 's money, Chloe transforms the Watchtower into an information fortress and superhero headquarters. In this capacity, she acquires a rival in Tess 's computer expert Stuart Campbell (Ryan McDonell); her status as superhero information broker also makes her a target for Checkmate bosses Amanda Waller (Pam Grier) and Maxwell Lord (Gil Bellows). Over the course of the season, she grows romantically close to Oliver. In the season ten première, when Oliver is kidnapped by Suicide Squad leader Rick Flag (Ted Whittall), Chloe risks her own sanity by putting on the helmet of Doctor Fate to learn his location. With the information acquired from Fate 's helmet, she organizes a switch for Oliver; in Flag 's captivity, Chloe fakes suicide and goes off - the - grid. Chloe returns in "Collateral '', and reveals that she has been helping Clark, Oliver, and the rest of the heroes while in hiding, having blackmailed the Suicide Squad into helping her. Afterward, she resumes her relationships with the show 's protagonists. In the episode "Fortune '', Chloe decides to move to Star City to return to journalism following her marriage to Oliver Queen. In a flashforward in the series finale, Chloe is now the mother to a young boy, but remains in touch with Clark and Lois.
Chloe Sullivan was introduced by the show 's creators to be a "Lois Lane archetype '' as well as Smallville 's "outsider '', which series developers Gough and Millar felt the show needed in order to have a character that notices the strange happenings in Smallville. She is an original creation of Al Gough and Miles Millar, not having first appeared in the DC Comics Universe, unlike the other main characters Clark Kent, Lana Lang, Lex Luthor, and Pete Ross. When they first began developing the series, Gough and Millar had intended for Chloe to have an "ethnic background ''. After learning about Smallville from the show 's casting director, Dee Dee Bradley, Allison Mack toyed with the idea of auditioning for the role of Lana Lang, but chose instead to audition for the role of Chloe Sullivan. Gough and Millar felt she had a "rare ability to deliver large chunks of expositionary (sic) dialogue conversationally '', and decided to cast her against their initial intention to give the character an ethnic origin. According to Mack, she got the role because she went into her second audition with a "very flippant attitude ''. Kristen Bell also auditioned for the role of Chloe Sullivan; she would eventually go on to star in the television series Veronica Mars. Aside from Allison Mack, Roan Curtis portrayed Chloe as a child in the season six episode "Progeny '', with Victoria Duffield taking on the role in the eighth season episode "Abyss ''. Mack enjoys the fact that her character was created specifically for the show, because she feels like she does not have to worry about being compared to someone else in the same role, which she likens to people comparing Michael Rosenbaum 's performance as Lex Luthor to Gene Hackman 's portrayal in the Superman film series of the 1970s -- 1980s. Mack only signed on for five episodes of the tenth and final season.
Allison Mack was disappointed that the character "lost some of her backbone '' in the second season. The second season was about exploring Chloe 's heart, and the idea of her being this "lovelorn (...) angsty teenager ''. As Mack describes her, "(Chloe) was a little spineless and a little bit too much of a pushover (in season two). '' Mack does believe that by the end of the season Chloe manages to get some of that integrity back. The actress likes to make sure that her character is kept "smart '' and "ambitious '', but at the end of season two Chloe 's impulsiveness causes her to get stuck under Lionel 's control, when she "spitefully '' agrees to uncover Clark 's secrets for Lionel Luthor after Clark is not honest with her about his newly established relationship with Lana.
For season three, Mack wanted the character to be given a major obstacle to overcome, something that would help the character mature. The obstacle in question became Lionel 's control over Chloe, after she made a deal to spy on Clark. Allison Mack believes that Chloe is in her own comfort zone while she is working at the Torch, as she is in complete control, but likens Chloe being under Lionel 's control to that of a "caged animal ''. When she ruins the lives of a mother and her son in season three 's "Truth '', after exposing the mother as a fugitive from the law, Chloe is forced to look deeper into her own self. Mack believes that this event was a turning point for Chloe 's maturity; it is the moment that she realizes that there needs to be a line she should never cross. After it is revealed to Clark in the season five premiere that Chloe knows his secret, the character becomes a larger part of the storyline for the show. Knowing Clark 's secret allowed Chloe to finally come to terms with her feelings for Clark, and recognize where their relationship will always be; Chloe 's acceptance of her place in Clark 's life provides a means for the two to have a more meaningful friendship, without the concerns of Chloe 's unrequited love. According to Mack, Chloe has learned to evolve her love for Clark into something more "genuine '' and "selfless ''.
For the actress, having Chloe become part of the meteor infected community in season six allowed Mack 's character to continue to evolve. Mack views this transition as a means for her character to become more emotionally connected to those people -- the meteor infected -- she spent five seasons trying to expose to the public. Being infected by the meteors gives Chloe motivation to try to understand them and allows her to grow closer to Clark, as she can better understand what it feels like to live in a world where you have a special ability. Writer Holly Harold believes that, in addition to being infected by the meteor rocks, bringing Lois into the journalistic field also provides Chloe with a lot of ammunition for growth and development. Lois 's presence at the Daily Planet allows Chloe the chance to reflect upon herself, and discover what things are most important to her -- her career or her family and friends. The competition that Lois provides is beneficial, as it gives Chloe a chance to bring out the best in herself.
Allison Mack characterizes Chloe as being a "misfit '' during the first season; more of "a really smart girl with attitude ''. She goes on to describe Chloe as intelligent and independent. Another of Chloe 's defining characteristics is her need to "expose falsehoods '' and find the truth in every situation. The character is curious, and wants to be honest with people. She is always trying to make sense of the situation. Next to her curiosity, her impulsiveness is a key characteristic that eventually leaves her under the control of Lionel Luthor, when she offers to uncover information on Clark for Lionel. The reason for this betrayal is based on Chloe 's love for Clark. As Allison Mack explains, Chloe is so blinded by her love for Clark that she neglects to see all of the mistakes that he makes. It is this unrequited love for Clark that "drives (Chloe) to be as ambitious and as focused as she is ''.
A leading theory among audiences was that Chloe would eventually change her name to Lois Lane, Clark 's wife in the comics, as she embodies various characteristics that Lois Lane has in the comic books. The creative team removed the notion that Chloe was going to turn into Clark 's future wife when they introduced Lois Lane in season four. Though the characters share similarities, according to Mack, Chloe and Lois are more "different shades of the same color (...) Chloe is a softer version of Lois ''. Chloe 's upbringing allows her to be less jaded than Lois. Chloe also looks to the future, whereas Lois is more shortsighted.
The season six finale reveals that Chloe has the ability to heal others. Mack describes Chloe 's newfound meteor power as similar to "empathy ''. The actress further defines the power as the ability to heal others by taking their pain and making it her own. Writer Todd Slavkin contends that giving Chloe the power to heal was the best choice for the character. According to Slavkin, Chloe has sacrificed so much in her life for the greater good that it only seemed natural that her meteor power would reflect that. For the writer, it did not make sense for her ability to be something "malicious and evil and destructive ''. In season eight, Chloe discovers that she also has super-intelligence -- being able to solve complex algorithms faster than LuthorCorp 's most powerful supercomputer. She and Clark later deduce that her newfound intelligence was brought on during her encounter with Brainiac, who infected her with a part of himself during his attack.
One of Chloe 's key relationships is with the series protagonist, Clark Kent. Although believers in the "Chlois '' theory initially suspected that Chloe would eventually become Lois Lane, Clark 's future wife in the comics, Mack contends that Clark does not love Chloe in the way that she loves him. The actress does not believe that Clark 's feelings will ever change. Regardless of Clark 's feelings, Mack recognizes that Chloe is blinded by her love for Clark, which ultimately affects her judgment, leading her not only to overlook Clark 's faults but also to make choices that place her character in danger. In season five, Clark finally discovers that Chloe knows his secret, and this revelation allows Chloe the opportunity to come to terms with her feelings for Clark; it also provides a means for the two to have a more meaningful friendship, without the concerns of Chloe 's unrequited love.
Speaking on the evolving relationship of Clark and Chloe, Mack believes that the season six introduction of Jimmy Olsen into Chloe 's life increased her value to Clark. Before, Chloe would drop anything for Clark; now that she has other priorities, Clark realizes how valuable she is to him. The introduction of Jimmy Olsen also provides Chloe with someone she can finally have a romantic relationship with. The relationship is strained when Chloe has to lie to cover up Clark 's secret, as well as to keep hidden the fact that she is meteor - infected. Writer Holly Harold questions whether or not Jimmy has taken over the place in Chloe 's heart that Clark occupied for so long.
Chloe 's relationship with her mother is one tackled both off - screen and behind the scenes. In a brainstorming session, Mack, Gough and Millar came up with the idea that Chloe 's mother had left her at a young age. Mack wanted to make the character a "latchkey kid '', in an effort to explain why she is out all hours of the night. Mack feels that Chloe has real abandonment issues, which play on the fact that she never feels like she is good enough for anyone. These abandonment issues were meant to provide a reason for why the character is devastated by the fact that Clark does not love her the same way that she loves him, as well as the reason for why Chloe does not have many female friends. One of Chloe 's story arcs in season five involved her finding her mother in a mental institution, and living with the fear that she will have a mental breakdown of her own and end up in a psychiatric facility. This fear also affects Clark, who worries that keeping his secret will have negative effects on Chloe, like it did Pete.
Allison Mack has been nominated for a number of awards for her role as Chloe Sullivan. She was nominated for a Saturn Award as best supporting actress in a television program in 2006 and 2007. Mack has been nominated seven consecutive times -- between 2002 and 2009 -- for Teen Choice Award 's Choice Teen Sidekick; she won the award in 2006 and 2007.
Apart from her appearances on television, Chloe has also appeared in her own online spin - off, a series of young adult novels, and a bi-monthly Smallville comic book, and in 2010 she was introduced into the official DC Comics universe.
Apart from the television series Smallville, the character of Chloe Sullivan appeared in her own web - based spin - off series, titled Smallville: Chloe Chronicles. Allison Mack continued her duties as the investigative high school reporter, with the series originally airing exclusively on AOL.com. The first volume aired between April 29, 2003 and May 20, 2003. The web series eventually made its way to Britain 's Channel 4 website. Smallville: Chloe Chronicles was created by Mark Warshaw, with the scripts written by Brice Tidwell; Allison Mack was given final script approval. This final approval allowed Mack to review and make changes to the script as she saw fit. Warshaw also communicated regularly with Gough and Millar so that he could find unique ways to expand Smallville stories over to Chloe Chronicles.
In the first volume, picking up some time after the events of season one 's "Jitters '', Chloe begins checking into the rumors of the "Level 3 '' facility at the Smallville LuthorCorp plant. Here, she starts investigating the death of LuthorCorp employee Earl Jenkins, which takes her to a research company known as Nu - Corp. Chloe interviews Nu - Corp 's Dr. Arthur Walsh, who reveals that he knows what really happened to Earl Jenkins while he was working at LuthorCorp. Walsh disappears before Chloe can get all of the information.
In volume two, Chloe is contacted by Bix, an ex-Navy SEAL and former member of LuthorCorp 's "Deletion Group '', who has information regarding Dr. Walsh 's disappearance. Walsh begins sending Chloe videos, which lead Chloe to discover that Walsh was working with Donovan Jameson, the head of Nu - Corp, and Dr. Stephen Hamilton on experiments involving the meteor rocks. Chloe and Pete Ross (Sam Jones III), who accompanies Chloe as her cameraman, learn that Jameson is experimenting on meteor infected people in order to steal their abilities. Jameson, exhibiting the same jitters as Earl Jenkins, attempts to kill Chloe and Pete to hide what he has been doing, but his jitters become uncontrollable and he kills himself in his lab. As Chloe and Pete leave the lab, they come across Lionel Luthor, leading Chloe to realize that Lionel was funding Jameson 's efforts.
The third volume of the Chloe Chronicles, titled Vengeance Chronicles, features Chloe teaming up with the "Angel of Vengeance '' Andrea Rojas (Denise Quiñones), from season five 's "Vengeance '', to stop Lex Luthor. Andrea informs Chloe that Lex turned Lionel 's "Level 3 '' facility into his own "33.1 '' research lab. Rojas, working with meteor infected individuals Yang and Molly Griggs, wants Chloe 's help to expose LuthorCorp 's experimentation on the meteor infected.
Chloe 's first appearance in literature was in the Aspect - published Smallville: Strange Visitors. Here, Chloe is conned into believing that Dr. Donald Jacobi, a "faith healer '', is interested in her research on the meteor rocks. She quickly realizes, after attending one of Jacobi 's shows, that he is nothing more than a con artist, which leads her to devote her time to proving this so that no one will fall victim to his schemes. In Smallville: Dragon, Chloe attempts to solve the murder of one of her teachers, Mr. Tait, which she and Clark believe to be the work of recently released convict Ray Dansk. While attending a party put on by Lex, Chloe is injured during an attack on the crowd by Dansk, who has turned into a reptilian creature thanks to exposure to the meteor rocks.
In 2012, the Smallville series was continued through the comic book medium with Smallville: Season 11. Written by Bryan Q. Miller, who also wrote for the television series, the first issue reveals that Chloe and Oliver Queen are living in Star City. Chloe is working for the Star City Gazette, but remains a friend and ally to the heroes. She and Lois discover a spacecraft in Earth 's atmosphere, and learn that its pilot is Chloe 's counterpart from a parallel reality (the same universe that Clark Luthor and the alternate Lionel Luthor came from). She was sent by her cousin, Lois Queen (the alternate version of Lois Lane and Clark 's ally in that world), to warn Clark of the coming "Crisis '', which destroyed her world. The counterpart dies after Oliver and Chloe take her to a hospital. After asking Lois and Oliver to steal the components and plans of Lionel Luthor 's memory device, Project Intercept, Chloe has Emil Hamilton build an upgraded version so she can recover information from the remnants of her deceased counterpart 's memories. Inside what is left of her counterpart 's mind, Chloe finds a universe coming to an end, caused by an attack led by a powerful gargantuan being; she also witnesses Lois Queen 's death. She finds she now has some of the memories of her counterpart, and discovers that her counterpart 's killer is one of the Multiverse 's guardians, the Monitors. After taking a leave of absence with Oliver, Chloe later returns as Clark begins to gather everyone to make a stand against the Monitors. At this point, Chloe is about nine months pregnant. After the Monitors ' defeat, Chloe and Oliver join the Department of Extranormal Operations. She later gives birth to a baby boy, whom she and Oliver name "Jonathan '', after Clark 's late adoptive father Jonathan Kent.
Although Chloe appeared alongside her television cohorts in the Smallville comic books, which featured tie - ins to the Chloe Chronicles webisodes, DC writers hoped to bring Chloe into DC continuity at least as early as 2007. The character ultimately made her first appearance in the mainstream DC Comics Universe in 2010. According to writer Kurt Busiek, the problem of bringing Chloe into the mainstream comic book universe, and keeping her television background, was that she would have filled two roles: "the Girl from Back Home and the Reporter ''. Those roles were already filled by the adult comic book versions of Lana Lang and Lois Lane, so the plan was to give the character a new background. Busiek hoped to make Chloe the younger sister of someone Clark had gone to school with, who was now interning at the Daily Planet. Busiek believed that this would make her different from Lana and Lois, but still familiar to readers who also watched the show. Another distinguishing feature would be that this version of Chloe would not know Clark 's secret, nor would she be meteor infected. These ideas never came to fruition.
Chloe first appeared in "Jimmy Olsen 's Big Week '', a serialized Jimmy Olsen story written by Nick Spencer, beginning in Action Comics # 893 (November 2010). Spencer stated that introducing Chloe has been his first "positive contribution '' to the DC Universe. Because of the continuity differences between Smallville and the comic book Superman stories, Spencer chose to stay "as true to the character '' as he could by honoring her romantic history with Jimmy Olsen from later Smallville seasons, as well as her journalistic background from its early seasons. Spencer decided to introduce Chloe after he began conceiving of a clever, dogged female reporter for Jimmy Olsen to interact with, and realized that he had been subconsciously writing about Chloe.
Chloe Sullivan is mentioned during a flashback in a season three episode of Supergirl titled "Midvale '', in which she helps a teenage Alex and Kara Danvers solve the murder of a classmate through email correspondence. Paralleling the original interpretation, she is referred to as Clark Kent 's best friend, knows his secrets, and even has a "Wall of Weird ''.
|
david bowie new career in a new town lyrics | A New Career in a New Town - Wikipedia
"A New Career in a New Town '' is an instrumental piece by David Bowie, recorded for his 1977 album Low.
The piece, which has no lyrics, is still autobiographical, like many of Bowie 's other pieces on this album. The title, "A New Career in a New Town, '' reflects Bowie 's move from the United States to Europe. Despite a distant tone, the upbeat nature of the piece presents a sort of optimism in having a chance to start over.
This song serves as part of the "binding '' for the A-side of Low, which starts with a similarly upbeat instrumental, "Speed of Life ''. The song relies heavily on Brian Eno 's synthesizer collection and techniques. At the forefront of the piece is Bowie 's harmonica solo, which echoes over the rest of the song and provides a contrast between its acoustic purity and the heavily electrified band. In fact, considering the use of the Harmonizer on Dennis Davis ' drum set and the heavy amplification of the other acoustic instruments, the harmonica is the most undoctored instrument on the recording.
The song 's harmonica melody is recalled in "I Ca n't Give Everything Away, '' the final track from Bowie 's final album, Blackstar.
No officially released live versions of this recording exist, although it was often used during the A Reality Tour in 2003 as a pre-recorded live video intro while David Bowie and his band were walking on stage.
The song was performed live during the second encore of the Heathen Tour by Bowie in 2002, in which Bowie played the whole Low album.
|
when does episode 11 of marvels runaways come out | Runaways (TV series) - wikipedia
Marvel 's Runaways, or simply Runaways, is an American web television series created for Hulu by Josh Schwartz and Stephanie Savage, based on the Marvel Comics superhero team of the same name. It is set in the Marvel Cinematic Universe (MCU), sharing continuity with the films and other television series of the franchise. The series is produced by ABC Signature Studios, Marvel Television and Fake Empire Productions, with Schwartz and Savage serving as showrunners.
Rhenzy Feliz, Lyrica Okano, Virginia Gardner, Ariela Barer, Gregg Sulkin, and Allegra Acosta star as the Runaways, six teenagers from different backgrounds who unite against their parents, the Pride, portrayed by Angel Parker, Ryan Sands, Annie Wersching, Kip Pardue, Ever Carradine, James Marsters, Brigid Brannagh, Kevin Weisman, Brittany Ishibashi, and James Yaegashi. A film from Marvel Studios based on the Runaways began development in May 2008, before being shelved in 2013 due to the success of The Avengers. In August 2016, Marvel Television announced that Runaways had received a pilot order from Hulu, after being developed and written by Schwartz and Savage. Casting for the Runaways and the Pride were revealed in February 2017. Filming on the pilot began in Los Angeles in February 2017. The series was officially ordered by Hulu in May 2017.
The first season was released from November 21, 2017, to January 9, 2018. In January 2018, Runaways was renewed for a 13 - episode second season.
Six teenagers from different backgrounds unite against a common enemy -- their criminal parents, collectively known as the Pride.
Old Lace, a genetically engineered Deinonychus telepathically linked with Gert Yorkes, appears in the series. The character is portrayed by a puppet that was operated by six people, including one person pumping air through the puppet to show the dinosaur breathing. Barer called the puppet "incredible... You see her emotions. We do n't not make use of that. ''
Stan Lee makes a cameo appearance as a limo driver.
Brian K. Vaughan was hired to write a screenplay for Marvel Studios in May 2008, based on his Runaways. In April 2010, Marvel hired Peter Sollett to direct the film, and a month later Drew Pearce signed on to write a new script. Development on the film was put on hold the following October, and Pearce explained in September 2013 that the Runaways film had been shelved due to the success of The Avengers; the earliest the film could be made was for Phase Three of the Marvel Cinematic Universe. In October 2014, after announcing Marvel 's Phase Three films without Runaways, Marvel Studios president Kevin Feige said the project was "still an awesome script that exists in our script vault.... In our television and future film discussions, it 's always one that we talk about, because we have a solid draft there. (But) we ca n't make them all. ''
Marvel Television, based at ABC Studios, was waiting for the right showrunner before moving forward with a television take on the characters. Josh Schwartz and Stephanie Savage, whose company Fake Empire Productions had an overall deal with ABC, independently brought up the property during a general meeting with the studio, and, by August 2016, the pair had spent a year conversing with Marvel about turning Runaways into a television series. That month, Marvel 's Runaways was announced from Marvel Television, ABC Signature Studios, and Fake Empire Productions, with the streaming service Hulu ordering a pilot episode and scripts for a full season. Hulu was believed to already have "an eye toward a full - season greenlight. '' Executive producer Jeph Loeb felt "it was an easy decision '' to have Hulu air the series over the other networks Marvel Television works with, because "We were very excited about the possibility of joining a network that was young and growing in the same way that when we went to Netflix when it was young and growing on the original side. It really feels like we 're in the right place at the right time with the right show. '' Loeb and Marvel Television were also impressed by the success of Hulu 's The Handmaid 's Tale, which helped further justify the decision. Schwartz and Savage wrote the pilot, and serve as showrunners on the series, as well as executive producers alongside Loeb and Jim Chory. In May 2017, Runaways received a 10 - episode series order from Hulu at their annual advertising upfront presentation.
Fake Empire 's Lis Rowinski produces the series, and Vaughan serves as an executive consultant. On this, Vaughan noted he "did a little consulting early in the process, '' but felt the series "found the ideal ' foster parents ' in Josh Schwartz and Stephanie Savage... (who) lovingly adapted (the comics) into a stylish drama that feels like contemporary Los Angeles. '' He also praised the cast, crew and writers working on the series, and felt the pilot looked "like an Adrian Alphona comic '', referring to the artist who worked with Vaughan when he created the characters. Loeb said that it had been Schwartz and Savage who had asked that Vaughan be involved, and said that this was something that "a lot of showrunners do n't immediately gravitate towards. '' In discussions with Vaughan, Marvel found that he "really wanted to be involved and make sure that it was done, not just properly, but in a way that it would last 100 episodes. '' On January 8, 2018, Hulu renewed the series for a 13 - episode second season.
Schwartz was a fan of the Runaways comic for some time, and introduced it to Savage, saying, "When you 're a teenager, everything feels like life and death, and the stakes in this story -- really felt like that. '' Loeb described the series as The O.C. of the Marvel Cinematic Universe (MCU), which Schwartz said meant "treating the problems of teenagers as if they are adults '' and having the series "feel true and authentic to the teenage experience, even in this heightened context ''. Loeb noted that it would deal with modern political issues by saying, "This is a time when figures of authority are in question, and this is a story where teenagers are at that age where they see their parents as fallible and human. Just because someone is in charge, does n't mean that they 're here to do good. '' The producers did note that the series would explore the parents ' perspective as well, with the pilot telling the story from the Runaways ' perspective, and the second episode showing the same story from their parents ' -- the Pride 's -- perspective, with the two stories converging midway through the first season.
Schwartz likened the tone of Runaways to that of the comics it was based on, calling it "so distinct '', saying much of the tone Vaughan used when writing the comics overlapped with the tones Schwartz and Savage like to work in. The pair were excited by the freedom given to them by Hulu over the usual broadcasters they were used to working with, such as allowing the children to swear in the show, not having set lengths for each episode, and being able to explore the parents ' story; Hulu wanted "something that felt broad and where we could push the envelope in places ''. Schwartz described the series as a coming - of - age story and a family drama, with a focus on the characters that can lead to long stretches of the series not featuring super powers, so "if you did n't see the show title, you would n't know that you were in a Marvel show for long stretches... That was our aesthetic starting place, but there are episodes where there 's some good (Marvel) stuff. ''
In February 2017, Marvel announced the casting of the Runaways, with Rhenzy Feliz as Alex Wilder, Lyrica Okano as Nico Minoru, Virginia Gardner as Karolina Dean, Ariela Barer as Gert Yorkes, Gregg Sulkin as Chase Stein, and Allegra Acosta as Molly Hernandez. Shortly after, Marvel announced the casting of the Pride, with Ryan Sands as Geoffrey Wilder, Angel Parker as Catherine Wilder, Brittany Ishibashi as Tina Minoru, James Yaegashi as Robert Minoru, Kevin Weisman as Dale Yorkes, Brigid Brannagh as Stacey Yorkes, Annie Wersching as Leslie Dean, Kip Pardue as Frank Dean, James Marsters as Victor Stein, and Ever Carradine as Janet Stein. Loeb praised casting director Patrick Rush, explaining that all of the series regulars for Runaways were the producers ' first choice for the role. The majority of the children are portrayed by "fresh faces '', which was an intentional choice.
By August 2017, Julian McMahon had been cast in the recurring role of Jonah.
Filming on the pilot began by February 10, 2017, in Los Angeles, under the working title Rugrats, and concluded on March 3. Director Brett Morgen was given free rein by Marvel and Hulu to establish the look of the series, and wanted to create a feel that was "very grounded and authentic ''. He also looked to differentiate between the hand - held, gritty world of the Runaways and the more stylistic world of the Pride. He felt the latter could be explored more in the series moving forward, but was not available to direct any more episodes of the season. Following completion of the pilot and the show 's pick - up to series, there was concern among the cast and crew that the impending writers ' strike would prevent the series from moving forward. However, the strike did not happen, and filming on the rest of the season began at the end of June, again in Los Angeles. Production on the season had concluded by October 21.
In May 2017, Siddhartha Khosla was hired to compose the music for the series. Khosla said that, due to his history as a songwriter, his scoring process involves "working on these song - stories and weaving them through different episodes ''. He described the Runaways score as being "completely synthesized '', utilizing analog synthesizers from the 1980s, specifically the Roland Juno - 60 and Oberheim Electronics ' synths. Khosla compared the "alternative feel '' of his score to Depeche Mode, adding "There is an element of rebellion, so sonically going for something that is a little bit outside the box, non-traditional, I felt was an appropriate approach. I feel like I 'm making art on this show. '' Alex Patsavas serves as music supervisor, having done so on all of Schwartz and Savage 's previous series. On January 12, 2018, a soundtrack from the first season consisting of 12 licensed tracks plus two by Khosla, was released digitally. Additionally, Khosla 's original score for the series was released digitally on January 26.
Loeb confirmed in July 2017 that the series would be set in the MCU, but that the show 's characters would not be concerned with the actions of the Avengers, for example, saying, "Would you be following Iron Man (on social media) or would you be following someone your own age? The fact that they 've found each other and they 're going through this mystery together at the moment is what we 're concerned about, not what Captain America is doing. '' The showrunners considered the series ' connection to the MCU to be "liberating '', as it allowed them to set the series in a universe where superheroics and fantasy are already established and do not need to be explained to the audience. However, the series does not draw attention to "the idea that maybe other people have powers too ''. Schwartz said, "You could read it as because they live in California, and (The Avengers) took place in New York, or because it did n't take place in our world. Marvel 's position is everything is connected, but our show is trying to walk that line where the reality our kids are facing, they are facing for the first time. '' He also said they "were very capable of telling the story that we wanted to tell independent of any of the other Marvel stories that are out there. ''
Loeb added that there were no plans to crossover across networks with the similarly themed Marvel 's Cloak & Dagger on Freeform and Marvel 's New Warriors, as Marvel wanted the series to find its footing before further connecting with other elements of the universe, though "You 'll see things that comment on each other; we try to touch base wherever we can... things that are happening in L.A. are not exactly going to be affecting what 's happening in New Orleans (where Cloak & Dagger is set)... It 's being aware of it and trying to find a way (to connect) that makes sense. ''
Runaways premiered its first three episodes on Hulu in the United States on November 21, 2017, with the first season consisting of 10 episodes, and concluding on January 9, 2018. The series aired on Showcase in Canada, premiering on November 22, and will air on Syfy in the United Kingdom beginning on April 18, 2018.
Cast members and Schwartz and Savage appeared at New York Comic Con 2017 to promote the series, where a trailer for the series was revealed, along with a screening of the first episode. The series had its red carpet premiere at the Regency Bruin Theatre in Westwood, Los Angeles on November 16, 2017.
The review aggregator website Rotten Tomatoes reported an 82 % approval rating with an average rating of 7.63 / 10 based on 56 reviews. The website 's consensus reads, "Earnest, fun, and more balanced than its source material, Runaways finds strong footing in an over-saturated genre. '' Metacritic, which uses a weighted average, assigned a score of 68 out of 100 based on 22 critics, indicating "generally favorable reviews ''.
Reviewing the first two episodes of the series, Joseph Schmidt of ComicBook.com praised the show for its faithfulness to the comics, but also for some of the changes it made, appreciating the increased focus on the parents. He thought the cast portraying the Runaways was "pretty spot on '', but "many of the parents are scene stealers '', highlighting the performances of Marsters, Wersching, and Pardue.
|
explain what it means to live in a heterogeneous society in america | Melting pot - wikipedia
The melting pot is a metaphor for a heterogeneous society becoming more homogeneous, the different elements "melting together '' into a harmonious whole with a common culture, or, vice versa, for a homogeneous society becoming more heterogeneous through the influx of foreign elements with different cultural backgrounds, potentially creating disharmony with the previous culture. Historically, it is often used to describe the assimilation of immigrants to the United States. The melting - together metaphor was in use by the 1780s. The exact term "melting pot '' came into general usage in the United States after it was used as a metaphor describing a fusion of nationalities, cultures and ethnicities in the 1908 play of the same name.
The desirability of assimilation and the melting pot model has been reconsidered by proponents of multiculturalism, who have suggested alternative metaphors to describe the current American society, such as a mosaic, salad bowl, or kaleidoscope, in which different cultures mix, but remain distinct in some aspects. Others argue that cultural assimilation is important to the maintenance of national unity, and should be promoted.
In the 18th and 19th centuries, the metaphor of a "crucible '' or "smelting pot '' was used to describe the fusion of different nationalities, ethnicities and cultures. It was used together with concepts of the United States as an ideal republic and a "city upon a hill '' or new promised land. It was a metaphor for the idealized process of immigration and colonization by which different nationalities, cultures and "races '' (a term that could encompass nationality, ethnicity and race proper) were to blend into a new, virtuous community, and it was connected to utopian visions of the emergence of an American "new man ''. While "melting '' was in common use, the exact term "melting pot '' came into general usage in 1908, after the premiere of the play The Melting Pot by Israel Zangwill.
The first use in American literature of the concept of immigrants "melting '' into the receiving culture is found in the writings of J. Hector St. John de Crevecoeur. In his Letters from an American Farmer (1782) Crevecoeur writes, in response to his own question, "What then is the American, this new man? '' that the American is one who "leaving behind him all his ancient prejudices and manners, receives new ones from the new mode of life he has embraced, the government he obeys, and the new rank he holds. He becomes an American by being received in the broad lap of our great Alma Mater. Here individuals of all nations are melted into a new race of men, whose labors and posterity will one day cause great changes in the world. ''
... whence came all these people? They are a mixture of English, Scotch, Irish, French, Dutch, Germans, and Swedes... What, then, is the American, this new man? He is either an European or the descendant of an European; hence that strange mixture of blood, which you will find in no other country. I could point out to you a family whose grandfather was an Englishman, whose wife was Dutch, whose son married a French woman, and whose present four sons have now four wives of different nations. He is an American, who, leaving behind him all his ancient prejudices and manners, receives new ones from the new mode of life he has embraced, the new government he obeys, and the new rank he holds... The Americans were once scattered all over Europe; here they are incorporated into one of the finest systems of population which has ever appeared.
In 1845, Ralph Waldo Emerson, alluding to the development of European civilization out of the medieval Dark Ages, wrote in his private journal of America as the Utopian product of a culturally and racially mixed "smelting pot '', but only in 1912 were his remarks first published. In his writing, Emerson explicitly welcomed the racial intermixing of whites and non-whites, a highly controversial view during his lifetime.
A magazine article in 1876 used the metaphor explicitly:
The fusing process goes on as in a blast - furnace; one generation, a single year even -- transforms the English, the German, the Irish emigrant into an American. Uniform institutions, ideas, language, the influence of the majority, bring us soon to a similar complexion; the individuality of the immigrant, almost even his traits of race and religion, fuse down in the democratic alembic like chips of brass thrown into the melting pot.
In 1893, historian Frederick Jackson Turner also used the metaphor of immigrants melting into one American culture. In his essay The Significance of the Frontier in American History, he referred to the "composite nationality '' of the American people, arguing that the frontier had functioned as a "crucible '' where "the immigrants were Americanized, liberated and fused into a mixed race, English in neither nationality nor characteristics ''.
In his 1905 travel narrative The American Scene, Henry James discusses cultural intermixing in New York City as a "fusion, as of elements in solution in a vast hot pot ''.
The exact term "melting pot '' came into general usage in the United States after it was used as a metaphor describing a fusion of nationalities, cultures and ethnicities in the 1908 play of the same name, first performed in Washington, D.C., where the immigrant protagonist declared:
Understand that America is God 's Crucible, the great Melting - Pot where all the races of Europe are melting and re-forming! Here you stand, good folk, think I, when I see them at Ellis Island, here you stand in your fifty groups, your fifty languages, and histories, and your fifty blood hatreds and rivalries. But you wo n't be long like that, brothers, for these are the fires of God you 've come to -- these are fires of God. A fig for your feuds and vendettas! Germans and Frenchmen, Irishmen and Englishmen, Jews and Russians -- into the Crucible with you all! God is making the American.
In The Melting Pot (1908), Israel Zangwill combined a romantic denouement with a utopian celebration of complete cultural intermixing. The play was an adaptation of William Shakespeare 's Romeo and Juliet, set in New York City. The play 's immigrant protagonist David Quixano, a Russian Jew, falls in love with Vera, a fellow Russian immigrant who is Christian. Vera is an idealistic settlement house worker and David is a composer struggling to create an "American symphony '' to celebrate his adopted homeland. Together they manage to overcome the old world animosities that threaten to separate them. But then David discovers that Vera is the daughter of the Tsarist officer who directed the pogrom that forced him to flee Russia. Horrified, he breaks up with her, betraying his belief in the possibility of transcending religious and ethnic animosities. However, unlike Shakespeare 's tragedy, there is a happy ending. At the end of the play the lovers are reconciled.
Reunited with Vera and watching the setting sun gilding the Statue of Liberty, David Quixano has a prophetic vision: "It is the Fires of God round His Crucible. There she lies, the great Melting - Pot -- Listen! Ca n't you hear the roaring and the bubbling? There gapes her mouth, the harbor where a thousand mammoth feeders come from the ends of the world to pour in their human freight ''. David foresees how the American melting pot will make the nation 's immigrants transcend their old animosities and differences and will fuse them into one people: "Here shall they all unite to build the Republic of Man and the Kingdom of God. Ah, Vera, what is the glory of Rome and Jerusalem where all nations and races come to worship and look back, compared with the glory of America, where all races and nations come to labour and look forward! ''
Zangwill thus combined the metaphor of the "crucible '' or "melting pot '' with a celebration of the United States as an ideal republic and a new promised land. The prophetic words of his Jewish protagonist against the backdrop of the Statue of Liberty allude to Emma Lazarus 's famous poem The New Colossus (1883), which celebrated the statue as a symbol of American democracy and its identity as an immigrant nation.
Zangwill concludes his play by wishing, "Peace, peace, to all ye unborn millions, fated to fill this giant continent -- the God of our children give you Peace. '' He thus expressed his hope that, through this forging process, the "unborn millions '' who would become America 's future citizens would form a unified nation at peace with itself despite its ethnic and religious diversity.
In terms of immigrants to the United States, the "melting pot '' process has been equated with Americanization, that is, cultural assimilation and acculturation. The "melting pot '' metaphor implies both a melting of cultures and intermarriage of ethnicities, yet cultural assimilation or acculturation can also occur without intermarriage. Thus African - Americans are fully culturally integrated into American culture and institutions. Yet more than a century after the abolition of slavery, intermarriage between African - Americans and other ethnicities is much less common than between different white ethnicities, or between white and Asian ethnicities. Intermarriage between whites and non-whites, and especially African - Americans, has long been a taboo in the United States, and was illegal in many US states (see anti-miscegenation laws) until 1967.
The melting pot theory of ethnic relations, which sees American identity as centered upon the acculturation or assimilation and the intermarriage of white immigrant groups, has been analyzed by the emerging academic field of whiteness studies. This discipline examines the "social construction of whiteness '' and highlights the changing ways in which whiteness has been normative to American national identity from the 17th to the 20th century.
In the late 19th and early 20th centuries, European immigration to the United States became increasingly diverse and increased substantially in numbers. Beginning in the 1890s, large numbers of Southern and Eastern European immigrant groups such as the Italians, Jews, and Poles arrived. Many returned to Europe but those who remained merged into the cultural melting pot, adopting American lifestyles. By contrast, Chinese arrivals met intense hostility and new laws in the 1880s tried to exclude them, but many arrived illegally. Hostility forced them into "Chinatowns '' or ethnic enclaves in the larger cities, where they lived a culture apart and seldom assimilated. The acquisition of Hawaii in 1898, with full citizenship for the residents of all races, greatly increased the Asian American population.
In the early 20th century, the meaning of the recently popularized concept of the melting pot was subject to ongoing debate which centered on the issue of immigration. The debate surrounding the concept of the melting pot centered on how immigration impacted American society and on how immigrants should be approached. The melting pot was equated with either the acculturation or the total assimilation of European immigrants, and the debate centered on the differences between these two ways of approaching immigration: "Was the idea to melt down the immigrants and then pour the resulting, formless liquid into the preexisting cultural and social molds modeled on Anglo - Protestants like Henry Ford and Woodrow Wilson, or was the idea instead that everyone, Mayflower descendants and Sicilians, Ashkenazi and Slovaks, would act chemically upon each other so that all would be changed, and a new compound would emerge? ''
Nativists wanted to severely restrict access to the melting pot. They felt that far too many "undesirables '', culturally inferior immigrants from Southern and Eastern Europe in their view, had already arrived. The compromises that were reached in a series of immigration laws in the 1920s established the principle that the number of new arrivals should be small, and, apart from family reunification, the inflow of new immigrants should match the ethnic profile of the nation as it existed at that time. National quotas were established that discouraged immigration from Poland, Italy and Russia, and encouraged immigration from Britain, Ireland and Germany.
Intermarriage between Euro - American men and Native American women has been common since colonial days. In the 21st century some 7.5 million Americans claim Native American ancestry. In the 1920s the nation welcomed celebrities of Native American background, especially Will Rogers and Jim Thorpe, and elected as vice president in 1928 Senator Charles Curtis, who had been brought up on a reservation and identified with his Indian heritage.
The mixing of whites and blacks, resulting in multiracial children, for which the term "miscegenation '' was coined in 1863, was a taboo, and most whites opposed marriages between whites and blacks. In many states, marriage between whites and non-whites was even prohibited by state law through anti-miscegenation laws. As a result, two kinds of "mixture talk '' developed:
As the new word -- miscegenation -- became associated with black - white mixing, a preoccupation of the years after the Civil War, the residual European immigrant aspect of the question of (ethnoracial mixture) came to be more than ever a thing apart, discussed all the more easily without any reference to the African - American aspect of the question. This separation of mixture talk into two discourses facilitated, and was in turn reinforced by, the process Matthew Frye Jacobson has detailed whereby European immigrant groups became less ambiguously white and more definitely "not black ''.
By the early 21st century, many white Americans celebrated the impact of African - American culture, especially in sports and music. Marriages between white Americans and African - Americans were still problematic in both communities. Israel Zangwill saw this coming in the early 20th century: "However scrupulously and justifiably America avoids intermarriage with the negro, the comic spirit can not fail to note spiritual miscegenation which, while clothing, commercializing, and Christianizing the ex-African, has given ' rag - time ' and the sex - dances that go with it, first to white America and then to the whole white world. ''
White Americans long regarded some elements of African - American culture as quintessentially "American '', while at the same time treating African Americans as second - class citizens. White appropriation, stereotyping and mimicking of black culture played an important role in the construction of an urban popular culture in which European immigrants could express themselves as Americans, through such traditions as blackface and minstrel shows, and later in jazz and in early Hollywood cinema, notably in The Jazz Singer (1927).
Analyzing the "racial masquerade '' that was involved in creation of a white "melting pot '' culture through the stereotyping and imitation of black and other non-white cultures in the early 20th century, historian Michael Rogin has commented: "Repudiating 1920s nativism, these films (Rogin discusses The Jazz Singer, Old San Francisco (1927), Whoopee! (1930), King of Jazz (1930) celebrate the melting pot. Unlike other racially stigmatized groups, white immigrants can put on and take off their mask of difference. But the freedom promised immigrants to make themselves over points to the vacancy, the violence, the deception, and the melancholy at the core of American self - fashioning ''.
Since World War II, the idea of the melting pot has become more racially inclusive in the United States, gradually extending also to acceptance of marriage between whites and non-whites.
This trend towards greater acceptance of ethnic and racial minorities was evident in popular culture in the combat films of World War II, starting with Bataan (1943). This film celebrated solidarity and cooperation between Americans of all races and ethnicities through the depiction of a multiracial American unit. At the time blacks and Japanese in the armed forces were still segregated, while Chinese and Indians were in integrated units.
Historian Richard Slotkin sees Bataan and the combat genre that sprang from it as the source of the "melting pot platoon '', a cinematic and cultural convention symbolizing in the 1940s "an American community that did not yet exist '', and thus presenting an implicit protest against racial segregation. However, Slotkin points out that ethnic and racial harmony within this platoon is predicated upon racist hatred for the Japanese enemy: "the emotion which enables the platoon to transcend racial prejudice is itself a virulent expression of racial hatred... The final heat which blends the ingredients of the melting pot is rage against an enemy which is fully dehumanized as a race of ' dirty monkeys. ' '' He sees this racist rage as an expression of "the unresolved tension between racialism and civic egalitarianism in American life ''.
In Hawaii, as Rohrer (2008) argues, there are two dominant discourses of racial politics, both focused on "haole '' (white people or whiteness in Hawaii) in the islands. The first is the discourse of racial harmony representing Hawaii as an idyllic racial paradise with no conflict or inequality. There is also a competing discourse of discrimination against nonlocals, which contends that "haoles '' and nonlocal people of color are disrespected and treated unfairly in Hawaii. As negative referents for each other, these discourses work to reinforce one another and are historically linked. Rohrer proposes that the question of racial politics be reframed toward consideration of the processes of racialization themselves -- toward a new way of thinking about racial politics in Hawaii that breaks free of the not racist / racist dyad.
Throughout the history of the modern Olympic Games, the theme of the United States as a melting pot has been employed to explain American athletic success, becoming an important aspect of national self - image. The diversity of American athletes in the Olympic Games in the early 20th century was an important avenue for the country to redefine a national culture amid a massive influx of immigrants, as well as American Indians (represented by Jim Thorpe in 1912) and blacks (represented by Jesse Owens in 1936). In the 1968 Summer Olympics in Mexico City, two black American athletes with gold and bronze medals saluted the U.S. national anthem with a "Black Power '' salute that symbolized rejection of assimilation.
The international aspect of the games allowed the United States to define its pluralistic self - image against the monolithic traditions of other nations. American athletes served as cultural ambassadors of American exceptionalism, promoting the melting pot ideology and the image of America as a progressive nation based on middle - class culture. Journalists and other American analysts of the Olympics framed their comments with patriotic nationalism, stressing that the success of U.S. athletes, especially in the high - profile track - and - field events, stemmed not from simple athletic prowess but from the superiority of the civilization that spawned them.
Following the September 11, 2001 terrorist attacks, the 2002 Winter Games in Salt Lake City strongly revived the melting pot image, returning to a bedrock form of American nationalism and patriotism. The reemergence of Olympic melting pot discourse was driven especially by the unprecedented success of African Americans, Mexican Americans, Asian Americans, and Native Americans in events traditionally associated with Europeans and white North Americans, such as speed skating and the bobsled. The 2002 Winter Olympics also showcased American religious freedom and cultural tolerance, highlighting the history of Utah 's large majority population of Mormons, as well as the representation of Muslim Americans and other religious groups on the U.S. Olympic team.
The concept of multiculturalism was preceded by the concept of cultural pluralism, which first emerged in the 1910s and 1920s and became widely popular during the 1940s. Cultural pluralism developed among intellectual circles out of the debates in the United States over how to approach issues of immigration and national identity.
The First World War heightened tensions between Anglo - Americans and German - Americans. The war and the Russian Revolution, which caused a "Red Scare '' in the US, also fanned feelings of xenophobia. During and immediately after the First World War, the concept of the melting pot was equated by Nativists with complete cultural assimilation towards an Anglo - American norm ("Anglo - conformity '') on the part of immigrants, and immigrants who opposed such assimilation were accused of disloyalty to the United States.
The newly popularized concept of the melting pot was frequently equated with "Americanization '', meaning cultural assimilation, by many "old stock '' Americans. In Henry Ford 's Ford English School (established in 1914), the graduation ceremony for immigrant employees involved symbolically stepping off an immigrant ship and passing through the melting pot, entering at one end in costumes designating their nationality and emerging at the other end in identical suits and waving American flags.
Opposition to the absorption of millions of immigrants from Southern and Eastern Europe was especially strong among such popular writers as Madison Grant and Lothrop Stoddard, who believed in the "racial '' superiority of Americans of Northern European descent as members of the "Nordic race '', and therefore demanded immigration restrictions to stop a "degeneration '' of America 's white racial "stock ''. They believed that complete cultural assimilation of the immigrants from Southern and Eastern Europe was not a solution to the problem of immigration because intermarriage with these immigrants would endanger the racial purity of Anglo - America. The controversy over immigration faded away after immigration restrictions were put in place with the enactment of the Johnson - Reed Act in 1924.
In response to the pressure exerted on immigrants to culturally assimilate and also as a reaction against the denigration of the culture of non-Anglo white immigrants by Nativists, intellectuals on the left such as Horace Kallen, in Democracy Versus the Melting - Pot (1915), and Randolph Bourne, in Trans - National America (1916), laid the foundations for the concept of cultural pluralism. This term was coined by Kallen. Randolph Bourne, who objected to Kallen 's emphasis on the inherent value of ethnic and cultural difference, envisioned a "trans - national '' and cosmopolitan America. The concept of cultural pluralism was popularized in the 1940s by John Dewey.
In the United States, where the term melting pot is still commonly used, the ideas of cultural pluralism and multiculturalism have, in some circles, taken precedence over the idea of assimilation. Alternate models where immigrants retain their native cultures such as the "salad bowl '' or the "symphony '' are more often used by sociologists to describe how cultures and ethnicities mix in the United States. Nonetheless, the term assimilation is still used to describe the ways in which immigrants and their descendants adapt, such as by increasingly using the national language of the host society as their first language.
Since the 1960s, much research in sociology and history has disregarded the melting pot theory for describing interethnic relations in the United States and other countries. The theory of multiculturalism offers alternative analogies for ethnic interaction, including the salad bowl theory, or, as it is known in Canada, the cultural mosaic. In the 1990s, political correctness in the United States emphasized that each ethnic and national group has the right to maintain and preserve its cultural distinction and integrity, and that one does not need to assimilate or abandon one 's heritage in order to blend in or merge into the majority Anglo - American society.
Nevertheless, some prominent scholars, such as Samuel P. Huntington in Who Are We? The Challenges to America 's National Identity, have expressed the view that the most accurate explanation for modern - day United States culture and inter-ethnic relations can be found somewhere in a fusion of some of the concepts and ideas contained in the melting pot, assimilation, and Anglo - conformity models. Under this theory, it is asserted that the United States has one of the most homogeneous cultures of any nation in the world. This line of thought holds that this American national culture derived most of its traits and characteristics from early colonial settlers from Britain, Ireland, and Germany. When more recent immigrants from Southern and Eastern Europe brought their various cultures to America at the beginning of the 20th century, they changed the American cultural landscape just very slightly, and, for the most part, assimilated into America 's pre-existing culture which had its origins in Northwestern Europe.
The decision of whether to support a melting - pot or multicultural approach has developed into an issue of much debate within some countries. For example, the French and British governments and populace are currently debating whether Islamic cultural practices and dress conflict with their attempts to form culturally unified countries.
Mexico has seen a variety of cultural influences over the years, and in its history has adopted a mixed assimilationist / multiculturalist policy. Mexico, beginning with the conquest of the Aztecs, had entered a new global empire based on trade and immigration. In the 16th and 17th centuries, waves of Spanish, and to a lesser extent, African and Filipino culture became embedded into the fabric of Mexican culture. It is important to note, however, that from a Mexican standpoint, the immigrants and their culture were no longer considered foreign, but Mexican in their entirety. The food, art, and even heritage were assimilated into a Mexican identity. Upon the independence of Mexico, the country began receiving immigrants from Central Europe, Eastern Europe and the Middle East, again bringing many cultural influences that were quickly labeled as Mexican, unlike in the United States, where other cultures are considered foreign. This assimilation is very evident, even in Mexican society today: for example, banda, a style of music originating in northern Mexico, is simply a Mexican take on Central European music brought by immigrants in the 18th century. Mexico 's thriving beer industry was also the result of German brewers finding refuge in Mexico. Many famous Mexicans, such as Salma Hayek and Carlos Slim, are of Arab descent. The coastal states of Guerrero and Veracruz are inhabited by citizens of African descent. Mexico 's national policy is based on the concept of mestizaje, a word meaning "to mix ''. Immigrants are socially under pressure to adopt a Mexican nationality and become part of the broader culture (speaking Spanish, respecting the Catholic heritage, contributing to society), while contributing useful cultural traits foreign to Mexican society.
As with other areas of new settlement such as Canada, Australia, the United States, Brazil, New Zealand, the United Kingdom and South Africa, Argentina is considered a country of immigrants. When it is considered that Argentina was second only to the United States (which received 27 million immigrants) in the number of immigrants received, ahead even of other areas of newer settlement such as Australia, Brazil, Canada, New Zealand, and the United Kingdom, and that the country was scarcely populated following its independence, the impact of immigration to Argentina becomes evident.
Most Argentines are descended from colonial - era settlers and from 19th and 20th century immigrants from Europe. An estimated 8 % of the population is Mestizo, and a further 4 % of Argentines are of Arab (in Argentina, Arab ethnicity is counted among the white population, as in the US Census) or Asian heritage. In the last national census, based on self - identification, 600,000 Argentines (1.6 % of the population) declared themselves to be Amerindian. Various genetic tests show that, on average, Argentines have 20 to 30 % indigenous ancestry, which leads many who are culturally European to identify as white even though they are genetically mestizo. Most of the 6.2 million European immigrants arriving between 1850 and 1950, regardless of origin, settled in several regions of the country. Due to this large - scale European immigration, Argentina 's population more than doubled, although half of the immigrants ended up returning to Europe or settling in the United States.
The majority of these European immigrants came from Spain and Italy, and to a lesser extent from Germany, France, and Russia. Smaller communities also descend from Switzerland, Wales, Scotland, Poland, Yugoslavia, Czechoslovakia, the Austro - Hungarian Empire, the Ottoman Empire, Ukraine, Denmark, Sweden, Finland, Norway, Belgium, Luxembourg, the Netherlands, Portugal, Romania, Bulgaria, Armenia, Greece, Lithuania, Estonia, Latvia and several other regions.
The Italian population in Argentina arrived mainly from the northern Italian regions of Piedmont, Veneto and Lombardy, and later from Campania and Calabria. Many Argentines bear as a surname the name of an Italian city or place, a street, or the occupation of their immigrant ancestor; these were not necessarily the names the immigrants were born with, but were often adopted when they went through the emigration paperwork on leaving Italy. Spanish immigrants were mainly Galicians and Basques. Millions of immigrants also came from France (notably Béarn and the Northern Basque Country), Germany, Switzerland, Denmark, Sweden, Norway, Ireland, Greece, Portugal, Finland, Russia and the United Kingdom. The Welsh settlement in Patagonia, known as Y Wladfa, began in 1865, mainly along the coast of Chubut Province. In addition to the main colony in Chubut, a smaller colony was set up in Santa Fe and another group settled at Coronel Suárez, southern Buenos Aires Province. Of the 50,000 Patagonians of Welsh descent, about 5,000 are Welsh speakers. The community is centered on the cities of Gaiman, Trelew and Trevelin.
Brazil has long been a melting pot for a wide range of cultures. From colonial times Portuguese Brazilians have favoured assimilation and tolerance for other peoples, and intermarriage was more acceptable in Brazil than in most other European colonies. However, Brazilian society has never been completely free of ethnic strife and exploitation, and some groups have chosen to remain separate from mainstream social life. Brazilians of mainly European descent (Portuguese, Italian, French, German, Austrian, Spanish, Polish, Ukrainian, Russian, Lithuanian, Hungarian etc.) account for more than half the population, although people of mixed ethnic backgrounds form an increasingly larger segment; roughly two - fifths of the total are mulattoes (mulattos; people of mixed African and European ancestry) and mestizos (mestiços, or caboclos; people of mixed European and Indian ancestry). Portuguese are the main European ethnic group in Brazil, and most Brazilians can trace their ancestry to an ethnic Portuguese or a mixed - race Portuguese. Among European descendants, Brazil has the largest Italian diaspora, the second largest German diaspora, as well as other European groups. The country is also home to the largest Japanese diaspora outside Japan, the largest Arab community outside the Arab World and one of the top 10 Jewish populations.
Colombia is a melting pot of races and ethnicities. The population is descended from three racial groups -- Native Americans, blacks, and whites -- that have mingled throughout the nearly 500 years of the country 's history. No official figures were available, since the Colombian government dropped any references to race in the census after 1918, but according to rough estimates in the late 1980s, mestizos (white and Native American mix) constituted approximately 50 % of the population, whites (of predominantly Spanish, Italian, German or French origin) made up 25 %, mulattoes (black - white mix) 14 %, zambos (black and Native American mix) 4 %, blacks (purely or predominantly of African origin) 3 %, and Native Americans 1 %.
The Costa Rican people are a highly syncretic melting pot: since the 16th century the country has been populated by immigrants from across Europe -- mostly Spaniards and Italians, along with many Germans, British, Swedes, Swiss, French and Croats -- as well as black people from Africa and Jamaica, Americans, Chinese, Lebanese and Latin Americans, who have intermixed over time with the large native populations, shaping the country 's modern ethnic composition.
Nowadays a great part of Costa Rica 's inhabitants are considered white (83.6 %), with minority groups of mulattoes (6.7 %), indigenous people (2.4 %), Chinese (2 %) and blacks (1.1 %). Also, over 9 % of the total population is foreign - born (especially from Nicaragua).
South Asia has a long history of inter-ethnic marriage dating back to ancient times. Various groups of people have been intermarrying for millennia in South Asia, including speakers of Dravidian, Indo - Aryan, Austroasiatic, and Tibeto - Burman languages. Greek, Hun, Persian, Arab, Turkic, Mongol (Mughal), and European women were taken as wives by local Indian men, and vice versa. On account of such diverse influences, South Asia appears, in a nutshell, to be a cradle of human civilization. Despite invasions in its recent history, it has succeeded in organically assimilating incoming influences, blunting their drive for imperialistic hegemony and maintaining its strong roots and culture. These invasions, however, brought their own racial mixing between diverse populations, and South Asia is considered an exemplary "melting pot '' (and not a "salad bowl '') by many geneticists for exactly this reason. However, South Asian society has never been completely free of ethnic strife and exploitation, and some groups have chosen to remain separate from mainstream social life. The divisiveness of the caste system in India has permeated every facet of the society. Ethnic conflicts in Pakistan between the Baloch, Pashtuns, Punjabis, and Sindhis are other impediments to the melting pot thesis.
Afghanistan seems to be in the process of becoming a melting pot, as customs specific to particular ethnic groups are becoming summarily perceived as national traits of Afghanistan. The term Afghan was originally used to refer to the Pashtuns in the Middle Ages, and the intention behind the creation of the Afghan state was originally to build a Pashtun state, but this policy later changed, leading to the inclusion of non-Pashtuns in the state as Afghans. Today in Afghanistan, a cultural melting pot is developing, in which different ethnic groups are mixing together to build a new Afghan ethnicity composed of the preceding ethnicities of Afghanistan, ultimately replacing the old Pashtun identity which once stood for Afghan. With the growing use of Persian, many ethnic groups, including de-tribalized Pashtuns, are adopting Dari Persian as their new native tongue. Many ethnic groups in Afghanistan tolerate each other, although the Hazara -- Pashtun conflict was notable, and is often characterised as a Shia - Sunni conflict rather than an ethnic one, as it was carried out by the Taliban. The Taliban, who are mostly ethnically Pashtun, have spurred anti-Pashtun sentiment among non-Pashtun Afghans. Pashtun -- Tajik rivalries have lingered, but are much milder. Reasons for this antipathy are criticism of Tajiks by Pashtuns (for either their non-tribal culture or their cultural rivalry in Afghanistan) and criticism of the Taliban (mostly composed of Pashtuns) by Tajiks. There have been rivalries between Pashtuns and Uzbeks as well, likely very similar to the Kyrgyzstan crisis, with Pashtuns taking the place of the Kyrgyz (having a similar nomadic culture) and rivalling the Tajiks and Uzbeks (of sedentary culture), despite all being Sunni Muslims.
In the early years of the state of Israel, the term melting pot (כור היתוך), also known as "Ingathering of the Exiles '' (קיבוץ גלויות), was not a description of a process, but an official governmental doctrine of assimilating the Jewish immigrants that originally came from varying cultures (see Jewish ethnic divisions). This was performed on several levels, such as educating the younger generation (with the parents not having the final say) and (to mention an anecdotal one) encouraging and sometimes forcing the new citizens to adopt a Hebrew name.
Activists such as the Iraq - born Ella Shohat argue that an elite which developed in the early 20th century, out of the earlier - arrived Zionist Pioneers of the Second and Third Aliyas (immigration waves) -- and which gained a dominant position in the Yishuv (pre-state community) from the 1930s onward -- had formulated a new Hebrew culture, based on the values of Socialist Zionism, and imposed it on all later arrivals, at the cost of suppressing and erasing these later immigrants ' original culture.
Proponents of the Melting Pot policy asserted that it applied to all newcomers to Israel equally; specifically, that Eastern European Jews were pressured to discard their Yiddish - based culture as ruthlessly as Mizrahi Jews were pressured to give up the culture which they developed during centuries of life in Arab and Muslim countries. Critics respond, however, that a cultural change effected by a struggle within the Ashkenazi - East European community, with younger people voluntarily discarding their ancestral culture and formulating a new one, is not parallel to the subsequent exporting and imposing of this new culture on others, who had no part in formulating it. Also, it was asserted that extirpating the Yiddish culture had been in itself an act of oppression only compounding what was done to the Mizrahi immigrants.
Today the reaction to this doctrine is ambivalent; some say that it was a necessary measure in the founding years, while others claim that it amounted to cultural oppression. Others argue that the melting pot policy did not achieve its declared target: for example, persons born in Israel are more similar from an economic point of view to their parents than to the rest of the population. The policy is generally not practised today, as there is less need for it -- the mass immigration waves of Israel 's founding era have declined. Nevertheless, one fifth of Israel 's current Jewish population immigrated from the former Soviet Union in the last two decades. The Jewish population includes other minorities such as Haredi Jews; furthermore, 20 % of Israel 's population is Arab. These factors, as well as others, have contributed to the rise of pluralism as a common principle in recent years.
The expansion of the Grand Duchy of Moscow and later of the Russian Empire throughout 15th to 20th centuries created a unique melting pot. Though the majority of Russians had Slavic ancestry, different ethnicities were assimilated into the Russian melting pot through the period of expansion. Assimilation was a way for ethnic minorities to advance their standing within the Russian society and state -- as individuals or groups. It required adoption of Russian as a day - to - day language and Orthodox Christianity as religion of choice. The Roman Catholics (as in Poland and Lithuania) generally resisted assimilation. Throughout the centuries of eastward expansion of Russia Finno - Ugric and Turkic peoples were assimilated and included into the emerging Russian nation. This includes Mordvin, Udmurt, Mari, Tatar, Chuvash, Bashkir, and others. Surnames of many of Russia 's nobility (including Suvorov, Kutuzov, Yusupov, etc.) suggest their Turkic origin. Groups of later, 18th - and 19th - century migrants to Russia, from Europe (Germans, French, Italians, Poles, Serbs, Bulgarians, Greeks, Jews, etc.) or the Caucasus (Georgians, Armenians, Ossetians, Chechens, Azeris and Turks among them) also assimilated within several generations after settling among Russians in the expanding Russian Empire.
The Soviet people (Russian: Советский народ) was an ideological epithet for the population of the Soviet Union. The Soviet government promoted the doctrine of assimilating all peoples living in the USSR into one Soviet people, according to the Marxist principle of the fraternity of peoples.
The effort lasted for the entire history of the Soviet Union, but did not succeed, as evidenced by developments in most national cultures in the territory after the dissolution of the Soviet Union in 1991.
The term has been used to describe a number of countries in Southeast Asia. Given the region 's location and importance to trade routes between China and the Western world, certain countries in the region have become ethnically diverse.
In the pre-Spanish era the Philippines was a trading nexus of various cultures and eventually became a melting pot of different nations. These primarily consisted of Chinese, Indian and Arab cultures, followed by the Japanese, Koreans, and various Middle Eastern peoples, as well as neighbouring Southeast Asian cultures. These cultures and peoples mixed with indigenous tribes, mainly of Austronesian descent (like the Indonesians, Malays and Bruneians) and the Negritos. The result was a mix of cultures and ideals. This melting pot of culture continued with the arrival of Europeans, who mixed their Western culture with that of the nation. The Spanish colonized the Philippines for more than three centuries (with a brief British occupation in the 18th century), and during the early 20th century the islands were annexed by the Americans and later occupied by the Japanese during World War II. In modern times, the Philippines has become home to many retired Americans, Japanese expatriates and Korean students. It continues to uphold its status as a melting pot state today.
Man is the most composite of all creatures... Well, as in the old burning of the Temple at Corinth, by the melting and intermixture of silver and gold and other metals a new compound more precious than any, called Corinthian brass, was formed; so in this continent -- asylum of all nations -- the energy of Irish, Germans, Swedes, Poles, and Cossacks, and all the European tribes -- of the Africans, and of the Polynesians -- will construct a new race, a new religion, a new state, a new literature, which will be as vigorous as the new Europe which came out of the smelting - pot of the Dark Ages, or that which earlier emerged from the Pelasgic and Etruscan barbarism.
No reverberatory effect of The Great War has caused American public opinion more solicitude than the failure of the ' melting - pot. ' The discovery of diverse nationalistic feelings among our great alien population has come to most people as an intense shock.
Blacks, Chinese, Puerto Ricans, etcetera, could not melt into the pot. They could be used as wood to produce the fire for the pot, but they could not be used as material to be melted into the pot.
|
where did russia come in 2014 world cup | 2014 FIFA World Cup group H - wikipedia
Group H of the 2014 FIFA World Cup consisted of Belgium, Algeria, Russia and South Korea. Play began on 17 June and ended on 26 June 2014.
The two teams had met in two previous matches, both friendlies, most recently in 2003, won 3 -- 1 by Belgium.
Algeria took a one - goal lead in the first half after Sofiane Feghouli converted a penalty kick, awarded for a foul on him by Jan Vertonghen. Belgium came back with two goals in the second half, both scored by substitutes. The equaliser was scored by Marouane Fellaini, heading in a cross from the left by Kevin De Bruyne, followed by the game winner scored by Dries Mertens from a pass by Eden Hazard.
Feghouli 's goal snapped Algeria 's 506 - minute World Cup scoreless streak stretching back to 1986, second place at the time to the record of 517 minutes between 1930 and 1990 held by Bolivia.
Man of the Match: Kevin De Bruyne (Belgium)
Assistant referees: Marvin Torrentera (Mexico) Marcos Quintero (Mexico) Fourth official: Alireza Faghani (Iran) Fifth official: Hassan Kamranifar (Iran)
The two teams had met in one previous match, in a friendly in 2013.
After a goalless first half, the two teams traded goals by substitutes in the second half as the match finished 1 -- 1. First, Han Kook - young passed to Lee Keun - ho, and his long range shot was spilled by Russian goalkeeper Igor Akinfeev into the net. Russia equalised after Alan Dzagoev 's shot was parried by South Korean goalkeeper Jung Sung - ryong, the clearance hit Andrey Yeshchenko, and Aleksandr Kerzhakov scored from close range.
Man of the Match: Son Heung - min (South Korea)
Assistant referees: Hernán Maidana (Argentina) Juan Pablo Belatti (Argentina) Fourth official: Roberto Moreno (Panama) Fifth official: Eric Boria (United States)
The two teams had met in eight previous matches (including matches involving the Soviet Union), including four times in the FIFA World Cup (1970, group stage: Belgium 1 -- 4 Soviet Union; 1982, second group stage: Belgium 0 -- 1 Soviet Union; 1986, round of 16: Belgium 4 -- 3 (aet) Soviet Union; 2002, group stage: Belgium 3 -- 2 Russia).
Aleksandr Kokorin had Russia 's best chance in the first half, heading wide from six yards. Late in the second half, Belgian substitute Kevin Mirallas hit the post with his free kick, but Belgium did find the game - winner through another substitute, Divock Origi scoring from 8 yards out after Eden Hazard 's cut - back from the left.
Man of the Match: Eden Hazard (Belgium)
Assistant referees: Mark Borsch (Germany) Stefan Lupp (Germany) Fourth official: Carlos Vera (Ecuador) Fifth official: Byron Romero (Ecuador)
The two teams had met in two previous matches, both friendlies played in 1985.
Algeria, which needed at least a point to stay alive in the competition, scored three goals in the first half to take a comfortable lead. First, Islam Slimani sped past two South Korean defenders to receive Carl Medjani 's long pass and slot home with his left foot past the advancing goalkeeper. Two minutes later, Rafik Halliche headed in Abdelmoumene Djabou 's corner from the left. Djabou scored himself later after he received a pass from Slimani, shooting low with his left foot from twelve yards out. Early in the second half, Son Heung - min controlled a long pass from Ki Sung - yueng to shoot with his left foot between the goalkeeper 's legs and reduce the deficit, but Yacine Brahimi restored Algeria 's three - goal lead after a one - two with Sofiane Feghouli to side foot home from inside the penalty area with his right foot. Koo Ja - cheol scored South Korea 's second goal after a pass from Lee Keun - ho from the left, but Algeria held on for its third ever World Cup victory, and its first since 24 June 1982.
Algeria became the first African team to score four goals in a World Cup match.
Man of the Match: Islam Slimani (Algeria)
Assistant referees: Eduardo Díaz (Colombia) Christian Lescano (Ecuador) Fourth official: Alireza Faghani (Iran) Fifth official: Hassan Kamranifar (Iran)
The two teams had met in three previous matches, including twice in the FIFA World Cup group stage (1990: South Korea 0 -- 2 Belgium; 1998: South Korea 1 -- 1 Belgium).
Belgium, which had already qualified for the knockout stage but still needed a point to finish as group winners, had Steven Defour sent off for a reckless tackle on Kim Shin - wook at the end of the first half. Belgium scored the only goal of the match in the second half, when substitute Divock Origi 's shot was parried by South Korea goalkeeper Kim Seung - gyu and Jan Vertonghen converted the rebound with his left foot.
Belgium 's win ensured that they topped their group, while South Korea, which had to win to have any chance for qualification, were eliminated.
South Korea 's elimination meant that all four Asian representatives finished last in their group with a combined record of zero wins, three draws and nine defeats, the worst showing by the Asian Football Confederation since the 1990 World Cup.
Man of the Match: Jan Vertonghen (Belgium)
Assistant referees: Matthew Cream (Australia) Hakan Anaz (Australia) Fourth official: Víctor Hugo Carrillo (Peru) Fifth official: Rodney Aquino (Paraguay)
The two teams had met in three previous matches, all during the era of the Soviet Union.
Aleksandr Kokorin opened the scoring for Russia, which had to win to have a chance of qualifying for the knockout stage, in the 6th minute when he scored with a header after a cross from Dmitri Kombarov from the left. Algeria equalised in the 60th minute when Islam Slimani scored with a header at the back post after a free kick from the left by Yacine Brahimi which was missed by Russian goalkeeper Igor Akinfeev. Algeria held on for the draw, and as South Korea lost to Belgium in the other match played at the same time, Algeria finished as group runners - up and reached the second round for the first time in their history (after unsuccessful campaigns in 1982, 1986, and 2010), while Russia failed to advance out of the group stage in all three tournaments since the break - up of the Soviet Union.
For Algeria 's goal, television replays showed that Akinfeev had a green laser light shining in his face during the play. After the match the Algerian Football Federation was fined 50,000 CHF by FIFA for the use of laser pointers, a prohibited item in the stadium according to FIFA Stadium Safety and Security Regulations, and other violations of the rules by Algerian fans.
With fellow African representative Nigeria also reaching the knockout stage earlier, this was the first time there were two teams from the Confederation of African Football in the knockout stage of a World Cup.
Man of the Match: Islam Slimani (Algeria)
Assistant referees: Bahattin Duran (Turkey) Tarık Ongun (Turkey) Fourth official: Joel Aguilar (El Salvador) Fifth official: Juan Zumba (El Salvador)
|
when did germany invade great britain in ww2 | Battle of Britain - wikipedia
The Battle of Britain (German: Luftschlacht um England, literally "the air battle for England '') was a military campaign of the Second World War, in which the Royal Air Force (RAF) defended the United Kingdom (UK) against large - scale attacks by the German Air Force (Luftwaffe). It has been described as the first major military campaign fought entirely by air forces.
The British officially recognise the battle 's duration as being from 10 July until 31 October 1940, which overlaps the period of large - scale night attacks known as the Blitz, that lasted from 7 September 1940 to 11 May 1941. German historians do not accept this subdivision and regard the battle as a single campaign lasting from July 1940 to June 1941, including the Blitz.
The primary objective of the Nazi German forces was to compel Britain to agree to a negotiated peace settlement. In July 1940 the air and sea blockade began, with the Luftwaffe mainly targeting coastal - shipping convoys, ports and shipping centres, such as Portsmouth. On 1 August, the Luftwaffe was directed to achieve air superiority over the RAF with the aim of incapacitating RAF Fighter Command; 12 days later, it shifted the attacks to RAF airfields and infrastructure. As the battle progressed, the Luftwaffe also targeted factories involved in aircraft production and strategic infrastructure. Eventually it employed terror bombing on areas of political significance and on civilians.
The Germans had swiftly overwhelmed continental countries, and Britain now appeared to face a threat of invasion by sea, but the German high command knew the difficulties of an unprecedented seaborne attack and its impracticality while the Royal Navy commanded the seas. In early July 1940 the German High Command began planning the invasion of the Soviet Union. On 16 July Hitler ordered the preparation of Operation Sea Lion as a potential amphibious and airborne assault on Britain, to follow once the Luftwaffe had air superiority over the UK. In September RAF Bomber Command night raids disrupted the German preparation of converted barges, and the Luftwaffe failure to overwhelm the RAF forced Hitler to postpone and eventually cancel Operation Sea Lion.
Nazi Germany proved unable to sustain daylight raids, but their continued night - bombing operations on Britain became known as the Blitz. Steven Bungay regards the German failure to destroy Britain 's air defences to force an armistice (or even outright surrender) as the first major defeat of Nazi Germany in World War II and a crucial turning point in the conflict.
The Battle of Britain takes its name from a speech by Winston Churchill to the House of Commons on 18 June: "What General Weygand has called The Battle of France is over. The Battle of Britain is about to begin. ''
Strategic bombing during World War I introduced air attacks intended to panic civilian targets and led in 1918 to the amalgamation of British army and navy air services into the Royal Air Force. Its first Chief of the Air Staff Hugh Trenchard was among the military strategists in the 1920s like Giulio Douhet who saw air warfare as a new way to overcome the stalemate of trench warfare. Interception was nearly impossible with fighter planes no faster than bombers. Their view (expressed vividly in 1932) was that the bomber will always get through, and the only defence was a deterrent bomber force capable of matching retaliation. Predictions were made that a bomber offensive would quickly cause thousands of deaths and civilian hysteria leading to capitulation, but widespread pacifism contributed to a reluctance to provide resources.
Germany was forbidden to have a military air force by the 1919 Treaty of Versailles, but developed air crew training in civilian and sport flying. Following a 1923 memorandum, the Deutsche Luft Hansa airline developed designs which were claimed to be for passengers and freight, but which could in fact be readily adapted into bombers, including the Junkers Ju 52. In 1926 the secret Lipetsk fighter - pilot school began operating. Erhard Milch organised rapid expansion, and following the 1933 Nazi seizure of power his subordinate Robert Knauss formulated a deterrence theory incorporating Douhet 's ideas and Tirpitz 's "risk theory '', which proposed a fleet of heavy bombers to deter a preventive attack by France and Poland before Germany could fully rearm. A winter 1933 -- 34 war game indicated a need for fighters and anti-aircraft protection as well as bombers. On 1 March 1935 the Luftwaffe was formally announced, with Walther Wever as Chief of Staff. The 1935 Luftwaffe doctrine for "Conduct of the Air War '' (Die Luftkriegführung) set air power within the overall military strategy, with critical tasks of attaining (local and temporary) air superiority and providing battlefield support for army and naval forces. Strategic bombing of industries and transport could be decisive longer term options, dependent on opportunity or preparations by the army and navy, to overcome a stalemate or used when only destruction of the enemy 's economy would be conclusive. The list excluded bombing civilians to destroy homes or undermine morale, as that was considered a waste of strategic effort, but the doctrine allowed revenge attacks if German civilians were bombed. A revised edition was issued in 1940, and the continuing central principle of Luftwaffe doctrine was that destruction of enemy armed forces was of primary importance.
The RAF responded to Luftwaffe developments with its 1934 Expansion Plan A rearmament scheme, and in 1936 it was restructured into Bomber Command, Coastal Command, Training Command and Fighter Command. The latter was under Hugh Dowding, who opposed the doctrine that bombers were unstoppable: the invention of radar at that time could allow early detection, and prototype monoplane fighters were significantly faster. Priorities were disputed, but in December 1937 the Minister in charge of defence coordination Sir Thomas Inskip decided in Dowding 's favour, that "The role of our air force is not an early knock - out blow '' but rather was "to prevent the Germans from knocking us out '' and fighter squadrons were just as necessary as bomber squadrons.
In the Spanish Civil War, the Luftwaffe in the Condor Legion tried out air fighting tactics and their new aeroplanes. Wolfram von Richthofen became an exponent of air power providing ground support to other services. The difficulty of accurately hitting targets prompted Ernst Udet to require that all new bombers had to be dive bombers, and led to the development of the Knickebein system for night time navigation. Priority was given to producing large numbers of smaller aeroplanes, and plans for a long range four engined strategic bomber were delayed.
The early stages of World War II saw successful German invasions on the continent aided decisively by the air power of the Luftwaffe, which was able to establish tactical air superiority with great efficiency. The speed with which German forces defeated most of the defending armies in Norway in early 1940 created a significant political crisis in Britain. In early May 1940, the Norway Debate questioned the fitness for office of the British Prime Minister Neville Chamberlain. On 10 May, the same day Winston Churchill became British Prime Minister, the Germans initiated the Battle of France with an aggressive invasion of French territory. RAF Fighter Command was desperately short of trained pilots and aircraft, but despite the objections of its commander Hugh Dowding that the diversion of his forces would leave home defences under - strength, Churchill sent fighter squadrons, the Air Component of the British Expeditionary Force, to support operations in France, where the RAF suffered heavy losses.
After the evacuation of British and French soldiers from Dunkirk and the French surrender on 22 June 1940, Hitler mainly focused his energies on the possibility of invading the Soviet Union in the belief that the British, defeated on the continent and without European allies, would quickly come to terms. The Germans were so convinced of an imminent armistice that they began constructing street decorations for the homecoming parades of victorious troops. Although the British Foreign Secretary, Lord Halifax, and certain elements of the British public favoured a negotiated peace with an ascendant Germany, Churchill and a majority of his Cabinet refused to consider an armistice. Instead, Churchill used his skilful rhetoric to harden public opinion against capitulation and to prepare the British for a long war.
The Battle of Britain has the unusual distinction that it gained its name before being fought. The name is derived from the "This was their finest hour '' speech delivered by Winston Churchill in the House of Commons on 18 June, more than three weeks prior to the generally accepted date for the start of the battle:
... What General Weygand has called The Battle of France is over. The battle of Britain is about to begin. Upon this battle depends the survival of Christian civilisation. Upon it depends our own British life and the long continuity of our institutions and our Empire. The whole fury and might of the enemy must very soon be turned on us. Hitler knows that he will have to break us in this island or lose the war. If we can stand up to him, all Europe may be free and the life of the world may move forward into broad, sunlit uplands. But if we fail, then the whole world, including the United States, including all that we have known and cared for, will sink into the abyss of a new Dark Age made more sinister, and perhaps more protracted, by the lights of a perverted science. Let us therefore brace ourselves to our duties, and so bear ourselves that, if the British Empire and its Commonwealth last for a thousand years, men will still say, "This was their finest hour ''.
From the outset of his rise to power, Hitler expressed admiration for Britain, and throughout the Battle period he sought neutrality or a peace treaty with Britain. In a secret conference on 23 May 1939, Hitler set out his rather contradictory strategy that an attack on Poland was essential and "will only be successful if the Western Powers keep out of it. If this is impossible, then it will be better to attack in the West and to settle Poland at the same time '' with a surprise attack. "If Holland and Belgium are successfully occupied and held, and if France is also defeated, the fundamental conditions for a successful war against England will have been secured. England can then be blockaded from Western France at close quarters by the Air Force, while the Navy with its submarines extend the range of the blockade. ''
When war commenced, Hitler and the OKW (Oberkommando der Wehrmacht or "High Command of the Armed Forces '') issued a series of Directives ordering planning and stating strategic objectives. "Directive No. 1 for the Conduct of the War '' dated 31 August 1939 instructed the invasion of Poland on 1 September as planned. Potentially, Luftwaffe "operations against England '' were to "dislocate English imports, the armaments industry, and the transport of troops to France. Any favourable opportunity of an effective attack on concentrated units of the English Navy, particularly on battleships or aircraft carriers, will be exploited. The decision regarding attacks on London is reserved to me. Attacks on the English homeland are to be prepared, bearing in mind that inconclusive results with insufficient forces are to be avoided in all circumstances. '' Both France and the UK declared war on Germany; on 9 October Hitler 's "Directive No. 6 '' planned the offensive to defeat these allies and "win as much territory as possible in Holland, Belgium, and northern France to serve as a base for the successful prosecution of the air and sea war against England ''. On 29 November OKW "Directive No. 9 -- Instructions For Warfare Against The Economy Of The Enemy '' stated that once this coastline had been secured, the Luftwaffe together with the Kriegsmarine (German Navy) was to blockade UK ports with sea mines, attack shipping and warships, and make air attacks on shore installations and industrial production. This directive remained in force in the first phase of the Battle of Britain. It was reinforced on 24 May during the Battle of France by "Directive No. 13 '' which authorised the Luftwaffe "to attack the English homeland in the fullest manner, as soon as sufficient forces are available. This attack will be opened by an annihilating reprisal for English attacks on the Ruhr Basin. ''
By the end of June 1940, Germany had defeated Britain 's allies on the continent, and on 30 June the OKW Chief of Staff Alfred Jodl issued his review of options to increase pressure on Britain to agree to a negotiated peace. The first priority was to eliminate the RAF and gain air supremacy. Intensified air attacks against shipping and the economy could affect food supplies and civilian morale in the long term. Reprisal attacks of terror bombing had the potential to cause quicker capitulation, but the effect on morale was uncertain. Once the Luftwaffe had control of the air, and the UK economy had been weakened, an invasion would be a last resort or a final strike ("Todesstoss '') after England had already been conquered, but could have a quick result. On the same day, the Luftwaffe Commander - in - Chief Hermann Göring issued his operational directive; to destroy the RAF, thus protecting German industry, and also to block overseas supplies to Britain. The German Supreme Command argued over the practicality of these options.
In "Directive No. 16 -- On preparations for a landing operation against England '' on 16 July, Hitler required readiness by mid-August for the possibility of an invasion he called Operation Sea Lion, unless the British agreed to negotiations. The Luftwaffe reported that it would be ready to launch its major attack early in August. The Kriegsmarine Commander - in - Chief, Grand Admiral Erich Raeder, continued to highlight the impracticality of these plans, and said sea invasion could not take place before the Spring of 1941. Hitler now argued that Britain was holding out in hope of assistance from Russia, and the Soviet Union was to be invaded by mid 1941. Göring met his air fleet commanders, and on 24 July issued "Tasks and Goals '' of firstly gaining air supremacy, secondly protecting invasion forces and attacking the Royal Navy 's ships. Thirdly, they were to blockade imports, bombing harbours and stores of supplies.
Hitler 's "Directive No. 17 -- For the conduct of air and sea warfare against England '' issued on 1 August attempted to keep all the options open. The Luftwaffe 's Adlertag campaign was to start around 5 August, subject to weather, with the aim of gaining air superiority over southern England as a necessary precondition of invasion, to give credibility to the threat and give Hitler the option of ordering the invasion. The intention was to incapacitate the RAF so much that the UK would feel open to air attack, and would begin peace negotiations. It was also to isolate the UK and damage war production, beginning an effective blockade. Following severe Luftwaffe losses, Hitler agreed at a 14 September OKW conference that the air campaign was to intensify regardless of invasion plans. On 16 September, Göring gave the order for this change in strategy, to the first independent strategic bombing campaign.
Adolf Hitler 's Mein Kampf of 1923 mostly set out his hatreds: he only admired ordinary German World War I soldiers and Britain, which he saw as an ally against communism. In 1935 Hermann Göring welcomed news that Britain as a potential ally was rearming. In 1936 he promised assistance to defend the British Empire, asking only a free hand in Eastern Europe, and repeated this to Lord Halifax in 1937. That year, von Ribbentrop met Churchill with a similar proposal; when rebuffed, he told Churchill that interference with German domination would mean war. To Hitler 's great annoyance, all his diplomacy failed to stop Britain from declaring war when he invaded Poland. During the fall of France, he repeatedly discussed peace efforts with his generals.
When Churchill came to power, there was still wide support for Halifax, who as Foreign Secretary openly argued for peace negotiations in the tradition of British diplomacy, to secure British independence without war. On 20 May, Halifax secretly requested a Swedish businessman to make contact with Göring to open negotiations. Shortly afterwards, in the May 1940 War Cabinet Crisis, Halifax argued for negotiations involving the Italians, but this was rejected by Churchill with majority support. An approach made through the Swedish ambassador on 22 June was reported to Hitler, making peace negotiations seem feasible. Throughout July, as the battle started, the Germans made wider attempts to find a diplomatic solution. On 2 July, the day the armed forces were asked to start preliminary planning for an invasion, Hitler got von Ribbentrop to draft a speech offering peace negotiations. On 19 July Hitler made this speech to the German Parliament in Berlin, appealing "to reason and common sense '', and said he could "see no reason why this war should go on ''. His sombre conclusion was received in silence, but he did not suggest negotiations and this was effectively an ultimatum which was rejected by the British government. Halifax kept trying to arrange peace until he was sent to Washington in December as ambassador, and in January 1941 Hitler expressed continued interest in negotiating peace with Britain.
A May 1939 planning exercise by Luftflotte 3 found that the Luftwaffe lacked the means to do much damage to Britain 's war economy beyond laying naval mines. The Head of Luftwaffe intelligence Joseph "Beppo '' Schmid presented a report on 22 November 1939, stating that "Of all Germany 's possible enemies, Britain is the most dangerous. '' This "Proposal for the Conduct of Air Warfare '' argued for a counter to the British blockade and said "Key is to paralyse the British trade ''. Instead of the Wehrmacht attacking the French, the Luftwaffe with naval assistance was to block imports to Britain and attack seaports. "Should the enemy resort to terror measures -- for example, to attack our towns in western Germany '' they could retaliate by bombing industrial centres and London. Parts of this appeared on 29 November in "Directive No. 9 '' as future actions once the coast had been conquered. On 24 May 1940 "Directive No. 13 '' authorised attacks on the blockade targets, as well as retaliation for RAF bombing of industrial targets in the Ruhr.
After the defeat of France the High Command (OKW) felt they had won the war, and some more pressure would persuade Britain. On 30 June the OKW Chief of Staff Alfred Jodl issued his paper setting out options: the first was to increase attacks on shipping, economic targets and the RAF: air attacks and food shortages were expected to break morale and lead to capitulation. Destruction of the RAF was the first priority, and invasion would be a last resort. Hermann Göring 's operational directive issued the same day ordered destruction of the RAF to clear the way for attacks cutting off seaborne supplies to Britain. It made no mention of invasion.
In November 1939, the OKW reviewed the potential for an air - and seaborne invasion of Britain: the Kriegsmarine (German Navy) was faced with the threat the Royal Navy 's larger Home Fleet posed to a crossing of the English Channel, and together with the German Army viewed control of airspace as a necessary precondition. The Luftwaffe said invasion could only be "the final act in an already victorious war. ''
Hitler first discussed the idea at a 21 May 1940 meeting with Grand Admiral Erich Raeder, who stressed the difficulties and his own preference for a blockade. OKW Chief of Staff Alfred Jodl 's 30 June report described invasion as a last resort once the British economy had been damaged and the Luftwaffe had full air superiority. On 2 July, OKW requested preliminary plans. In Britain, Churchill described "the great invasion scare '' as "serving a very useful purpose '' by "keeping every man and woman tuned to a high pitch of readiness ''. On 10 July, he advised the War Cabinet that invasion could be ignored, as it "would be a most hazardous and suicidal operation ''.
On 11 July, Hitler agreed with Raeder that invasion would be a last resort, and the Luftwaffe advised that gaining air superiority would take 14 to 28 days. Hitler met his army chiefs, von Brauchitsch and Halder, who presented detailed plans on the assumption that the navy would provide safe transport. Hitler showed no interest in the details, but on 16 July he issued Directive No. 16 ordering preparations for Operation Sea Lion.
The navy insisted on a narrow beachhead and an extended period for landing troops; the army rejected these plans: the Luftwaffe could begin an air attack in August. Hitler held a meeting of his army and navy chiefs on 31 July. The navy said 22 September was the earliest possible date, and proposed postponement until the spring, but Hitler preferred September. He then told von Brauchitsch and Halder that he would decide on the landing operation eight to fourteen days after the air attack began. On 1 August he issued Directive No. 17 for intensified air and sea warfare, to begin with Adlertag on or after 5 August subject to weather, keeping options open for negotiated peace or blockade and siege.
Under the continuing influence of the 1935 "Conduct of the Air War '' doctrine, the main focus of the Luftwaffe command (including Göring) was in concentrating attacks to destroy enemy armed forces on the battlefield, and "blitzkrieg '' close air support of the army succeeded brilliantly. They reserved strategic bombing for a stalemate situation or revenge attacks, but doubted if this could be decisive on its own and regarded bombing civilians to destroy homes or undermine morale as a waste of strategic effort.
The defeat of France in June 1940 introduced the prospect for the first time of independent air action against Britain. A July Fliegercorps I paper asserted that Germany was by definition an air power: "Its chief weapon against England is the Air Force, then the Navy, followed by the landing forces and the Army. '' In 1940, the Luftwaffe would undertake a "strategic offensive... on its own and independent of the other services '', according to an April 1944 German account of their military mission. Göring was convinced that strategic bombing could win objectives which were beyond the army and navy, and gain political advantages in the Third Reich for the Luftwaffe and himself. He expected air warfare to decisively force Britain to negotiate, as all in the OKW hoped, and the Luftwaffe took little interest in planning to support an invasion.
The Luftwaffe faced a more capable opponent than any it had previously met: a sizeable, highly coordinated, well - supplied, modern air force.
The Luftwaffe 's Messerschmitt Bf 109E and Bf 110C fought against the RAF 's workhorse Hurricane Mk I and the less numerous Spitfire Mk I; Hurricanes outnumbered Spitfires in RAF Fighter Command by about 2: 1 when war broke out. The Bf 109E had a better climb rate and was up to 40 mph faster in level flight than the Rotol (constant speed propellor) equipped Hurricane Mk I, depending on altitude. The speed and climb disparity with the original non-Rotol Hurricane was even greater. By the end of spring 1940, all RAF Spitfire and Hurricane fighter squadrons converted to 100 octane aviation fuel, which allowed their Merlin engines to generate significantly more power and an approximately 30 mph increase in speed at low altitudes through the use of an Emergency Boost Override. In September 1940, the more powerful Mk IIa series 1 Hurricanes started entering service in small numbers. This version was capable of a maximum speed of 342 mph (550 km / h), some 20 mph more than the original (non-Rotol) Mk I, though it was still 15 to 20 mph slower than a Bf 109 (depending on altitude).
The performance of the Spitfire over Dunkirk came as a surprise to the Jagdwaffe, although the German pilots retained a strong belief that the 109 was the superior fighter. The British fighters were equipped with eight Browning. 303 (7.7 mm) machine guns, while most Bf 109Es had two 7.92 mm machine guns supplemented by two 20mm cannons. The latter was much more effective than the. 303; during the Battle it was not unknown for damaged German bombers to limp home with up to two hundred. 303 hits. At some altitudes, the Bf 109 could outclimb the British fighter. It could also engage in vertical - plane negative - g manoeuvres without the engine cutting out because its DB 601 engine used fuel injection; this allowed the 109 to dive away from attackers more readily than the carburettor - equipped Merlin. On the other hand, the Bf 109E had a much larger turning circle than its two foes. In general, though, as Alfred Price noted in The Spitfire Story:
... the differences between the Spitfire and the Me 109 in performance and handling were only marginal, and in a combat they were almost always surmounted by tactical considerations of which side had seen the other first, which had the advantage of sun, altitude, numbers, pilot ability, tactical situation, tactical co-ordination, amount of fuel remaining, etc.
The Bf 109E was also used as a Jabo (jagdbomber, fighter - bomber) -- the E-4 / B and E-7 models could carry a 250 kg bomb underneath the fuselage, the later model arriving during the battle. The Bf 109, unlike the Stuka, could fight on equal terms with RAF fighters after releasing its ordnance.
At the start of the battle, the twin - engined Messerschmitt Bf 110C long range Zerstörer ("Destroyer '') was also expected to engage in air - to - air combat while escorting the Luftwaffe bomber fleet. Although the 110 was faster than the Hurricane and almost as fast as the Spitfire, its lack of manoeuvrability and acceleration meant that it was a failure as a long - range escort fighter. On 13 and 15 August, thirteen and thirty aircraft were lost, the equivalent of an entire Gruppe, and the type 's worst losses during the campaign. This trend continued with a further eight and fifteen lost on 16 and 17 August. Göring ordered the Bf 110 units to operate "where the range of the single - engined machines were not sufficient ''.
The most successful role of the Bf 110 during the battle was as a Schnellbomber (fast bomber). The Bf 110 usually used a shallow dive to bomb the target and escape at high speed. One unit, Erprobungsgruppe 210 -- initially formed as the service test unit (Erprobungskommando) for the emerging successor to the 110, the Me 210 -- proved that the Bf 110 could still be used to good effect in attacking small or "pinpoint '' targets.
The RAF 's Boulton Paul Defiant had some initial success over Dunkirk because of its resemblance to the Hurricane; Luftwaffe fighters attacking from the rear were surprised by its unusual gun turret. However, during the Battle of Britain, this single - engined two - seater proved hopelessly outclassed. For various reasons, the Defiant lacked any form of forward - firing armament, and the heavy turret and second crewman meant it could not outrun or outmanoeuvre either the Bf 109 or Bf 110. By the end of August, after disastrous losses, the aircraft was withdrawn from daylight service.
The Luftwaffe 's primary bombers were the Heinkel He 111, Dornier Do 17, and Junkers Ju 88 for level bombing at medium to high altitudes, and the Junkers Ju 87 Stuka for dive bombing tactics. The He 111 was used in greater numbers than the others during the conflict, and was better known, partly due to its distinctive wing shape. Each level bomber also had a few reconnaissance versions accompanying them that were used during the battle.
Although it was successful in previous Luftwaffe engagements, the Stuka suffered heavy losses in the Battle of Britain, particularly on 18 August, due to its slow speed and vulnerability to fighter interception after dive bombing a target. As the losses went up along with their limited payload and range, Stuka units were largely removed from operations over England and diverted to concentrate on shipping instead until they were eventually re-deployed to the Eastern Front in 1941. However, for some raids, they were called back, such as on 13 September to attack Tangmere airfield.
The remaining three bomber types differed in their capabilities; the Heinkel 111 was the slowest; the Ju 88 was the fastest once its mainly external bomb load was dropped; and the Do 17 had the smallest bomb load. All three bomber types suffered heavy losses from the home - based British fighters, but the Ju 88 disproportionately so. The German bombers required constant protection by the Luftwaffe 's fighter force. German escorts, however, were not sufficiently numerous. Bf 109Es were ordered to support more than 300 -- 400 bombers on any given day. Later in the conflict, when night bombing became more frequent, all three were used. However, due to its reduced bomb load, the lighter Do 17 was used less than the He 111 and Ju 88 for this purpose.
On the British side, three bomber types were mostly used on night operations against targets such as factories, invasion ports and railway centres; the Armstrong Whitworth Whitley, the Handley - Page Hampden and the Vickers Wellington were classified as heavy bombers by the RAF, although the Hampden was a medium bomber comparable to the He 111. The twin - engined Bristol Blenheim and the obsolescent single - engined Fairey Battle were both light bombers; the Blenheim was the most numerous of the aircraft equipping RAF Bomber Command and was used in attacks against shipping, ports, airfields and factories on the continent by day and by night. The Fairey Battle squadrons, which had suffered heavy losses in daylight attacks during the Battle of France, were brought up to strength with reserve aircraft and continued to operate at night in attacks against the invasion ports, until the Battle was withdrawn from UK front line service in October 1940.
Before the war, the RAF 's processes for selecting potential candidates were opened to men of all social classes through the creation in 1936 of the RAF Volunteer Reserve, which "... was designed to appeal, to... young men... without any class distinctions... '' The older squadrons of the Royal Auxiliary Air Force did retain some of their upper - class exclusiveness, but their numbers were soon swamped by the newcomers of the RAFVR; by 1 September 1939, 6,646 pilots had been trained through the RAFVR.
By summer 1940, there were about 9,000 pilots in the RAF to man about 5,000 aircraft, most of which were bombers. Fighter Command was never short of pilots, but the problem of finding sufficient numbers of fully trained fighter pilots became acute by mid-August 1940. With aircraft production running at 300 planes each week, only 200 pilots were trained in the same period. In addition, more pilots were allocated to squadrons than there were aircraft, as this allowed squadrons to maintain operational strength despite casualties and still provide for pilot leave. Another factor was that only about 30 % of the 9,000 pilots were assigned to operational squadrons; 20 % of the pilots were involved in conducting pilot training, and a further 20 % were undergoing further instruction, like those offered in Canada and in Southern Rhodesia to the Commonwealth trainees, although already qualified. The rest were assigned to staff positions, since RAF policy dictated that only pilots could make many staff and operational command decisions, even in engineering matters. At the height of fighting, and despite Churchill 's insistence, only 30 pilots were released to the front line from administrative duties.
For these reasons, and because of the permanent loss of 435 pilots during the Battle of France alone, along with many more wounded and others lost in Norway, the RAF had fewer experienced pilots at the start of the defence of its home territory. It was the lack of trained pilots in the fighting squadrons, rather than the lack of aircraft, that became the greatest concern for Air Chief Marshal Hugh Dowding, Commander of Fighter Command. Drawing from regular RAF forces, the Auxiliary Air Force and the Volunteer Reserve, the British were able to muster some 1,103 fighter pilots on 1 July. Replacement pilots, with little flight training and often no gunnery training, suffered high casualty rates, thus exacerbating the problem.
The Luftwaffe, on the other hand, were able to muster a larger number (1,450) of more experienced fighter pilots. Drawing from a cadre of Spanish Civil War veterans, these pilots already had comprehensive courses in aerial gunnery and instructions in tactics suited for fighter - versus - fighter combat. Training manuals discouraged heroism, stressing the importance of attacking only when the odds were in the pilot 's favour. Despite the high levels of experience, German fighter formations did not provide a sufficient reserve of pilots to allow for losses and leave, and the Luftwaffe was unable to produce enough pilots to prevent a decline in operational strength as the battle progressed.
The Royal Air Force roll of honour for the Battle of Britain recognises 595 non-British pilots (out of 2,936) as flying at least one authorised operational sortie with an eligible unit of the RAF or Fleet Air Arm between 10 July and 31 October 1940. These included 145 Poles, 127 New Zealanders, 112 Canadians, 88 Czechoslovaks, 10 Irish, 32 Australians, 28 Belgians, 25 South Africans, 13 French, 9 Americans, 3 Southern Rhodesians and one each from Jamaica and Mandatory Palestine. "Altogether in the fighter battles, the bombing raids, and the various patrols flown between 10 July and 31 October 1940 by the Royal Air Force, 1495 aircrew were killed, of whom 449 were fighter pilots, 718 aircrew from Bomber Command, and 280 from Coastal Command. Among those killed were 47 airmen from Canada, 24 from Australia, 17 from South Africa, 35 from Poland, 20 from Czechoslovakia and six from Belgium. Forty - seven New Zealanders lost their lives, including 15 fighter pilots, 24 bomber and eight coastal aircrew. The names of these Allied and Commonwealth airmen are inscribed in a memorial book which rests in the Battle of Britain Chapel in Westminster Abbey. In the chapel is a stained glass window which contains the badges of the fighter squadrons which operated during the battle and the flags of the nations to which the pilots and aircrew belonged. ''
An element of the Italian Royal Air Force (Regia Aeronautica) called the Italian Air Corps (Corpo Aereo Italiano or CAI) first saw action in late October 1940. It took part in the latter stages of the battle, but achieved limited success. The unit was redeployed in early 1941.
The high command 's indecision over which aim to pursue was reflected in shifts in Luftwaffe strategy. Their Air War doctrine of concentrated close air support of the army at the battlefront succeeded in the blitzkrieg offensives against Poland, Denmark and Norway, the Low Countries and France, but incurred significant losses. The Luftwaffe now had to establish or restore bases in the conquered territories, and rebuild their strength. In June 1940 they began regular armed reconnaissance flights and sporadic Störangriffe, nuisance raids of one or a few bombers, both day and night. These gave crews practice in navigation and avoiding air defences, and set off air raid alarms which disturbed civilian morale. Similar nuisance raids continued throughout the battle, into the winter months of 1940. Scattered naval mine - laying sorties began at the outset, and increased gradually over the battle period.
Göring 's operational directive of 30 June ordered destruction of the RAF as a whole, including the aircraft industry, with the aims of ending RAF bombing raids on Germany and facilitating attacks on ports and storage in the Luftwaffe blockade of Britain. Attacks on Channel shipping in the Kanalkampf began on 4 July, and were formalised on 11 July in an order by Hans Jeschonnek which added the arms industry as a target.
On 16 July Directive No. 16 ordered preparations for Operation Sea Lion, and on the next day the Luftwaffe was ordered to stand by in full readiness. Göring met his air fleet commanders, and on 24 July issued "Tasks and Goals '' of gaining air supremacy, protecting the army and navy if invasion went ahead, and attacking the Royal Navy 's ships as well as continuing the blockade. Once the RAF had been defeated, Luftwaffe bombers were to move forward beyond London without the need for fighter escort, destroying military and economic targets.
At a meeting on 1 August the command reviewed plans produced by each Fliegerkorps with differing proposals for targets including whether to bomb airfields, but failed to focus priorities. Intelligence reports gave Göring the impression that the RAF was almost defeated: the intent was that raids would attract British fighters for the Luftwaffe to shoot down. On 6 August he finalised plans for this "Operation Eagle Attack '' with Kesselring, Sperrle and Stumpff: destruction of RAF Fighter Command across the south of England was to take four days, with lightly escorted small bomber raids leaving the main fighter force free to attack RAF fighters. Bombing of military and economic targets was then to systematically extend up to the Midlands until daylight attacks could proceed unhindered over the whole of Britain.
Bombing of London was to be held back while these night time "destroyer '' attacks proceeded over other urban areas, then in culmination of the campaign a major attack on the capital was intended to cause a crisis when refugees fled London just as the Operation Sea Lion invasion was to begin. With hopes fading for the possibility of invasion, on 4 September Hitler authorised a main focus on day and night attacks on tactical targets with London as the main target, in what the British called the Blitz. With increasing difficulty in defending bombers in day raids, the Luftwaffe shifted to a strategic bombing campaign of night raids aiming to overcome British resistance by damaging infrastructure and food stocks, though intentional terror bombing of civilians was not sanctioned.
The Luftwaffe was forced to regroup after the Battle of France into three Luftflotten (Air Fleets) on Britain 's southern and northern flanks. Luftflotte 2, commanded by Generalfeldmarschall Albert Kesselring, was responsible for the bombing of southeast England and the London area. Luftflotte 3, under Generalfeldmarschall Hugo Sperrle, targeted the West Country, Wales, the Midlands, and northwest England. Luftflotte 5, led by Generaloberst Hans - Jürgen Stumpff from his headquarters in Norway, targeted the north of England and Scotland. As the battle progressed, command responsibility shifted, with Luftflotte 3 taking more responsibility for the night - time Blitz attacks while the main daylight operations fell upon Luftflotte 2 's shoulders.
Initial Luftwaffe estimates were that it would take four days to defeat the RAF Fighter Command in southern England. This would be followed by a four - week offensive during which the bombers and long - range fighters would destroy all military installations throughout the country and wreck the British aircraft industry. The campaign was planned to begin with attacks on airfields near the coast, gradually moving inland to attack the ring of sector airfields defending London. Later reassessments gave the Luftwaffe five weeks, from 8 August to 15 September, to establish temporary air superiority over England. To achieve this goal, Fighter Command had to be destroyed, either on the ground or in the air, yet the Luftwaffe had to be able to preserve its own strength to be able to support the invasion; this meant that the Luftwaffe had to maintain a high "kill ratio '' over the RAF fighters. The only alternative to the goal of air superiority was a terror bombing campaign aimed at the civilian population, but this was considered a last resort and it was (at this stage of the battle) expressly forbidden by Hitler.
The Luftwaffe kept broadly to this scheme, but its commanders had differences of opinion on strategy. Sperrle wanted to eradicate the air defence infrastructure by bombing it. His counterpart, Kesselring, championed attacking London directly -- either to bombard the British government into submission, or to draw RAF fighters into a decisive battle. Göring did nothing to resolve this disagreement between his commanders, and only vague directives were set down during the initial stages of the battle, with Göring seemingly unable to decide upon which strategy to pursue. He seemed at times obsessed with maintaining his own power base in the Luftwaffe and indulging his outdated beliefs on air fighting, which would later lead to tactical and strategic errors.
Luftwaffe formations employed a loose section of two (nicknamed the Rotte (pack)), based on a leader (Rottenführer) followed at a distance of about 200 metres by his wingman (nicknamed the Rottenhund (pack dog) or Katschmarek), who also flew slightly higher and was trained always to stay with his leader. With more room between them, both pilots could spend less time maintaining formation and more time looking around and covering each other 's blind spots. Attacking aircraft could be sandwiched between the two 109s. The Rotte allowed the Rottenführer to concentrate on getting kills, but few wingmen had the chance, leading to some resentment in the lower ranks where it was felt that the high scores came at their expense. Two sections were usually teamed up into a Schwarm, where all the pilots could watch what was happening around them. Each Schwarm in a Staffel flew at staggered heights and with about 200 metres of room between them, making the formation difficult to spot at longer ranges and allowing for a great deal of flexibility. By using a tight "cross-over '' turn, a Schwarm could quickly change direction.
The Bf 110s adopted the same Schwarm formation as the 109s, but were seldom able to use this to the same advantage. The Bf 110 's most successful method of attack was the "bounce '' from above. When attacked, Zerstörergruppen increasingly resorted to forming large "defensive circles '', where each Bf 110 guarded the tail of the aircraft ahead of it. Göring ordered that they be renamed "offensive circles '' in a vain bid to improve rapidly declining morale. These conspicuous formations were often successful in attracting RAF fighters that were sometimes "bounced '' by high - flying Bf 109s. This led to the often repeated misconception that the Bf 110s were escorted by Bf 109s.
Luftwaffe tactics were influenced by their fighters. The Bf 110 proved too vulnerable to the nimble single - engined RAF fighters. This meant the bulk of fighter escort duties fell on the Bf 109. Fighter tactics were then complicated by bomber crews who demanded closer protection. After the hard - fought battles of 15 and 18 August, Göring met with his unit leaders. During this conference, the need for the fighters to meet up on time with the bombers was stressed. It was also decided that one bomber Gruppe could only be properly protected by several Gruppen of 109s. In addition, Göring stipulated that as many fighters as possible were to be left free for Freie Jagd ("Free Hunts '': a free - roving fighter sweep preceded a raid to try to sweep defenders out of the raid 's path). The Ju 87 units, which had suffered heavy casualties, were only to be used under favourable circumstances. In early September, due to increasing complaints from the bomber crews about RAF fighters seemingly able to get through the escort screen, Göring ordered an increase in close escort duties. This decision shackled many of the Bf 109s to the bombers and, although they were more successful at protecting the bomber forces, casualties amongst the fighters mounted primarily because they were forced to fly and manoeuvre at reduced speeds.
The Luftwaffe consistently varied its tactics in its attempts to break through the RAF defences. It launched many Freie Jagd to draw up RAF fighters. RAF fighter controllers, however, were often able to detect these and position squadrons to avoid them, keeping to Dowding 's plan to preserve fighter strength for the bomber formations. The Luftwaffe also tried using small formations of bombers as bait, covering them with large numbers of escorts. This was more successful, but escort duty tied the fighters to the bombers ' slow speed and made them more vulnerable.
By September, standard tactics for raids had become an amalgam of techniques. A Freie Jagd would precede the main attack formations. The bombers would fly in at altitudes between 16,000 feet (4,900 m) and 20,000 feet (6,100 m), closely escorted by fighters. Escorts were divided into two parts (usually Gruppen), some operating in close contact with the bombers, and others a few hundred yards away and a little above. If the formation was attacked from the starboard, the starboard section engaged the attackers, the top section moving to starboard and the port section to the top position. If the attack came from the port side the system was reversed. British fighters coming from the rear were engaged by the rear section and the two outside sections similarly moving to the rear. If the threat came from above, the top section went into action while the side sections gained height to be able to follow RAF fighters down as they broke away. If attacked, all sections flew in defensive circles. These tactics were skilfully evolved and carried out, and were difficult to counter.
Adolf Galland noted:
We had the impression that, whatever we did, we were bound to be wrong. Fighter protection for bombers created many problems which had to be solved in action. Bomber pilots preferred close screening in which their formation was surrounded by pairs of fighters pursuing a zigzag course. Obviously, the visible presence of the protective fighters gave the bomber pilots a greater sense of security. However, this was a faulty conclusion, because a fighter can only carry out this purely defensive task by taking the initiative in the offensive. He must never wait until attacked because he then loses the chance of acting.
We fighter pilots certainly preferred the free chase during the approach and over the target area. This gives the greatest relief and the best protection for the bomber force.
The biggest disadvantage faced by Bf 109 pilots was that without the benefit of long - range drop tanks, usually of 300 litres (66 imp gal; 79 US gal) capacity (these were introduced in limited numbers only in the late stages of the battle), the 109s had an endurance of just over an hour and, for the 109E, a 600 km (370 mi) range. Once over Britain, a 109 pilot had to keep an eye on a red "low fuel '' light on the instrument panel: once this was illuminated, he was forced to turn back and head for France. With the prospect of two long flights over water, and knowing their range was substantially reduced when escorting bombers or during combat, the Jagdflieger coined the term Kanalkrankheit or "Channel sickness ''.
The Luftwaffe was ill - served by its lack of military intelligence about the British defences. The German intelligence services were fractured and plagued by rivalries; their performance was "amateurish ''. By 1940, there were few German agents operating in Great Britain and a handful of bungled attempts to insert spies into the country were foiled.
As a result of intercepted radio transmissions, the Germans began to realise that the RAF fighters were being controlled from ground facilities; in July and August 1939, for example, the airship Graf Zeppelin, which was packed with equipment for listening in on RAF radio and RDF transmissions, flew around the coasts of Britain. Although the Luftwaffe correctly interpreted these new ground control procedures, they were incorrectly assessed as being rigid and ineffectual. A British radar system was well known to the Luftwaffe from intelligence gathered before the war, but the highly developed "Dowding system '' linked with fighter control had been a well - kept secret. Even when good information existed, such as a November 1939 Abwehr assessment of Fighter Command strengths and capabilities by Abteilung V, it was ignored if it did not match conventional preconceptions.
On 16 July 1940, Abteilung V, commanded by Oberstleutnant "Beppo '' Schmid, produced a report on the RAF and on Britain 's defensive capabilities which was adopted by the frontline commanders as a basis for their operational plans. One of the most conspicuous failures of the report was the lack of information on the RAF 's RDF network and control systems capabilities; it was assumed that the system was rigid and inflexible, with the RAF fighters being "tied '' to their home bases. An optimistic and, as it turned out, erroneous conclusion reached was:
D. Supply Situation... At present the British aircraft industry produces about 180 to 300 first line fighters and 140 first line bombers a month. In view of the present conditions relating to production (the appearance of raw material difficulties, the disruption or breakdown of production at factories owing to air attacks, the increased vulnerability to air attack owing to the fundamental reorganisation of the aircraft industry now in progress), it is believed that for the time being output will decrease rather than increase.
In the event of an intensification of air warfare it is expected that the present strength of the RAF will fall, and this decline will be aggravated by the continued decrease in production.
Because of this statement, reinforced by another more detailed report, issued on 10 August, there was a mindset in the ranks of the Luftwaffe that the RAF would run out of frontline fighters. The Luftwaffe believed it was weakening Fighter Command at three times the actual attrition rate. Many times, the leadership believed Fighter Command 's strength had collapsed, only to discover that the RAF were able to send up defensive formations at will.
Throughout the battle, the Luftwaffe had to use numerous reconnaissance sorties to make up for the poor intelligence. Reconnaissance aircraft (initially mostly Dornier Do 17s, but increasingly Bf 110s) proved easy prey for British fighters, as it was seldom possible for them to be escorted by Bf 109s. Thus, the Luftwaffe operated "blind '' for much of the battle, unsure of its enemy 's true strengths, capabilities, and deployments. Many of the Fighter Command airfields were never attacked, while raids against supposed fighter airfields fell instead on bomber or coastal defence stations. The results of bombing and air fighting were consistently exaggerated, due to inaccurate claims, over-enthusiastic reports and the difficulty of confirmation over enemy territory. In the euphoric atmosphere of perceived victory, the Luftwaffe leadership became increasingly disconnected from reality. This lack of leadership and solid intelligence meant the Germans did not adopt consistent strategy, even when the RAF had its back to the wall. Moreover, there was never a systematic focus on one type of target (such as airbases, radar stations, or aircraft factories); consequently, the already haphazard effort was further diluted.
While the British were using radar for air defence more effectively than the Germans realised, the Luftwaffe attempted to press its own offensive with advanced radio navigation systems of which the British were initially not aware. One of these was Knickebein ("bent leg ''); this system was used at night and for raids where precision was required. It was rarely used during the Battle of Britain.
The Luftwaffe was much better prepared for the task of air - sea rescue than the RAF, specifically tasking the Seenotdienst unit, equipped with about 30 Heinkel He 59 floatplanes, with picking up downed aircrew from the North Sea, English Channel and the Dover Straits. In addition, Luftwaffe aircraft were equipped with life rafts and the aircrew were provided with sachets of a chemical called fluorescein which, on reacting with water, created a large, easy - to - see, bright green patch. In accordance with the Geneva Convention, the He 59s were unarmed and painted white with civilian registration markings and red crosses. Nevertheless, RAF aircraft attacked these aircraft, as some were escorted by Bf 109s.
After single He 59s were forced to land on the sea by RAF fighters, on 1 and 9 July respectively, a controversial order was issued to the RAF on 13 July; this stated that from 20 July, Seenotdienst aircraft were to be shot down. One of the reasons given by Churchill was:
We did not recognise this means of rescuing enemy pilots so they could come and bomb our civil population again... all German air ambulances were forced down or shot down by our fighters on definite orders approved by the War Cabinet.
The British also believed that their crews would report on convoys, the Air Ministry issuing a communiqué to the German government on 14 July that Britain was
unable, however, to grant immunity to such aircraft flying over areas in which operations are in progress on land or at sea, or approaching British or Allied territory, or territory in British occupation, or British or Allied ships. Ambulance aircraft which do not comply with the above will do so at their own risk and peril
The white He 59s were soon repainted in camouflage colours and armed with defensive machine guns. Although another four He 59s were shot down by RAF aircraft, the Seenotdienst continued to pick up downed Luftwaffe and Allied aircrew throughout the battle, earning praise from Adolf Galland for their bravery.
During early tests of the Chain Home system, the slow flow of information from the CH radars and observers to the aircraft often caused them to miss their "bandits ''. The solution, today known as the "Dowding system '', was to create a set of reporting chains to move information from the various observation points to the pilots in their fighters. It was named after its chief architect, "Stuffy '' Dowding.
Reports from CH radars and the Observer Corps were sent directly to Fighter Command Headquarters (FCHQ) at Bentley Priory where they were "filtered '' to combine multiple reports of the same formations into single tracks. Telephone operators would then forward only the information of interest to the Group headquarters, where the map would be re-created. This process was repeated to produce another version of the map at the Sector level, covering a much smaller area. Looking over their maps, Group level commanders could select squadrons to attack particular targets. From that point the Sector operators would give commands to the fighters to arrange an interception, as well as return them to base. Sector stations also controlled the anti-aircraft batteries in their area; an army officer sat beside each fighter controller and directed the gun crews when to open and cease fire.
The Dowding system dramatically improved the speed and accuracy of the information that flowed to the pilots. During the early war period it was expected that an average interception mission might have a 30 % chance of ever seeing their target. During the battle, the Dowding system maintained an average rate over 75 %, with several examples of 100 % rates -- every fighter dispatched found and intercepted its target. In contrast, Luftwaffe fighters attempting to intercept raids had to randomly seek their targets and often returned home having never seen enemy aircraft. The result is what is now known as an example of "force multiplication ''; RAF fighters were as effective as two or more Luftwaffe fighters, greatly offsetting, or overturning, the disparity in actual numbers.
While Luftwaffe intelligence reports underestimated British fighter forces and aircraft production, the British intelligence estimates went the other way: they overestimated German aircraft production, numbers and range of aircraft available, and numbers of Luftwaffe pilots. In action, the Luftwaffe believed from their pilot claims and the impression given by aerial reconnaissance that the RAF was close to defeat, and the British made strenuous efforts to overcome the perceived advantages held by their opponents.
It is unclear how much the British intercepts of the Enigma cipher, used for high - security German radio communications, affected the battle. Ultra, the information obtained from Enigma intercepts, gave the highest echelons of the British command a view of German intentions. According to F.W. Winterbotham, who was the senior Air Staff representative in the Secret Intelligence Service, Ultra helped establish the strength and composition of the Luftwaffe 's formations, the aims of the commanders and provided early warning of some raids. In early August it was decided that a small unit would be set up at FCHQ, which would process the flow of information from Bletchley and provide Dowding only with the most essential Ultra material; thus the Air Ministry did not have to send a continual flow of information to FCHQ, preserving secrecy, and Dowding was not inundated with non-essential information. Keith Park and his controllers were also told about Ultra. In a further attempt to camouflage the existence of Ultra, Dowding created a unit named No. 421 (Reconnaissance) Flight RAF. This unit (which later became No. 91 Squadron RAF), was equipped with Hurricanes and Spitfires and sent out aircraft to search for and report Luftwaffe formations approaching England. In addition the radio listening service (known as Y Service), monitoring the patterns of Luftwaffe radio traffic contributed considerably to the early warning of raids.
One of the biggest oversights of the entire system was the lack of adequate air - sea rescue organisation. The RAF had started organising a system in 1940 with High Speed Launches (HSLs) based on flying boat bases and at a number of overseas locations, but it was still believed that the amount of cross-Channel traffic meant that there was no need for a rescue service to cover these areas. Downed pilots and aircrew, it was hoped, would be picked up by any boats or ships which happened to be passing by. Otherwise the local life boat would be alerted, assuming someone had seen the pilot going into the water.
RAF aircrew were issued with a life jacket, nicknamed the "Mae West, '' but in 1940 it still required manual inflation, which was almost impossible for someone who was injured or in shock. The waters of the English Channel and Dover Straits are cold, even in the middle of summer, and clothing issued to RAF aircrew did little to insulate them against these freezing conditions. The RAF also imitated the German practice of issuing fluorescein. A conference in 1939 had placed air - sea rescue under Coastal Command. Because a number of pilots had been lost at sea during the "Channel Battle '', on 22 August, control of RAF rescue launches was passed to the local naval authorities and 12 Lysanders were given to Fighter Command to help look for pilots at sea. In all some 200 pilots and aircrew were lost at sea during the battle. No proper air - sea rescue service was formed until 1941.
In the late 1930s, Fighter Command expected to face only bombers over Britain, not single - engined fighters. A series of "Fighting Area Tactics '' were formulated and rigidly adhered to, involving a series of manoeuvres designed to concentrate a squadron 's firepower to bring down bombers. RAF fighters flew in tight, v - shaped sections ("vics '') of three aircraft, with four such "sections '' in tight formation. Only the squadron leader at the front was free to watch for the enemy; the other pilots had to concentrate on keeping station. Training also emphasised by - the - book attacks by sections breaking away in sequence. Fighter Command recognised the weaknesses of this structure early in the battle, but it was felt too risky to change tactics during the battle, because replacement pilots -- often with only minimal flying time -- could not be readily retrained, and inexperienced pilots needed the firm leadership in the air that only rigid formations could provide. German pilots dubbed the RAF formations Idiotenreihen ("rows of idiots '') because they left squadrons vulnerable to attack.
Front line RAF pilots were acutely aware of the inherent deficiencies of their own tactics. A compromise was adopted whereby squadron formations used much looser formations with one or two "weavers '' flying independently above and behind to provide increased observation and rear protection; these tended to be the least experienced men and were often the first to be shot down without the other pilots even noticing that they were under attack. During the battle, 74 Squadron under Squadron Leader Adolph "Sailor '' Malan adopted a variation of the German formation called the "fours in line astern '', which was a vast improvement on the old three aircraft "vic ''. Malan 's formation was later generally used by Fighter Command.
The weight of the battle fell upon 11 Group. Keith Park 's tactics were to dispatch individual squadrons to intercept raids. The intention was to subject incoming bombers to continual attacks by relatively small numbers of fighters and try to break up the tight German formations. Once formations had fallen apart, stragglers could be picked off one by one. Where multiple squadrons reached a raid the procedure was for the slower Hurricanes to tackle the bombers while the more agile Spitfires held up the fighter escort. This ideal was not always achieved, resulting in occasions when Spitfires and Hurricanes reversed roles. Park also issued instructions to his units to engage in frontal attacks against the bombers, which were more vulnerable to such attacks. Again, in the environment of fast moving, three - dimensional air battles, few RAF fighter units were able to attack the bombers from head - on.
During the battle, some commanders, notably Leigh - Mallory, proposed squadrons be formed into "Big Wings, '' consisting of at least three squadrons, to attack the enemy en masse, a method pioneered by Douglas Bader.
Proponents of this tactic claimed interceptions in large numbers caused greater enemy losses while reducing their own casualties. Opponents pointed out the big wings would take too long to form up, and the strategy ran a greater risk of fighters being caught on the ground refuelling. The big wing idea also caused pilots to overclaim their kills, due to the confusion of a more intense battle zone. This led to the belief big wings were far more effective than they were.
The issue caused intense friction between Park and Leigh - Mallory, as 12 Group was tasked with protecting 11 Group 's airfields whilst Park 's squadrons intercepted incoming raids. However, the delay in forming up Big Wings meant the formations often did not arrive at all or until after German bombers had hit 11 Group 's airfields. Dowding, to highlight the problem of the Big Wing 's performance, submitted a report compiled by Park to the Air Ministry on 15 November. In the report, he highlighted that during the period of 11 September -- 31 October, the extensive use of the Big Wing had resulted in just 10 interceptions and one German aircraft destroyed, but his report was ignored. Post-war analysis agrees Dowding and Park 's approach was best for 11 Group.
Dowding 's removal from his post in November 1940 has been blamed on this struggle between Park and Leigh - Mallory 's daylight strategy. However, the intensive raids and destruction wrought during the Blitz damaged both Dowding and Park in particular, for the failure to produce an effective night - fighter defence system, something for which the influential Leigh - Mallory had long criticised them.
Bomber Command and Coastal Command aircraft flew offensive sorties against targets in Germany and France during the battle.
An hour after the declaration of war, Bomber Command launched raids on warships and naval ports by day, and in night raids dropped leaflets as it was considered illegal to bomb targets which could affect civilians. After the initial disasters of the war, with Vickers Wellington bombers shot down in large numbers attacking Wilhelmshaven and the slaughter of the Fairey Battle squadrons sent to France, it became clear that they would have to operate mainly at night to avoid incurring very high losses. Churchill came to power on 10 May 1940, and night raids on German towns began with the bombing of Mönchen - Gladbach on the night of 11 May. The War Cabinet on 12 May agreed that German actions justified "unrestricted warfare '', and on 14 May they authorised an attack on the night of 14 / 15 May against oil and rail targets in Germany. At the urging of Clement Attlee, the Cabinet on 15 May authorised a full bombing strategy against "suitable military objectives '', even where there could be civilian casualties. That evening, a night time bomber campaign began against the German oil industry, communications, and forests / crops, mainly in the Ruhr area. The RAF lacked accurate night navigation, and carried small bomb loads. As the threat mounted, Bomber Command changed targeting priority on 3 June 1940 to attack the German aircraft industry. On 4 July, the Air Ministry gave Bomber Command orders to attack ports and shipping. By September, the build - up of invasion barges in the Channel ports had become a top priority target.
On 7 September, the government issued a warning that the invasion could be expected within the next few days and, that night, Bomber Command attacked the Channel ports and supply dumps. On 13 September, they carried out another large raid on the Channel ports, sinking 80 large barges in the port of Ostend. 84 barges were sunk in Dunkirk after another raid on 17 September and by 19 September, almost 200 barges had been sunk. The loss of these barges may have contributed to Hitler 's decision to postpone Operation Sea Lion indefinitely. The success of these raids was in part because the Germans had few Freya radar stations set up in France, so that air defences of the French harbours were not nearly as good as the air defences over Germany; Bomber Command had directed some 60 % of its strength against the Channel ports.
The Bristol Blenheim units also raided German - occupied airfields throughout July to December 1940, both during daylight hours and at night. Although most of these raids were unproductive, there were some successes; on 1 August, five out of twelve Blenheims sent to attack Haamstede and Evere (Brussels) were able to bomb, destroying or heavily damaging three Bf 109s of II. / JG 27 and apparently killing a Staffelkapitän identified as a Hauptmann Albrecht von Ankum - Frank. Two other 109s were claimed by Blenheim gunners. Another successful raid on Haamstede was made by a single Blenheim on 7 August which destroyed one 109 of 4. / JG 54, heavily damaged another and caused lighter damage to four more.
There were some missions which produced an almost 100 % casualty rate amongst the Blenheims; one such operation was mounted on 13 August 1940 against a Luftwaffe airfield near Aalborg in north - eastern Denmark by 12 aircraft of 82 Squadron. One Blenheim returned early (the pilot was later charged and due to appear before a court martial, but was killed on another operation); the other eleven, which reached Denmark, were shot down, five by flak and six by Bf 109s. Of the 33 crewmen who took part in the attack, 20 were killed and 13 captured.
As well as the bombing operations, Blenheim - equipped units had been formed to carry out long - range strategic reconnaissance missions over Germany and German - occupied territories. In this role, the Blenheims again proved to be too slow and vulnerable against Luftwaffe fighters, and they took constant casualties.
Coastal Command directed its attention towards the protection of British shipping, and the destruction of enemy shipping. As invasion became more likely, it participated in the strikes on French harbours and airfields, laying mines, and mounting numerous reconnaissance missions over the enemy - held coast. In all, some 9,180 sorties were flown by bombers from July to October 1940. Although this was much less than the 80,000 sorties flown by fighters, bomber crews suffered about half the total number of casualties borne by their fighter colleagues. The bomber contribution was, therefore, much more dangerous on a loss - per - sortie comparison.
Bomber, reconnaissance, and antisubmarine patrol operations continued throughout these months with little respite and none of the publicity accorded to Fighter Command. In his famous 20 August speech about "The Few '', praising Fighter Command, Churchill also made a point of mentioning Bomber Command 's contribution, adding that bombers were even then striking back at Germany; this part of the speech is often overlooked, even today. The Battle of Britain Chapel in Westminster Abbey lists in a roll of honour, 718 Bomber Command crew members, and 280 from Coastal Command who were killed between 10 July and 31 October.
Bomber and Coastal Command attacks against invasion barge concentrations in Channel ports were widely reported by the British media during September and October 1940. In what became known as ' the Battle of the Barges ' RAF attacks were claimed in British propaganda to have sunk large numbers of barges, and to have created widespread chaos and disruption to German invasion preparations. Given the volume of British propaganda interest in these bomber attacks during September and earlier October, it is striking how quickly this was overlooked once the Battle of Britain had been concluded. Even by mid-war the bomber pilots ' efforts had been largely eclipsed by a continuing focus on the Few, a result of the Air Ministry 's continuing valorisation of the "fighter boys '', beginning with the March 1941 Battle of Britain propaganda pamphlet.
The battle covered a shifting geographical area, and there have been differing opinions on significant dates: when the Air Ministry proposed 8 August as the start, Dowding responded that operations "merged into one another almost insensibly '', and proposed 10 July as the onset of increased attacks. With the caution that phases drifted into each other and dates are not firm, the Royal Air Force Museum states that five main phases can be identified:
The RAF night bombing campaign against military objectives in German towns began on 11 May. The small forces available were given ambitious objectives, but lacked night navigation capability and their isolated inaccurate attacks were thought by the Germans to be intended to terrorise civilians. From 4 July the RAF achieved some successes with raids on Channel ports, anticipating the build up for an invasion.
Following Germany 's rapid territorial gains in the Battle of France, the Luftwaffe had to reorganise its forces, set up bases along the coast, and rebuild after heavy losses. It began small scale bombing raids on Britain on the night of 5 / 6 June, and continued sporadic attacks throughout June and July. The first large - scale attack was at night, on 18 / 19 June, when small raids scattered between Yorkshire and Kent involved in total 100 bombers. These Störangriffe ("nuisance raids ''), which involved only a few aeroplanes, sometimes just one, were used to train bomber crews in both day and night attacks, to test defences and try out methods, with most flights at night. They found that, rather than carrying small numbers of large high explosive bombs, it was more effective to use more small bombs; similarly, incendiaries had to cover a large area to set effective fires. These training flights continued through August and into the first week of September. Against this, the raids also gave the British time to assess the German tactics, and invaluable time for the RAF fighters and anti-aircraft defences to prepare and gain practice.
The attacks were widespread: over the night of 30 June alarms were set off in 20 counties by just 20 bombers; the next day, 1 July, the first daylight raids were made on both Hull in Yorkshire and Wick, Caithness. On 3 July most flights were reconnaissance sorties, but 15 civilians were killed when bombs hit Guildford in Surrey. Numerous small Störangriffe raids, both day and night, were made daily through August, September and into the winter, with aims including bringing RAF fighters up to battle, destruction of specific military and economic targets, and setting off air - raid warnings to affect civilian morale: four major air - raids in August involved hundreds of bombers; in the same month 1,062 small raids were made, spread across the whole of Britain.
The Kanalkampf comprised a series of running fights over convoys in the English Channel. It was launched partly because Kesselring and Sperrle were not sure about what else to do, and partly because it gave German aircrews some training and a chance to probe the British defences. Dowding could provide only minimal shipping protection, and these battles off the coast tended to favour the Germans, whose bomber escorts had the advantage of altitude and outnumbered the RAF fighters. From 9 July reconnaissance probing by Dornier Do 17 bombers put a severe strain on RAF pilots and machines, with high RAF losses to Bf 109s. When nine 141 Squadron Defiants went into action on 19 July six were lost to Bf 109s before a squadron of Hurricanes intervened. On 25 July a coal convoy and escorting destroyers suffered such heavy losses to attacks by Stuka dive bombers that the Admiralty decided convoys should travel at night: the RAF shot down 16 raiders but lost 7 aircraft. By 8 August 18 coal ships and 4 destroyers had been sunk, but the Navy was determined to send a convoy of 20 ships through rather than move the coal by railway. After repeated Stuka attacks that day, six ships were badly damaged, four were sunk and only four reached their destination. The RAF lost 19 fighters and shot down 31 German aircraft. The Navy now cancelled all further convoys through the Channel and sent the cargo by rail. Even so, these early combat encounters provided both sides with experience.
The main attack upon the RAF 's defences was code - named Adlerangriff ("Eagle Attack ''). Intelligence reports gave Göring the impression that the RAF was almost defeated, and raids would attract British fighters for the Luftwaffe to shoot down. The strategy agreed on 6 August was to destroy RAF Fighter Command across the south of England in four days, then bombing of military and economic targets was to systematically extend up to the Midlands until daylight attacks could proceed unhindered over the whole of Britain, culminating in a major bombing attack on London.
Poor weather delayed Adlertag ("Eagle Day '') until 13 August 1940. On 12 August, the first attempt was made to blind the Dowding system, when aircraft from the specialist fighter - bomber unit Erprobungsgruppe 210 attacked four radar stations. Three were briefly taken off the air but were back working within six hours. The raids appeared to show that British radars were difficult to knock out. The failure to mount follow - up attacks allowed the RAF to get the stations back on the air, and the Luftwaffe neglected strikes on the supporting infrastructure, such as phone lines and power stations, which could have rendered the radars useless, even if the towers themselves (which were very difficult to destroy) remained intact.
Adlertag opened with a series of attacks, led again by Erpro 210, on coastal airfields used as forward landing grounds for the RAF fighters, as well as ' satellite airfields ' (including Manston and Hawkinge). As the week drew on, the airfield attacks moved further inland, and repeated raids were made on the radar chain. 15 August was "The Greatest Day '' when the Luftwaffe mounted the largest number of sorties of the campaign. Luftflotte 5 attacked the north of England. Believing Fighter Command strength to be concentrated in the south, raiding forces from Denmark and Norway ran into unexpectedly strong resistance. Inadequately escorted by Bf 110s, bombers were shot down in large numbers. North East England was attacked by 65 Heinkel 111s escorted by 34 Messerschmitt 110s, and RAF Great Driffield was attacked by 50 unescorted Junkers 88s. Out of 115 bombers and 35 fighters sent, 75 planes were destroyed and many others damaged beyond repair. Furthermore, due to early engagement by RAF fighters, many of the bombers dropped their payloads early and to little effect. As a result of these casualties, Luftflotte 5 did not appear in strength again in the campaign.
18 August, which had the greatest number of casualties to both sides, has been dubbed "The Hardest Day ''. Following this grinding battle, exhaustion and the weather reduced operations for most of a week, allowing the Luftwaffe to review their performance. "The Hardest Day '' had sounded the end for the Ju 87 in the campaign. This veteran of Blitzkrieg was too vulnerable to fighters to operate over Britain. So as to preserve the Stuka force, Göring withdrew them from the fighting. This removed the main Luftwaffe precision - bombing weapon and shifted the burden of pinpoint attacks on the already - stretched Erpro 210. The Bf 110 proved too clumsy for dogfighting with single - engined fighters, and its participation was scaled back. It would only be used when range required it or when sufficient single - engined escort could not be provided for the bombers.
Göring made yet another fateful decision: to order more bomber escorts at the expense of free - hunting sweeps. To achieve this, the weight of the attack now fell on Luftflotte 2, and the bulk of the Bf 109s in Luftflotte 3 were transferred to Kesselring 's command, reinforcing the fighter bases in the Pas - de-Calais. Stripped of its fighters, Luftflotte 3 would concentrate on the night bombing campaign. Göring, expressing disappointment with the fighter performance thus far in the campaign, also made sweeping changes in the command structure of the fighter units, replacing many Geschwaderkommodore with younger, more aggressive pilots like Adolf Galland and Werner Mölders.
Finally, Göring stopped the attacks on the radar chain. These were seen as unsuccessful, and neither the Reichsmarschall nor his subordinates realised how vital the Chain Home stations were to the defence systems. It was known that radar provided some early warning of raids, but the belief among German fighter pilots was that anything bringing up the "Tommies '' to fight was to be encouraged.
On the afternoon of 15 August, Hauptmann Walter Rubensdörffer leading Erprobungsgruppe 210 mistakenly bombed Croydon airfield (on the outskirts of London) instead of the intended target, RAF Kenley.
German intelligence reports made the Luftwaffe optimistic that the RAF, thought to be dependent on local air control, was struggling with supply problems and pilot losses. After a major raid attacking Biggin Hill on 18 August, Luftwaffe aircrew said they had been unopposed, the airfield was "completely destroyed '', and asked "Is England already finished? '' In accordance with the strategy agreed on 6 August, defeat of the RAF was to be followed by bombing military and economic targets, systematically extending up to the Midlands.
Göring ordered attacks on aircraft factories on 19 August 1940. Sixty raids on the night of 19 / 20 August targeted the aircraft industry and harbours, and bombs fell on suburban areas around London: Croydon, Wimbledon and the Maldens. Night raids were made on 21 / 22 August on Aberdeen, Bristol and South Wales. That morning, bombs were dropped on Harrow and Wealdstone, on the outskirts of London. Overnight on 22 / 23 August, the output of an aircraft factory at Filton near Bristol was drastically affected by a raid in which Ju 88 bombers released over 16 tons of high explosive bombs. On the night of 23 / 24 August over 200 bombers attacked the Fort Dunlop tyre factory in Birmingham, with a significant effect on production. A sustained bombing campaign began on 24 August with the largest raid so far, killing 100 in Portsmouth, and that night, several areas of London were bombed; the East End was set ablaze and bombs landed on central London. Some historians believe that these bombs were dropped accidentally by a group of Heinkel He 111s which had failed to find their target; this account has been contested.
More night raids were made around London on 24 / 25 August, when bombs fell on Croydon, Banstead, Lewisham, Uxbridge, Harrow and Hayes. London was on red alert over the night of 28 / 29 August, with bombs reported in Finchley, St Pancras, Wembley, Wood Green, Southgate, Old Kent Road, Mill Hill, Ilford, Chigwell and Hendon.
Göring 's directive issued on 23 August 1940 ordered ceaseless attacks on the aircraft industry and on RAF ground organisation to force the RAF to use its fighters, continuing the tactic of luring them up to be destroyed, and added that focussed attacks were to be made on RAF airfields.
From 24 August onwards, the battle was a fight between Kesselring 's Luftflotte 2 and Park 's 11 Group. The Luftwaffe concentrated all their strength on knocking out Fighter Command and made repeated attacks on the airfields. Of the 33 heavy attacks in the following two weeks, 24 were against airfields. The key sector stations were hit repeatedly: Biggin Hill and Hornchurch four times each; Debden and North Weald twice each. Croydon, Gravesend, Rochford, Hawkinge and Manston were also attacked in strength. Coastal Command 's Eastchurch was bombed at least seven times because it was believed to be a Fighter Command aerodrome. At times these raids caused some damage to the sector stations, threatening the integrity of the Dowding system.
To offset these losses, some 58 Fleet Air Arm fighter pilot volunteers were seconded to RAF squadrons, and a similar number of former Fairey Battle pilots were used. Most replacements from Operational Training Units (OTUs) had as little as nine hours flying time and no gunnery or air - to - air combat training. At this point, the multinational nature of Fighter Command came to the fore. Many squadrons and personnel from the air forces of the Dominions were already attached to the RAF, including top level commanders -- Australians, Canadians, New Zealanders, Rhodesians and South Africans. In addition, there were other nationalities represented, including Free French and Belgian pilots, and a Jewish pilot from the British mandate of Palestine.
They were bolstered by the arrival of fresh Czechoslovak and Polish squadrons. These had been held back by Dowding, who mistakenly thought non-English speaking aircrew would have trouble working within his control system: Polish and Czech fliers proved to be especially effective. The pre-war Polish Air Force had lengthy and extensive training, and high standards; with Poland conquered and under brutal German occupation, the pilots of No. 303 (Polish) Squadron, the highest - scoring Allied unit, were strongly motivated. Josef František, a Czech regular airman who had flown from the occupation of his own country to join the Polish and then French air forces before arriving in Britain, flew as a guest of 303 Squadron and was ultimately credited with the highest "RAF score '' in the Battle of Britain.
The RAF had the advantage of fighting over home territory. Pilots who bailed out of their downed aircraft could be back at their airfields within hours, while if low on fuel and / or ammunition they could be immediately rearmed. One RAF pilot interviewed in late 1940 had been shot down five times during the Battle of Britain, but was able to crash land in Britain or bail out each time. For Luftwaffe aircrews, a bailout over England meant capture -- in the critical August period, almost exactly as many Luftwaffe pilots were taken prisoner as were killed -- while parachuting into the English Channel often meant drowning or death from exposure. Morale began to suffer, and Kanalkrankheit ("Channel sickness '') -- a form of combat fatigue -- began to appear among the German pilots. Their replacement problem became even worse than that of the British.
The effect of the German attacks on airfields is unclear. According to Stephen Bungay, Dowding, in a letter to Hugh Trenchard accompanying Park 's report on the period 8 August -- 10 September 1940, states that the Luftwaffe "achieved very little '' in the last week of August and the first week of September. The only Sector Station to be shut down operationally was Biggin Hill, and it was non-operational for just two hours. Dowding admitted 11 Group 's efficiency was impaired but, despite serious damage to some airfields, only two out of 13 heavily attacked airfields were down for more than a few hours. The German refocus on London was not critical.
Retired air marshal Peter Dye, head of the RAF Museum, discussed the logistics of the battle in 2000 and 2010, dealing specifically with the single - seat fighters. Dye contends that not only was British aircraft production replacing aircraft, but replacement pilots were keeping pace with losses. The number of pilots in RAF Fighter Command increased during July, August and September. The figures indicate the number of pilots available never decreased: from July 1,200 were available, and from 1 August, 1,400 were available. Just over that number were in the field by September. In October the figure was nearly 1,600. By 1 November 1,800 were available. Throughout the battle, the RAF had more fighter pilots available than the Luftwaffe. Although the RAF 's reserves of single seat fighters fell during July, the wastage was made up for by an efficient Civilian Repair Organisation (CRO), which by December had repaired and put back into service some 4,955 aircraft, and by aircraft held at Air Servicing Unit (ASU) airfields.
Richard Overy agrees with Dye and Bungay. Overy asserts only one airfield was temporarily put out of action and "only '' 103 pilots were lost. British fighter production produced 496 new aircraft in July and 467 in August, and another 467 in September (not counting repaired aircraft), covering the losses of August and September. Overy indicates the number of serviceable and total strength returns reveal an increase in fighters from 3 August to 7 September, 1,061 on strength and 708 serviceable to 1,161 on strength and 746 serviceable. Moreover, Overy points out that the number of RAF fighter pilots grew by one - third between June and August 1940. Personnel records show a constant supply of around 1,400 pilots in the crucial weeks of the battle. In the second half of September it reached 1,500. The shortfall of pilots was never above 10 %. The Germans never had more than between 1,100 and 1,200 pilots, a deficiency of up to one - third. "If Fighter Command were ' the few ', the German fighter pilots were fewer ''.
Other scholars assert that this period was the most dangerous of all. In The Narrow Margin, published in 1961, historians Derek Wood and Derek Dempster believed that the two weeks from 24 August to 6 September represented a real danger. According to them, from 24 August to 6 September 295 fighters had been totally destroyed and 171 badly damaged, against a total output of 269 new and repaired Spitfires and Hurricanes. They assert that 103 pilots were killed or missing and 128 were wounded, which represented a total wastage of 120 pilots per week out of a fighting strength of just fewer than 1,000. They conclude that during August no more than 260 fighter pilots were turned out by OTUs and casualties in the same month were just over 300. A full squadron establishment was 26 pilots whereas the average in August was 16. In their assessment, the RAF was losing the battle. Denis Richards, in his 1953 contribution to the official British account History of the Second World War, agreed that lack of pilots, especially experienced ones, was the RAF 's greatest problem. He states that between 8 and 18 August 154 RAF pilots were killed, severely wounded, or missing, while only 63 new pilots were trained. Availability of aircraft was also a serious issue. While its reserves during the Battle of Britain never declined to a half dozen planes as some later claimed, Richards describes 24 August to 6 September as the critical period because during these two weeks Germany destroyed far more aircraft through its attacks on 11 Group 's southeast bases than Britain was producing. Three more weeks of such a pace would indeed have exhausted aircraft reserves. Germany had seen heavy losses of pilots and aircraft as well however, thus its shift to night - time attacks in September. On 7 September RAF aircraft losses fell below British production and remained so until the end of the war.
Hitler 's "Directive No. 17 -- For the conduct of air and sea warfare against England '' issued on 1 August 1940, reserved to himself the right to decide on terror attacks as measures of reprisal. Hitler issued a directive that London was not to be bombed save on his sole instruction. In preparation, detailed target plans under the code name Operation Loge for raids on communications, power stations, armaments works and docks in the Port of London were distributed to the Fliegerkorps in July. The port areas were crowded next to residential housing and civilian casualties would be expected, but this would combine military and economic targets with indirect effects on morale. The strategy agreed on 6 August was for raids on military and economic targets in towns and cities to culminate in a major attack on London. In mid August raids were made on targets on the outskirts of London.
Luftwaffe doctrine included the possibility of retaliatory attacks on cities, and since 11 May small scale night raids by RAF Bomber Command had frequently bombed residential areas. The Germans assumed this was deliberate, and as the raids increased in frequency and scale the population grew impatient for measures of revenge. On 25 August 1940, 81 bombers of Bomber Command were sent out to raid industrial and commercial targets in Berlin. Clouds prevented accurate identification and the bombs fell across the city, causing some casualties among the civilian population as well as damage to residential areas. Continuing RAF raids on Berlin led to Hitler withdrawing his directive on 30 August, and giving the go - ahead to the planned bombing offensive. On 3 September Göring planned to bomb London daily, with General Albert Kesselring 's enthusiastic support, having received reports the average strength of RAF squadrons was down to five or seven fighters out of twelve and their airfields in the area were out of action. Hitler issued a directive on 5 September to attack cities including London. In his widely publicised speech delivered on 4 September 1940, Hitler condemned the bombing of Berlin and presented the planned attacks on London as reprisals. The first daylight raid was titled Vergeltungsangriff (revenge attack).
On 7 September, a massive series of raids involving nearly four hundred bombers and more than six hundred fighters targeted docks in the East End of London, day and night. The RAF anticipated attacks on airfields, and 11 Group rose to meet them in greater numbers than the Luftwaffe expected. The first official deployment of Leigh - Mallory 's 12 Group Big Wing took twenty minutes to form up, missing its intended target but encountering another formation of bombers while still climbing. The pilots returned, apologetic about their limited success, and blamed the delay on being scrambled too late.
The German press jubilantly announced that "one great cloud of smoke stretches tonight from the middle of London to the mouth of the Thames. '' Reports reflected the briefings given to crews before the raids -- "Everyone knew about the last cowardly attacks on German cities, and thought about wives, mothers and children. And then came that word ' Vengeance! ' '' Pilots reported seeing ruined airfields as they flew towards London, appearances which gave intelligence reports the impression of devastated defences. Göring maintained that the RAF was close to defeat, making invasion feasible.
Fighter Command had been at its lowest ebb, short of men and machines, and the break from airfield attacks allowed them to recover. 11 Group had considerable success in breaking up daytime raids. 12 Group repeatedly disobeyed orders and failed to meet requests to protect 11 Group airfields, but their experiments with increasingly large Big Wings had some success. The Luftwaffe began to abandon their morning raids, with attacks on London starting late in the afternoon for fifty - seven consecutive nights.
The most damaging aspect to the Luftwaffe of targeting London was the increase in range. The Bf 109E escorts had a limited fuel capacity resulting in only a 660 km (410 mile) maximum range solely on internal fuel, and when they arrived had only 10 minutes of flying time before turning for home, leaving the bombers undefended by fighter escorts. Its eventual stablemate, the Focke - Wulf Fw 190 A, was flying only in prototype form in the summer of 1940; the first 28 Fw 190s were not delivered until November 1940. The Fw 190A - 1 had a maximum range of 940 km (584 miles) on internal fuel, 40 % greater than the Bf 109E. The Messerschmitt Bf 109 E-7 corrected this deficiency by adding a ventral centre - line ordnance rack to take either an SC 250 bomb or a standard 300 litre Luftwaffe drop tank to double the range to 1,325 km (820 mi). The ordnance rack was not retrofitted to earlier Bf 109Es until October 1940.
On 14 September, Hitler chaired a meeting with the OKW staff. Göring was in France directing the decisive battle, so Erhard Milch deputised for him. Hitler asked "Should we call it off altogether? ''. General Hans Jeschonnek, Luftwaffe Chief of Staff, begged for a last chance to defeat the RAF and for permission to launch attacks on civilian residential areas to cause mass panic. Hitler refused the latter, perhaps unaware of how much damage had already been done to civilian targets. He reserved for himself the power to unleash the terror weapon. Instead political will was to be broken by destroying the material infrastructure, the weapons industry, and stocks of fuel and food. On 15 September, two massive waves of German attacks were decisively repulsed by the RAF by deploying every aircraft in 11 Group. Sixty German and 26 RAF aircraft were shot down.
Two days after the German defeat Hitler postponed preparations for the invasion of Britain. Henceforth, in the face of mounting losses in men, aircraft and the lack of adequate replacements, the Luftwaffe completed their gradual shift from daylight bomber raids and continued with nighttime bombing. 15 September is commemorated as Battle of Britain Day.
At the 14 September OKW conference, Hitler acknowledged that the Luftwaffe had still not gained the air superiority needed for the Operation Sealion invasion. In agreement with Raeder 's written recommendation, Hitler said the campaign was to intensify regardless of invasion plans: "The decisive thing is the ceaseless continuation of air attacks. '' Jeschonnek proposed attacking residential areas to cause "mass panic '', but Hitler turned this down: he reserved to himself the option of terror bombing. British morale was to be broken by destroying infrastructure, armaments manufacturing, fuel and food stocks. On 16 September, Göring gave the order for this change in strategy. This new phase was to be the first independent strategic bombing campaign, in hopes of a political success forcing the British to give up. Hitler hoped it might result in "eight million going mad '' (referring to the population of London in 1940), which would "cause a catastrophe '' for the British. In those circumstances, Hitler said, "even a small invasion might go a long way ''. Hitler was against cancelling the invasion as "the cancellation would reach the ears of the enemy and strengthen his resolve ''. On 19 September, Hitler ordered a reduction in work on Sealion. He doubted if strategic bombing could achieve its aims, but ending the air war would be an open admission of defeat. He had to maintain the appearance of concentration on defeating Britain, to conceal from Joseph Stalin his covert aim to invade the Soviet Union.
Throughout the battle, most Luftwaffe bombing raids had been at night. The Germans increasingly suffered unsustainable losses in daylight raids, and the last massive daytime attacks were on 15 September. A raid of 70 bombers on 18 September also suffered badly, and day raids were gradually phased out, leaving the main attacks to be made at night. Fighter Command still lacked any effective means of intercepting night - time raiders: the night fighter force, mostly Blenheims and Beaufighters, lacked airborne radar and so had no way of finding the bombers. Anti-aircraft guns were diverted to London 's defences, but had a much reduced success rate against night attacks.
From mid September, Luftwaffe daylight bombing was gradually taken over by Bf 109 fighters adapted to carry one 250 kg bomb. Small groups of these fighter - bombers would carry out Störangriffe raids, escorted by large formations of about 200 to 300 combat fighters. They flew at altitudes over 20,000 feet (6,100 m), where the Bf 109 had an advantage over all RAF fighters except the Spitfire. The raids disturbed civilians and continued the war of attrition against Fighter Command. The raids were intended to carry out precision bombing of military or economic targets, but it was hard to achieve sufficient accuracy with a single bomb. Sometimes, when attacked, the fighter - bombers had to jettison the bomb to function as fighters. The RAF was at a disadvantage, and changed defensive tactics by introducing standing patrols of Spitfires at high altitude to monitor incoming raids. On a sighting, other patrols at lower altitude would fly up to join the battle.
A Junkers Ju 88 returning from a raid on London was shot down in Kent on 27 September resulting in the Battle of Graveney Marsh, the last action between British and foreign military forces on British mainland soil.
German bombing of Britain reached its peak in October and November 1940. In post war interrogation, Wilhelm Keitel described the aims as economic blockade, in conjunction with submarine warfare, and attrition of Britain 's military and economic resources. The Luftwaffe wanted to achieve victory on its own, and was reluctant to cooperate with the navy. Their strategy for blockade was to destroy ports and storage facilities in towns and cities. Priorities were based on the pattern of trade and distribution, so for these months London was the main target. In November their attention turned to other ports and industrial targets around Britain.
Hitler postponed the Sealion invasion on 13 October "until the spring of 1941 ''. It was not until Hitler 's Directive 21 was issued, on 18 December 1940, that the threat to Britain of invasion finally ended.
During the battle, and for the rest of the war, an important factor in keeping public morale high was the continued presence in London of King George VI and his wife Queen Elizabeth. When war broke out in 1939, the King and Queen decided to stay in London and not flee to Canada, as had been suggested. George VI and Elizabeth officially stayed in Buckingham Palace throughout the war, although they often spent weekends at Windsor Castle to visit their daughters, Elizabeth (the future queen) and Margaret. Buckingham Palace was damaged by bombs which landed in the grounds on 10 September and, on 13 September, more serious damage was caused by two bombs which destroyed the Royal Chapel. The royal couple were in a small sitting room about 80 yards from where the bombs exploded. On 24 September, in recognition of the bravery of civilians, King George VI inaugurated the award of the George Cross.
Overall, by 2 November, the RAF fielded 1,796 pilots, an increase of over 40 % from July 1940 's count of 1,259 pilots. Based on German sources (a February 1944 report by Otto Bechtle, a Luftwaffe intelligence officer attached to KG 2) translated by the Air Historical Branch, Stephen Bungay asserts German fighter and bomber "strength '' declined without recovery, and that from August to December 1940 German fighter and bomber strength declined by 30 and 25 percent respectively. In contrast, Williamson Murray argues (using translations by the Air Historical Branch) that 1,380 German bombers were on strength on 29 June 1940, 1,420 bombers on 28 September, 1,423 level bombers on 2 November and 1,393 bombers on 30 November 1940. In July -- September the number of Luftwaffe pilots available fell by 136, but the number of operational pilots had shrunk by 171 by September. The training organisation of the Luftwaffe was failing to replace losses. German fighter pilots, in contrast to popular perception, were not afforded training or rest rotations, unlike their British counterparts. The first week of September accounted for 25 % of Fighter Command 's, and 24 % of the Luftwaffe 's, overall losses. Between 26 August and 6 September, on only one day (1 September) did the Germans destroy more aircraft than they lost. Losses were 325 German and 248 British.
Luftwaffe losses for August numbered 774 aircraft to all causes, representing 18.5 % of all combat aircraft at the beginning of the month. Fighter Command 's losses in August were 426 fighters destroyed, amounting to 40 per cent of 1,061 fighters available on 3 August. In addition, 99 German bombers and 27 other types were destroyed between 1 and 29 August.
From July to September, the Luftwaffe 's loss records indicate the loss of 1,636 aircraft, 1,184 to enemy action. This represented 47 % of the initial strength of single - engined fighters, 66 % of twin - engined fighters, and 45 % of bombers. This indicates the Germans were running out of aircrew as well as aircraft.
Throughout the battle, the Germans greatly underestimated the size of the RAF and the scale of British aircraft production. Across the Channel, the Air Intelligence division of the Air Ministry consistently overestimated the size of the German air force and the productive capacity of the German aviation industry. As the battle was fought, both sides exaggerated the losses inflicted on the other by an equally large margin. However, the intelligence picture formed before the battle encouraged the Luftwaffe to believe that such losses pushed Fighter Command to the very edge of defeat, while the exaggerated picture of German air strength persuaded the RAF that the threat it faced was larger and more dangerous than was the case. This led the British to the conclusion that another fortnight of attacks on airfields might force Fighter Command to withdraw their squadrons from the south of England. The German misconception, on the other hand, encouraged first complacency, then strategic misjudgement. The shift of targets from air bases to industry and communications was made because it was assumed that Fighter Command was virtually eliminated.
Between 24 August and 4 September, German serviceability rates, which were acceptable at Stuka units, were running at 75 % with Bf 109s, 70 % with bombers and 65 % with Bf 110s, indicating a shortage of spare parts. All units were well below established strength. The attrition was beginning to affect the fighters in particular. By 14 September, the Luftwaffe 's Bf 109 Geschwader possessed only 67 % of their operational crews against authorised aircraft. For Bf 110 units it was 46 %, and for bombers 59 %. A week later the figures had dropped to 64 %, 52 % and 52 % respectively. Serviceability rates in Fighter Command 's fighter squadrons between 24 August and 7 September were listed as 64.8 % on 24 August, 64.7 % on 31 August and 64.25 % on 7 September 1940.
Due to the failure of the Luftwaffe to establish air supremacy, a conference assembled on 14 September at Hitler 's headquarters. Hitler concluded that air superiority had not yet been established and "promised to review the situation on 17 September for possible landings on 27 September or 8 October. Three days later, when the evidence was clear that the German Air Force had greatly exaggerated the extent of their successes against the RAF, Hitler postponed Sea Lion indefinitely. ''
Propaganda was an important element of the air war which began to develop over Britain from 18 June 1940 onwards, when the Luftwaffe began small, probing daylight raids to test RAF defences. One of many examples of these small - scale raids was the destruction of a school at Polruan in Cornwall by a single raider. Into early July, the British media 's focus on the air battles increased steadily, with the press, magazines, BBC radio and newsreels daily conveying the contents of Air Ministry communiques. The German OKW communiques matched Britain 's efforts in claiming the upper hand.
Central to the propaganda war on both sides of the Channel were aircraft claims, as discussed under ' Attrition statistics '. These daily claims were important both for sustaining British home front morale and for persuading America to support Britain, and were produced by the Air Ministry 's Air Intelligence branch. Under pressure from American journalists and broadcasters to prove that the RAF 's claims were genuine, RAF intelligence compared pilots ' claims with actual aircraft wrecks and those seen to crash into the sea. It was soon realised that there was a discrepancy between the two, but the Air Ministry decided not to reveal this. In fact, it was not until May 1947 that the actual figures were released to the public, by which time the matter was of far less importance. Many, though, refused to believe the revised figures, including Douglas Bader.
The place of the Battle of Britain in British popular memory is due in no small part to the successful propaganda campaign waged by the Air Ministry between July and October 1940, and to its valorising of the Few from March 1941 onwards. The immensely successful 3d pamphlet The Battle of Britain saw huge international sales, leading even Goebbels to admire its propaganda value. Focusing only upon the Few, with no mention of RAF bomber attacks against invasion barges, the pamphlet soon established the Battle of Britain as a major victory for Fighter Command. This in turn inspired a wide range of feature films, books, magazines, works of art, poetry, radio plays and MOI short films.
It is notable that this most impressive of British victories had, in essence, been proclaimed within only five months of the cessation of large - scale daylight air battles, and without reference to Hitler and the OKW 's reasoning for not proceeding with Operation Sea Lion. The continuing post-war popularity of the Battle of Britain is in fact directly attributable to the Air Ministry 's latter - 1940 air communiques, with the media in turn broadcasting or publishing RAF aircraft claims. As noted above, these in turn led to the March 1941 pamphlet, which inspired a wide range of cultural responses to the Few and the Battle of Britain. The Air Ministry built upon this by developing the Battle of Britain Sunday commemoration, supporting the Battle of Britain clasp issued to the Few in 1945, and, from 1945, Battle of Britain Week. The Battle of Britain window in Westminster Abbey was also encouraged by the Air Ministry, with Lords Trenchard and Dowding on its committee. By July 1947, when the window was unveiled, the Battle of Britain had already attained central prominence as Fighter Command 's most notable victory, with the Few alone credited with preventing invasion in 1940. Although given widespread media coverage in September and October 1940, RAF Bomber and Coastal Command raids against invasion barge concentrations had already been forgotten by war 's end.
The Battle of Britain marked the first major defeat of Hitler 's military forces, with air superiority seen as the key to victory. Pre-war theories had led to exaggerated fears of strategic bombing, and UK public opinion was buoyed by coming through the ordeal. For the RAF, Fighter Command had achieved a great victory in successfully carrying out Sir Thomas Inskip 's 1937 air policy of preventing the Germans from knocking Britain out of the war. Churchill concluded his famous 18 June ' Battle of Britain ' speech in the House of Commons by referring to pilots and aircrew who fought the Battle: "... if the British Empire and its Commonwealth lasts for a thousand years, men will still say, ' This was their finest hour. ' ''
The battle also significantly shifted American opinion. During the battle, many Americans accepted the view promoted by Joseph Kennedy, the American ambassador in London, who believed that the United Kingdom could not survive. Roosevelt wanted a second opinion, and sent "Wild Bill '' Donovan on a brief visit to the UK; he became convinced the UK would survive and should be supported in every possible way. Before the end of the year American journalist Ralph Ingersoll, after returning from Britain, published a book concluding that "Adolf Hitler met his first defeat in eight years '' in what might "go down in history as a battle as important as Waterloo or Gettysburg ''. The turning point was when the Germans reduced the intensity of the Blitz after 15 September. According to Ingersoll, "(a) majority of responsible British officers who fought through this battle believe that if Hitler and Göring had had the courage and the resources to lose 200 planes a day for the next five days, nothing could have saved London ''; instead, "(the Luftwaffe 's) morale in combat is definitely broken, and the RAF has been gaining in strength each week. ''
Both sides in the battle made exaggerated claims of numbers of enemy aircraft shot down. In general, claims were two to three times the actual numbers, because of the confusion of fighting in dynamic three - dimensional air battles. Postwar analysis of records has shown that between July and September, the RAF claimed 2,698 kills, while the Luftwaffe fighters claimed 3,198 RAF aircraft downed. Total losses, and start and end dates for recorded losses, vary for both sides. Luftwaffe losses from 10 July to 30 October 1940 total 1,977 aircraft, including 243 twin - and 569 single - engined fighters, 822 bombers and 343 non combat types. In the same period, RAF Fighter Command aircraft losses number 1,087, including 53 twin - engined fighters. To the RAF figure should be added 376 Bomber Command and 148 Coastal Command aircraft lost conducting bombing, mining, and reconnaissance operations in defence of the country.
There is a consensus among historians that the Luftwaffe were unable to crush the RAF. Stephen Bungay described Dowding and Park 's strategy of choosing when to engage the enemy whilst maintaining a coherent force as vindicated; their leadership, and the subsequent debates about strategy and tactics, however, had created enmity among RAF senior commanders and both were sacked from their posts in the immediate aftermath of the battle. All things considered, the RAF proved to be a robust and capable organisation which was to use all the modern resources available to it to the maximum advantage. Richard Evans wrote:
Irrespective of whether Hitler was really set on this course, he simply lacked the resources to establish the air superiority that was the sine qua non of a successful crossing of the English Channel. A third of the initial strength of the German air force, the Luftwaffe, had been lost in the western campaign in the spring. The Germans lacked the trained pilots, the effective fighter aircraft, and the heavy bombers that would have been needed.
The Germans launched some spectacular attacks against important British industries, but they could not destroy the British industrial potential, and made little systematic effort to do so. Hindsight does not disguise the fact the threat to Fighter Command was very real, and for the participants it seemed as if there was a narrow margin between victory and defeat. Nevertheless, even if the German attacks on the 11 Group airfields which guarded southeast England and the approaches to London had continued, the RAF could have withdrawn to the Midlands out of German fighter range and continued the battle from there. The victory was as much psychological as physical. Writes Alfred Price:
The truth of the matter, borne out by the events of 18 August is more prosaic: neither by attacking the airfields, nor by attacking London, was the Luftwaffe likely to destroy Fighter Command. Given the size of the British fighter force and the general high quality of its equipment, training and morale, the Luftwaffe could have achieved no more than a Pyrrhic victory. During the action on 18 August it had cost the Luftwaffe five trained aircrew killed, wounded or taken prisoner, for each British fighter pilot killed or wounded; the ratio was similar on other days in the battle. And this ratio of 5: 1 was very close to that between the number of German aircrew involved in the battle and those in Fighter Command. In other words the two sides were suffering almost the same losses in trained aircrew, in proportion to their overall strengths. In the Battle of Britain, for the first time during the Second World War, the German war machine had set itself a major task which it patently failed to achieve, and so demonstrated that it was not invincible. In stiffening the resolve of those determined to resist Hitler the battle was an important turning point in the conflict.
The British victory in the Battle of Britain was achieved at a heavy cost. Total British civilian losses from July to December 1940 were 23,002 dead and 32,138 wounded, with one of the largest single raids on 19 December 1940, in which almost 3,000 civilians died. With the culmination of the concentrated daylight raids, Britain was able to rebuild its military forces and establish itself as an Allied stronghold, later serving as a base from which the Liberation of Western Europe was launched.
Winston Churchill summed up the effect of the battle and the contribution of RAF Fighter Command, RAF Bomber Command, RAF Coastal Command and the Fleet Air Arm with the words, "Never in the field of human conflict was so much owed by so many to so few ''. Pilots who fought in the battle have been known as The Few ever since; at times being specially commemorated on 15 September, "Battle of Britain Day ''. On this day in 1940, the Luftwaffe embarked on their largest bombing attack yet, forcing the engagement of the entirety of the RAF in defence of London and the South East, which resulted in a decisive British victory that proved to mark a turning point in Britain 's favour.
Within the Commonwealth, Battle of Britain Day has more usually been observed on the third Sunday in September, and even on the second Thursday in September in some areas of the British Channel Islands.
The day has been observed by many artists over the years, often with works that show the battle itself. Many mixed media artists have also created pieces in honour of the Battle of Britain.
Plans for the Battle of Britain window in Westminster Abbey were begun during wartime, with the committee chaired by Lords Trenchard and Dowding. Public donations paid for the window itself, which was officially unveiled by King George VI on 10 July 1947. Although not an ' official ' memorial in the sense of being paid for by the government, the window and chapel have since been viewed as such. During the late 1950s and 1960, various proposals were advanced for a national monument to the Battle of Britain, which were also the focus of several letters in The Times. In 1960 the Conservative government decided against a further monument, taking the view that the credit should be shared more broadly than Fighter Command alone, and that there was little public appetite for one. All subsequent memorials are the result of private subscription and initiative, as discussed below.
There are numerous memorials to the battle. The most important ones are the Battle of Britain Monument in London and the Battle of Britain Memorial at Capel - le - Ferne in Kent. Westminster Abbey and St James 's Church, Paddington both have memorial windows to the battle, replacing windows that were destroyed during the campaign. There is also a memorial at the former Croydon Airport, one of the RAF bases during the battle, and a memorial to the pilots at Armadale Castle on the Isle of Skye in Scotland, which is topped by a raven sculpture.
There are also two museums to the battle: one at Hawkinge in Kent and one at Stanmore in London, at the former RAF Bentley Priory.
In 2015 the RAF created an online ' Battle of Britain 75th Anniversary Commemorative Mosaic ' composed of pictures of "the few '' -- the pilots and aircrew who fought in the battle -- and "the many '' -- ' the often unsung others whose contribution during the Battle of Britain was also vital to the RAF 's victory in the skies above Britain ', submitted by participants and their families.
Gallery: Victoria Embankment, London; Capel - le - Ferne, Kent; Armadale Castle; Westminster Abbey; St James 's Church, Paddington; Croydon Airport; Monument of Polish Pilots, Northolt.
The story of the battle was documented in, amongst many others, the 1969 film Battle of Britain, which drew many respected British actors to portray key figures of the battle, including Sir Laurence Olivier as Hugh Dowding and Trevor Howard as Keith Park. It also starred Michael Caine, Christopher Plummer and Robert Shaw as Squadron Leaders. Former participants of the battle, including Douglas Bader, James Lacey, Robert Stanford Tuck, Adolf Galland and Dowding himself, served as technical advisors. An Italian film from around the same time, Eagles Over London (1969), also featured the Battle of Britain. The 1988 ITV mini-series Piece of Cake, an aerial drama about a fictional Second World War RAF fighter squadron in 1940, features the battle. The Czech film Dark Blue World (2001) also featured the battle, focusing on the Czech pilots who fought in it. An inaccurate, fictionalised version of the battle is shown in the 2001 movie Pearl Harbor, in which the battle is depicted as still going on into 1941.
It has also been the subject of many documentaries, including the 1941 Allied propaganda film Churchill 's Island, winner of the first - ever Academy Award for Documentary Short Subject. There was also the 1943 The Battle of Britain in Frank Capra 's Why We Fight series. It was included in an episode of Battlefield Britain. In 2010, actor Julian Glover played a 101 - year - old Polish veteran RAF pilot in the short film Battle for Britain.
|
the first european country to establish control over a colony in north africa was | First wave of European colonization - wikipedia
The first European colonization wave took place from the early 15th century (Portuguese conquest of Ceuta in 1415) until the early 19th century (French invasion of Algeria in 1830), and primarily involved the European colonization of the Americas, though it also included the establishment of European colonies in India and in Maritime Southeast Asia. During this period, European interests in Africa primarily focused on the establishment of trading posts there, particularly for the African slave trade.
The time period in which much of the first wave of European colonization (and other exploratory ventures) occurred is often labeled the Age of Discovery. A later major phase of European colonization, which started in the late 19th century and primarily focused on Africa and Asia, is known as the period of New Imperialism.
Religious zeal played a large role in Spanish and Portuguese overseas activities. While the Pope himself was a political power to be heeded (as evidenced by his authority to decree whole continents open to colonization by particular kings), the Church also sent missionaries to convert the indigenous peoples of other continents to the Catholic faith. Thus, the 1455 Papal Bull Romanus Pontifex granted the Portuguese all lands behind Cape Bojador and allowed them to reduce pagans and other enemies of Christ to perpetual slavery.
Later, the 1481 Papal Bull Aeterni regis granted all lands south of the Canary Islands to Portugal, while in May 1493 the Spanish - born Pope Alexander VI decreed in the Bull Inter caetera that all lands west of a meridian only 100 leagues west of the Cape Verde Islands should belong to Spain, while new lands discovered east of that line would belong to Portugal. These arrangements were later refined by the 1494 Treaty of Tordesillas.
The Dominicans and Jesuits, notably Francis Xavier in Asia, were particularly active in this endeavour. Many buildings erected by the Jesuits still stand, such as the Cathedral of Saint Paul in Macau and the Santisima Trinidad de Paraná in Paraguay, an example of a Jesuit Reduction.
Spanish treatment of the indigenous populations provoked a fierce debate at home in 1550 -- 51, dubbed the Valladolid debate, over whether Indians possessed souls and if so, whether they were entitled to the basic rights of mankind. Bartolomé de Las Casas, author of A Short Account of the Destruction of the Indies, championed the cause of the natives, and was opposed by Sepúlveda, who claimed Amerindians were "natural slaves ''.
The School of Salamanca, which gathered theologians such as Francisco de Vitoria (1480 -- 1546) or Francisco Suárez (1548 -- 1617), argued in favor of the existence of natural law, which thus gave some rights to indigenous people. However, while the School of Salamanca limited Charles V 's imperial powers over colonized people, they also legitimized the conquest, defining the conditions of "Just War ''. For example, these theologians admitted the existence of the right for indigenous people to reject religious conversion, which was a novelty for Western philosophical thought. However, Suárez also conceived many particular cases -- a casuistry -- in which conquest was legitimized. Hence, war was justified if the indigenous people refused free transit and commerce to the Europeans; if they forced converts to return to idolatry; if there came to be a sufficient number of Christians in the newly discovered land that they wished to receive from the Pope a Christian government; or if the indigenous people lacked just laws, magistrates, agricultural techniques, etc. In any case, title taken according to this principle must be exercised with Christian charity, warned Suárez, and for the advantage of the Indians. Thus, the School of Salamanca legitimized the conquest while at the same time limiting the absolute power of the sovereign, which was celebrated in other parts of Europe under the notion of the divine right of kings.
In the 1970s, the Jesuits would become a main proponent of the Liberation theology which openly supported anti-imperialist movements. It was officially condemned in 1984 and in 1986 by then Cardinal Ratzinger (later Pope Benedict XVI) as the head of the Congregation for the Doctrine of the Faith, under charges of Marxist tendencies, while Leonardo Boff was suspended.
It was not long before the exclusivity of Iberian claims to the Americas was challenged by other European powers, primarily the Netherlands, France and England: the view taken by the rulers of these nations is epitomized by the quotation attributed to Francis I of France demanding to be shown the clause in Adam 's will excluding his authority from the New World.
This challenge initially took the form of privateering raids (such as that led by Francis Drake) on Spanish treasure fleets or coastal settlements, but later, Northern European countries began establishing settlements of their own, primarily in areas that were outside of Spanish interests, such as what is now the eastern seaboard of the United States and Canada, or islands in the Caribbean, such as Aruba, Martinique and Barbados, that had been abandoned by the Spanish in favour of the mainland and larger islands.
Whereas Spanish colonialism was based on the religious conversion and exploitation of local populations via encomiendas (many Spaniards emigrated to the Americas to elevate their social status, and were not interested in manual labour), Northern European colonialism was frequently bolstered by people fleeing religious persecution or intolerance (for example, the Mayflower voyage). The motive for emigration was not to become an aristocrat nor to spread one 's faith but to start afresh in a new society, where life would be hard but one would be free to exercise one 's religious beliefs. The most populous emigration of the 17th century was that of the English, and after a series of wars with the Dutch and the French the English overseas possessions came to dominate the east coast of North America, an area stretching from Virginia northwards to New England and Newfoundland, although during the 17th century an even greater number of English emigrants settled in the West Indies.
However, the English, French and Dutch were no more averse to making a profit than the Spanish and Portuguese, and whilst their areas of settlement in the Americas proved to be devoid of the precious metals found by the Spanish, trade in other commodities and products that could be sold at massive profit in Europe provided another reason for crossing the Atlantic, in particular furs from Canada, tobacco and cotton grown in Virginia and sugar in the islands of the Caribbean and Brazil. Due to the massive depletion of indigenous labour, plantation owners had to look elsewhere for manpower for these labour - intensive crops. They turned to the centuries - old slave trade of west Africa and began transporting humans across the Atlantic on a massive scale -- historians estimate that the Atlantic slave trade brought between 10 and 12 million individuals to the New World. The islands of the Caribbean soon came to be populated by slaves of African descent, ruled over by a white minority of plantation owners interested in making a fortune and then returning to their home country to spend it.
The Leyes de Burgos of January 27, 1512 codified the government of the indigenous people of the New World, since the common law of Spain was not applied in these recently discovered territories. The scope of the laws was originally restricted to the island of Hispaniola, but was later extended to Puerto Rico and Jamaica. They authorized and legalized the colonial practice of creating encomiendas, where Indians were grouped together to work under colonial masters, limiting the size of these establishments to a minimum of 40 and a maximum of 150 people. The document also prohibited the use of any form of punishment by the encomenderos, reserving it for officials established in each town for the implementation of the laws. It further ordered that the Indians be catechized, outlawed bigamy, and required that the huts and cabins of the Indians be built together with those of the Spanish. It respected, in some ways, the traditional authorities, granting chiefs exemptions from ordinary jobs and granting them various Indians as servants. The poor fulfilment of the laws in many cases led to innumerable protests and claims. In fact, the laws were so often poorly applied that they were seen as simply a legalization of the previous poor situation. This would create momentum for reform, carried out through the Leyes Nuevas ("New Laws '') in 1542. Ten years later, the Dominican friar Bartolomé de las Casas would publish A Short Account of the Destruction of the Indies in the midst of the Valladolid Controversy, a debate over whether Amerindians possessed souls. Las Casas, bishop of Chiapas, was opposed by Sepúlveda, who claimed Amerindians were "natural slaves ''.
In the French empire, slave trade and other colonial rules were regulated by Louis XIV 's 1689 Code Noir.
From its very outset, Western colonialism was operated as a joint public - private venture. Columbus ' voyages to the Americas were partially funded by Italian investors, but whereas the Spanish state maintained a tight rein on trade with its colonies (by law, the colonies could only trade with one designated port in the mother country and treasure was brought back in special convoys), the English, French and Dutch granted what were effectively trade monopolies to joint - stock companies such as the East India Companies and the Hudson 's Bay Company. The Massachusetts Bay Company, founded in 1628 / 9, swiftly established a form of self - governance following the Cambridge Agreement of August 1629, whereby subsequent meetings of the board of governors took place in Massachusetts itself.
In 1498, the Portuguese arrived in India. Rivalry among reigning European powers saw the entry of the Dutch, British, French and Danish, among others. The fractured, debilitated kingdoms of India were gradually taken over by the Europeans and indirectly controlled by puppet rulers. In 1600, Queen Elizabeth I granted a charter forming the East India Company to trade with India and eastern Asia. The British landed at Surat in 1624. By the 19th century, they had assumed direct and indirect control over most of India.
The arrival of the conquistadores caused the annihilation of most of the Amerindians. However, contemporary historians now generally reject the Black Legend, according to which the brutality of the European colonists accounted for most of the deaths. It is now generally believed that diseases such as smallpox, brought over in the Columbian Exchange, were the greatest destroyer, although the brutality of the conquest itself is not contested. Genocidal policies were more common in post-colonial states, notably in the 19th - century United States and Argentina, where indigenous populations were systematically exterminated. For example, Juan Manuel de Rosas, Argentinian caudillo from 1829 to 1852, openly pursued the extermination of the local population, an event related by Darwin in The Voyage of the Beagle (1839). His campaigns were followed by the "Conquest of the Desert '' in the 1870s -- 80s. The result was the death of much of the Mapuche and Araucanian population of Patagonia. After the near - total disappearance of the Amerindians, the demand for labour in the mines and the sugar cane plantations led to the booming of the Atlantic slave trade, which is especially apparent in the Caribbean, where the largest ethnic group is of African descent.
Contemporary historians debate the legitimacy of calling the near - disappearance of the Amerindians a "genocide ''. Estimates of the pre-Columbian population have ranged from a low of 8.4 million to a high of 112.5 million persons; in 1976, geographer William Denevan derived a "consensus count '' of about 54 million people.
David Stannard has argued that "The destruction of the Indians of the Americas was, far and away, the most massive act of genocide in the history of the world '', with almost 100 million Amerindians killed in what he calls the American Holocaust. Like Ward Churchill, he believes that the American natives were deliberately and systematically exterminated over the course of several centuries, and that the process continues to the present day.
Stannard 's claim of 100 million deaths has been disputed because he makes no distinction between death from violence and death from disease. In response, political scientist R.J. Rummel has instead estimated that over the centuries of European colonization about 2 million to 15 million American indigenous people were the victims of what he calls democide. "Even if these figures are remotely true '', writes Rummel, "then this still make this subjugation of the Americas one of the bloodier, centuries long, democides in world history. ''
Spain and Portugal sought to exploit the labour of foreign and indigenous peoples during colonial contact with the New World. The Portuguese and Spanish use of slavery in Latin America was a lucrative business that served the empires ' pursuit of economic influence at any cost. The economic pursuits of the Spanish and Portuguese empires ushered in the era of the Atlantic Slave Trade.
In the fifteenth century, Portugal shifted its attention to this economic endeavor. Its ships sailed from the borders of the Sahara desert to the entirety of the West African coast. The records of Manuel Bautista Pérez, a notable Portuguese slave trader, give insight into the numbers and treatment of the African slaves at the outset of the Atlantic Slave Trade. Pérez and his men conducted slave trading in which thousands of African people were bought from local tribal leaders and transported across the Atlantic to South America. Contrary to popular belief, Portuguese slave traders did not acquire slaves by force. According to documents written by Manuel Pérez, slaves were only made available under certain conditions. The most notable condition was bartering "items that the leaders wanted and were interested in ''. Items such as bread, coal, precious stones, and firearms were provided in exchange for slaves. Furthermore, local tribal leaders did not simply give up their own people for these commodities; those sold had typically been taken in intertribal wars, held for debts, or convicted of crimes.
Labor in the Spanish and Portuguese colonies became scarce. European diseases and forced labor began killing the indigenous people in overwhelming numbers. African slaves were therefore seen as a business solution to the labor shortage. These slaves were forced to work in jobs such as agriculture and mining. According to David Eltis, areas controlled by the Spanish such as Mexico, Peru, and large parts of Central America used forced slave labor in "mining activities ''. In 1494 the Pope ushered in the Treaty of Tordesillas, granting Spain and Portugal two separate parts of the world. Under this treaty, Portugal held the monopoly on acquiring slaves from Africa. However, Spain, like Portugal, needed a labor force to pursue its own economic gains, which gave Portugal an increased revenue stream. African slaves were sold to the Spanish colonies through an arrangement known as the asiento, by which the Spanish Crown granted the right to acquire African slaves from the Portuguese traders.
In terms of the treatment of slaves, Portuguese policies on their acquisition reflect a ruthless pursuit of economic wealth. A single trader could deal in nearly 3,600 slaves a year, a figure which shows that traders tried to acquire as many slaves as they could in the shortest amount of time. This haste led to the deaths of thousands of African people. Newly bought slaves were kept tightly packed in highly flammable huts while awaiting transport. Once aboard the ships, many hundreds of people would again be shoved into the lower compartments, collectively chained up, and given little to eat. By these actions, "nearly a quarter of the slaves transported died before ever reaching the destination ''. Many of them suffocated in the lower compartments as the hatches on the deck remained closed, restricting the circulation of air. Slaves were often branded with a mark upon their skin to identify either the ship they arrived on or the company that purchased them. In addition, the slaves were seen as a "potentially economic utility '' and were often treated like cattle when moved about. Many Africans died to meet the labor requirements of Spain and Portugal.
Both Spain and Portugal share a similar history with regard to the treatment of slaves in their colonies. As time progressed and new generations of slaves lived under imperial rule, Spanish and Portuguese internal reforms dealt with African slaves in areas such as "the purchasing and selling of slaves, legal ownership, succession upon death of owner, the rights of slaves to buy their liberty, and penalties to those who ran away ''. Strict social control was constantly maintained over the slave population. Throughout, the goal was to create and sustain a labor force that would yield maximum economic output. The lucrative business the Portuguese sought on the West African coast ushered in an era in which human labor, at any cost, was used for the extraction of wealth.
|
list of maharatna navratna and miniratna companies of india | Public sector undertakings in India - Wikipedia
A state - owned enterprise in India is called a public sector undertaking (PSU) or a public sector enterprise. These companies are owned by the union government of India, or one of the many state or territorial governments, or both. A company 's stock must be majority - owned by the government for it to be classed as a PSU. PSUs may be classified as central public sector enterprises (CPSEs) or state level public enterprises (SLPEs). In 1951 there were just five enterprises in the public sector in India, but by March 1991 this had increased to 246.
CPSEs are companies in which the direct holding of the Central Government or other CPSEs is 51 % or more. They are administered by the Ministry of Heavy Industries and Public Enterprises.
When India achieved independence in 1947, it was primarily an agricultural country with a weak industrial base. The national consensus was in favour of rapid industrialisation of the economy which was seen as the key to economic development, improving living standards and economic sovereignty. Building upon the Bombay Plan, which noted the requirement of government intervention and regulation, the first Industrial Policy Resolution announced in 1948 laid down broad contours of the strategy of industrial development. Subsequently, the Planning Commission was constituted in March 1950 and the Industrial (Development and Regulation) Act was enacted in 1951 with the objective of empowering the government to take necessary steps to regulate industrial development. Prime Minister Jawaharlal Nehru promoted an economic policy based on import substitution industrialisation and advocated a mixed economy. He believed that the establishment of basic and heavy industry was fundamental to the development and modernisation of the Indian economy. India 's second five year plan (1956 -- 60) and the Industrial Policy Resolution of 1956 emphasised the development of public sector enterprises to meet Nehru 's national industrialisation policy. Indian statistician Prasanta Chandra Mahalanobis was instrumental to its formulation, which was subsequently termed the Feldman -- Mahalanobis model.
The major consideration for the setting up of PSUs was to accelerate the growth of core sectors of the economy; to serve the equipment needs of strategically important sectors, and to generate employment and income. A large number of "sick units '' were taken over from the private sector. Additionally, Indira Gandhi 's government nationalised fourteen of India 's largest private banks in 1969, and an additional six in 1980. This government - led industrial policy, with corresponding restrictions on private enterprise, was the dominant pattern of Indian economic development until the 1991 Indian economic crisis. After the crisis, the government began dis - investing its ownership of several PSUs to raise capital and privatise companies facing poor financial performance and low efficiency.
Many PSUs have been awarded additional financial autonomy. These companies are "public sector companies that have comparative advantages '', giving them greater autonomy to compete in the global market so as to "support (them) in their drive to become global giants ''. Financial autonomy was initially awarded to nine PSUs with Navratna status in 1997. Originally, the term Navaratna meant a talisman composed of nine precious gems. Later, this term was adopted in the courts of Gupta emperor Vikramaditya and Mughal emperor Akbar, as the collective name for nine extraordinary courtiers at their respective courts.
In 2010, the government established the higher Maharatna category, which raises a company 's investment ceiling from Rs. 1,000 crore to Rs. 5,000 crore. The Maharatna firms can now decide on investments of up to 15 per cent of their net worth in a project while the Navaratna companies could invest up to Rs 1,000 crore without explicit government approval. Two categories of Miniratnas afford less extensive financial autonomy.
Guidelines for awarding Ratna status are as follows:
Maharatna: an average annual net worth of Rs. 10,000 crore for 3 years, or an average annual turnover of Rs. 20,000 crore for 3 years (against the Rs. 25,000 crore prescribed earlier).
Navratna: a company must first be a Miniratna and have 4 independent directors on its board before it can be made a Navratna.
PSUs in India are also categorised based on their special non-financial objectives and are registered under Section 8 of Companies Act, 2013 (erstwhile Section 25 of Companies Act, 1956).
As on 13 September 2017 there are 8 Maharatnas, 16 Navratnas and 74 Miniratnas. There are nearly 300 CPSEs (central public sector enterprises) in total.
List of Maharatnas
List of Navratnas
List of Miniratna - I
List of Miniratna - II
Others: Power System Operation Corporation Ltd.
|
who did mandy moore play in princess diaries | The Princess Diaries (film) - Wikipedia
The Princess Diaries is a 2001 American teen romantic comedy film directed by Garry Marshall and written by Gina Wendkos, based on Meg Cabot 's 2000 novel of the same name. It stars Anne Hathaway (in her film debut) as Mia Thermopolis, a teenager who discovers that she is the heir to the throne of the fictional Kingdom of Genovia, ruled by her grandmother Queen dowager Clarisse Renaldi (Julie Andrews). The film also stars Heather Matarazzo, Héctor Elizondo, Mandy Moore, and Robert Schwartzman.
Released in North America on August 3, 2001, the film was a commercial success, grossing $165.3 million worldwide. The film was followed by a sequel, The Princess Diaries 2: Royal Engagement, released in August 2004.
Teenager Mia Thermopolis lives with her artist mother, Helen, and her cat, Fat Louie in a remodeled San Francisco firehouse. A somewhat awkward and unpopular girl, Mia has a fear of public speaking, and often wishes to be "invisible ''. She has a crush on Josh Bryant, but is frequently teased by both him and his cheerleader girlfriend, Lana Thomas. Mia 's only friendships are in the form of the equally unpopular Lilly Moscovitz and Lilly 's brother, Michael, who secretly has a crush on Mia.
Just before her sixteenth birthday, Mia learns that her paternal grandmother, Clarisse, is visiting from Genovia, a small European kingdom. When Mia goes to meet her grandmother at a large house (later revealed to be the Genovian consulate), Clarisse reveals she is actually Queen Clarisse Renaldi, and that her son, Mia 's late father, was Crown Prince of Genovia. Mia is stunned to learn she is a princess and heir to the Genovian throne. In shock, Mia runs home and angrily confronts her mother, who explains she had planned to tell Mia on her eighteenth birthday, but that her father 's death has forced the matter. Queen Clarisse visits and explains that if Mia refuses the throne, Genovia will be without a ruler (a subplot involves a scheming baron and his unsightly baroness quietly rooting for Mia 's downfall). Helen persuades a hesitant Mia to attend "princess lessons '' with the Queen, telling her she does not have to make her decision until the upcoming Genovian Independence Day ball.
Mia is given a glamorous makeover, the use of a limousine, and a bodyguard (the Queen 's head of security, Joe). This and Mia 's frequent absences from the lessons make Lilly suspicious and jealous, and she accuses Mia of trying to be like the popular girls. Mia breaks down and tells Lilly everything and swears her to secrecy. However, the San Francisco Chronicle learns that Mia is the Genovian Crown Princess after royal hairdresser Paolo breaks his confidentiality agreement (so his work would be known), causing a press frenzy, and a sudden surge in popularity at school for Mia. In a craven urge for fame, many of her classmates bluff that they are friends of the princess to reporters.
At a state dinner, Mia embarrasses herself with her clumsiness, delighting her rivals for the crown. However, all is not lost, as the situation amuses a stuffy diplomat, and the Queen tells Mia the next day that she found it fun. Deciding it is time the two bonded as grandmother and granddaughter, the Queen allows Mia to take her out for the day to the Musée Mécanique, an amusement arcade. The day almost ends terribly when Mia 's car stalls on a hill and rams backward into a cable car, but Queen Clarisse saves the day by "appointing '' the attending police officer and the tram driver to the Genovian "Order of the Rose '' (something she clearly made up on the spot), flattering them into dropping any charges. Mia sees this and is impressed with her grandmother.
Later, Mia is delighted when Josh Bryant invites her to a beach party, but her acceptance hurts Lilly and Michael, with whom she had plans. Things go awry when the press arrive, tipped off by Lana. Josh uses Mia to get his fifteen minutes of fame by publicly kissing her, while Lana tricks Mia into changing in a tent, pulling it away as the paparazzi arrive, giving them a scandalous shot of Mia in a towel. The photos appear on tabloid covers the next day, leaving Queen Clarisse furious at Mia. A humiliated Mia tells Clarisse that she is renouncing the throne, feeling she is nowhere near ready to be a true princess. Joe later reminds the Queen that although Mia is a princess, she is still a teenager, and her granddaughter.
Back at school, Mia rescues her friendships with Lilly and Michael by inviting them to the Genovian Independence Day Ball, and gets back at Josh by hitting a baseball into his groin during gym class. She finally stands up to Lana in defense of Jeremiah, whom Lana was mocking, by smearing ice cream on Lana 's cheerleader outfit and declaring that, while Mia has a chance to grow out of her awkward ways, Lana will always be a jerk. The teachers do not interfere, knowing full well that Lana deserved it. While Lilly is excited at the prospect of attending a royal ball, Michael, heartbroken over Mia 's initial feelings for Josh, turns her down. Clarisse apologizes to Mia for being furious at her over the beach incident, and states that she must publicly announce her decision to renounce becoming princess of Genovia. Mia, terrified at this large responsibility placed upon her, plans to run away. However, when she finds a letter from her late father, his touching words make her change her mind, and she makes her way to the ball. Mia 's car breaks down in the rain, but she is rescued by Joe, who had suspected she was going to run.
When they arrive, a drenched and untidy Mia voices her acceptance of her role as Princess of Genovia. After changing into an opulent ballgown, Mia accompanies Clarisse to the ballroom, where she is formally introduced and invited to dance. Michael, accepting an apologetic gift from Mia (a pizza with M & M candies cleverly topped to say "sorry ''), arrives at the ball, and after a quick dance, they adjourn to the courtyard. Mia confesses her feelings to him, stating that even when she was constantly teased and embarrassed at school, Michael liked her for who she truly was. Mia shares her first kiss with Michael, while Clarisse and Joe are seen holding hands. In the final scene, Mia is shown on a private plane with Fat Louie, writing in her diary, explaining that she is moving with her mother to Genovia, just as the beautiful royal palace and landscape come into view below.
Filming took place from September 18 to December 8, 2000. The film was produced by Whitney Houston and Debra Martin Chase and directed by Garry Marshall. Anne Hathaway was cast for the role of Mia because Garry Marshall 's granddaughters saw her audition tape and said she had the best "princess hair. '' According to Hathaway, the first choice for the role of Mia Thermopolis was Liv Tyler, but the studio preferred to cast unfamiliar faces.
Héctor Elizondo, who appears in all the films which Marshall directs, plays Joe, the head of Genovian security. Marshall 's daughter, Kathleen, plays Clarisse 's secretary Charlotte Kutaway. Charlotte 's surname is mentioned only in the credits, and Garry Marshall says it is a reference to how she is often used in cutaway shots. In one scene, Robert Schwartzman 's real - life group Rooney makes a cameo playing a garage band named Flypaper, whose lead singer is Michael, played by Schwartzman. The cable car tourist was portrayed by Kathy Garver.
The book was set in New York City, but the film 's location was changed to San Francisco. West Coast radio personalities Mark & Brian appear as themselves.
The film opened in 2,537 theaters in North America and grossed $22,862,269 in its opening weekend. It grossed $165,335,153 worldwide -- $108,248,956 in North America and $57,086,197 in other territories.
Review aggregator Rotten Tomatoes reports that 47 % of 113 film critics gave the film a positive review, with a rating average of 5.2 / 10. The site 's consensus reads, "A charming, if familiar, makeover movie for young teenage girls. '' Metacritic, which assigns a weighted average score out of 100 to reviews from mainstream critics, gives the film a score of 52 based on 27 reviews.
A sequel, The Princess Diaries 2: Royal Engagement, was released on August 11, 2004, with Garry Marshall returning to direct and Debra Martin Chase to produce the sequel. Unlike the first film, it is not based on any of the books. Most of the cast returned for the sequel, including Anne Hathaway, Julie Andrews, Héctor Elizondo, Heather Matarazzo, and Larry Miller. New cast and characters include Viscount Mabrey (John Rhys - Davies), Lord Nicholas Devereaux (Chris Pine), and Andrew Jacoby (Callum Blue).
|
when did the beatles do the rooftop concert | The Beatles ' rooftop concert - wikipedia
The Beatles ' rooftop concert was the final public performance of the English rock band the Beatles. On 30 January 1969, the band, with keyboardist Billy Preston, surprised a central London office and fashion district with an impromptu concert from the roof of the headquarters of the band 's multimedia corporation Apple Corps at 3 Savile Row. In a 42 - minute set, the Beatles were heard playing nine takes of five songs before the Metropolitan Police Service asked them to reduce the volume. Footage from the performance was later used in the 1970 documentary film Let It Be.
Although the concert was unannounced, the Beatles had planned on performing live during their Get Back sessions earlier in January. It is uncertain who had the idea for a rooftop concert, but the suggestion was conceived just days before the actual event. George Harrison brought in keyboardist Billy Preston as an additional musician, in the hope that a talented outside observer would encourage the band to be tight and focused. Ringo Starr remembered:
"There was a plan to play live somewhere. We were wondering where we could go -- ' Oh, the Palladium or the Sahara '. But we would have had to take all the stuff, so we decided, ' Let 's get up on the roof ' ''.
The audio was recorded onto two eight - track recorders in the basement of Apple by engineer Alan Parsons, and film director Michael Lindsay - Hogg brought in a camera crew to capture several angles of the performance -- including reactions from people on the street.
When the Beatles first started playing, there was some confusion from spectators watching five stories below, many of whom were on their lunch break. As the news of the event spread, crowds of onlookers began to congregate in the streets and on the roofs of local buildings. While most responded positively to the concert, the Metropolitan Police Service grew concerned about noise and traffic issues. Apple employees initially refused to let police inside, ultimately reconsidering when threatened with arrest.
As police ascended to the roof, the Beatles realised that the concert would eventually be shut down, but continued to play for several more minutes. Paul McCartney improvised the lyrics of his song "Get Back '' to reflect the situation, "You 've been playing on the roofs again, and you know your Momma does n't like it, she 's gon na have you arrested! '' The concert came to an end with the conclusion of "Get Back '', with John Lennon saying, "I 'd like to say thank you on behalf of the group and ourselves and I hope we 've passed the audition ''.
The rooftop concert consisted of nine takes of five Beatles songs: "Get Back '', "Do n't Let Me Down '', "I 've Got a Feeling '', "One After 909 '' and "Dig a Pony ''.
The first performance of "I 've Got a Feeling '' and the recordings of "One After 909 '' and "Dig a Pony '' were later used for the album Let It Be. In 1996, a "rooftop '' version of "Get Back '', which was the last song of the Beatles ' final live performance, was included in Anthology 3. An edit of the two takes of "Do n't Let Me Down '' was included on Let It Be... Naked. There was also a brief jam of "I Want You (She 's So Heavy) '' while the cameramen changed film.
The Beatles ' rooftop concert marked the end of an era for many fans. They did record one more album, Abbey Road, but by September 1969 the Beatles had unofficially disbanded. Several of the rooftop performances, particularly that of "Dig a Pony '', showed the Beatles once again in top form, if only temporarily. Fans believed the rooftop concert might have been a try - out for a return to live performances and touring.
The Rutles ' "Get Up and Go '' sequence in the film All You Need Is Cash mimics the footage of the rooftop concert, and uses similar camera angles. In January 2009, tribute band the Bootleg Beatles attempted to stage a 40th anniversary concert in the same location, but were refused permission by Westminster City Council due to licensing problems.
In The Simpsons fifth season episode "Homer 's Barbershop Quartet '', the Be Sharps (Homer, Apu, Barney and Principal Skinner) perform a rendition of one of their previous hits on a rooftop. George Harrison, who guest - starred in the episode, is shown saying dismissively "It 's been done! '' As the song ends and the credits begin, Homer repeats John Lennon 's phrase about passing the audition and everyone laughs, including Barney until he says, "I do n't get it. ''
In the 2007 film Across The Universe, a musical made up entirely of Beatles ' music, Sadie 's band performs a rooftop concert in New York City which mimics the original. It is interrupted and closed down by the New York Police Department.
U2 also referenced the concert in their video for "Where the Streets Have No Name '', which featured a similar rooftop concert in Los Angeles.
McCartney played a surprise mini-concert in midtown Manhattan on 15 July 2009 from the top of the marquee of the Ed Sullivan Theater, where he was recording a performance for Late Show with David Letterman. News of the event spread via Twitter and word of mouth, and nearby street corners were closed off to accommodate fans for the set, which duplicated the original Beatles gig.
|
ll cool j i'm gonna knock you out lyrics | Mama Said Knock You Out - wikipedia
Mama Said Knock You Out is the fourth studio album by American rapper LL Cool J. It was produced mostly by Marley Marl and recorded at his "House of Hits '' home studio in Chestnut Ridge and at Chung King House of Metal in New York City. After the disappointing reception of LL Cool J 's 1989 album Walking with a Panther, Mama Said Knock You Out was released by Def Jam Recordings in 1990 to commercial and critical success.
Mama Said Knock You Out was released on August 27, 1990, by Def Jam Recordings. It was promoted with five singles, four of which became hits: "The Boomin ' System, '' "Around the Way Girl, '' the title track, and "6 Minutes of Pleasure. '' The album was certified double platinum in the United States, having shipped two million copies. According to Yahoo! Music 's Frank Meyer, Mama Said Knock You Out "seemed to set the world on fire in 1990 '', helped by its hit title track and LL Cool J 's "sweaty performance '' on MTV Unplugged. The title song reached number 17 on the Billboard Hot 100 and was certified gold by the RIAA. LL Cool J won Best Rap Solo Performance at the Grammy Awards of 1992.
In The New York Times, Jon Pareles wrote that Mama Said Knock You Out reestablished LL Cool J as "the most articulate of the homeboys '', sounding "tougher and funnier '' rapping about "crass materialism '' and "simple pleasures ''. In Mark Cooper 's review for Q, he wrote, "This 22 - year - old veteran has lost neither his eye for everyday detail nor his sheer relish for words. '' Select magazine 's Richard Cook said, "LL 's stack of samples add the icing to a cake that is all dark, remorseless rhythm, a lo - fi drum beat shadowed by a crude bass rumble. It could be Jamaican dub they 're making here, if it were n't for LL 's slipper lip. '' Mama Said Knock You Out was voted the ninth best record of 1990 in the Pazz & Jop, an annual poll of American critics published by The Village Voice.
The album was included in Hip Hop Connection 's The phat forty, a rundown of rap 's greatest albums. "The LP 's title track proved to be the single of the year and probably LL 's best record since ' I 'm Bad ', '' HHC said, "while ' Eat ' Em Up L Chill ' and ' To Da Break Of Dawn ' was (sic) the sound of Cool J getting his own back -- and in style. '' In 1998, it was listed in The Source 's 100 Best Rap Albums. In 2005, comedian Chris Rock listed it as the sixth greatest hip - hop album ever in a guest article for Rolling Stone.
The single version of the track "Jingling Baby (Remixed but Still Jingling) '' was remixed by Marley Marl.
Credits are adapted from AllMusic.
|
office of the secretary general's envoy on youth | Office of the Secretary - General 's Envoy on Youth - wikipedia
The Secretary - General 's Envoy on Youth serves as a global advocate for addressing the needs and rights of young people, as well as for bringing the United Nations closer to them. The Envoy 's Office is part of the United Nations Secretariat and supports multi-stakeholder partnerships related to the United Nations system - wide action plan on youth and to youth volunteer initiatives. The office also promotes the empowerment and fosters the leadership of youth at the national, regional, and global levels, including by exploring and encouraging mechanisms for young people 's participation in the work of the United Nations and in political and economic processes, with a special focus on the most marginalized and vulnerable youth.
The United Nations Secretary - General identified working with and for young people as one of the Organization 's top priorities. Ahmad Alhendawi was appointed the first - ever Envoy on Youth, and served in this position from 2013 until 2017. During his tenure, he tasked the UN Volunteer program to establish a Youth Volunteer Programme and the UN Inter-Agency Network on Youth Development (IANYD) to develop a System - Wide Action Plan on Youth. On 20 June 2017, Jayathma Wickramanayake took up the position of Envoy on Youth.
The Envoy on Youth is mandated with the task of bringing the voices of young people to the United Nations System. Moreover, the Envoy on Youth also works with different UN agencies, governments, civil society, academia and media stakeholders towards enhancing, empowering and strengthening the position of young people within and outside of the United Nations system. The role of the Envoy on Youth is also described by the UN Secretary - General as a "harmoniser between all UN agencies '' bringing them together to explore cooperation opportunities for working with and for young people.
The work - plan of the Office of the Secretary General 's Envoy on Youth responds to the UN Secretary - General 's Five - year Action Agenda, and is guided by the World Programme of Action for Youth. It outlines four priority areas: Participation, Advocacy, Partnerships and Harmonisation. In addition, the focus of the Envoy 's office is placed on employment and civic engagement while ensuring the integration of a gender perspective across all work areas. In parallel, the Envoy 's office supports the Education First Initiative and the planned activities in relation to youth and education.
Under each priority area of the work plan, the Office of the Secretary General 's Envoy on Youth has outlined a set of goals and actions.
Firstly, in relation to the Participation priority, the main goal is to increase youth accessibility to the UN through promoting structured mechanisms. To this end the office is promoting the establishment of the UN Panel on Youth. It is supporting the first ever regional Economic and Social Council (ECOSOC) Youth Forums, and the Global ECOSOC Youth Forum. In addition, the Envoy works on encouraging the Resident Coordinators and UN Country Teams to establish National Youth Advisory Groups to engage youth in the preparation of the UN Development Assistance Framework (UNDAF), and regularly partners with the United Nations Major Group for Children and Youth. The Office of the Secretary General 's Envoy on Youth encourages more governments to participate in the UN Youth Delegate Programme. The Office of the Secretary General 's Envoy on Youth also provides support for the new United Nations Volunteers (UNV) Youth modality. Further, the Office of the Secretary General 's Envoy on Youth works on creating and maintaining active channels of communication between youth - led organisations and the United Nations, as well as on enhancing youth access to information related to the United Nations ' work on youth.
Secondly, under the Advocacy priority area, the Office of the Secretary General 's Envoy on Youth promotes stronger youth participation in setting, implementing and evaluating the various development frameworks and increases international awareness and attention to youth issues. The Envoy has pledged to advocate for a youth - friendly Post-2015 Development Agenda, and is utilising various platforms to advocate for a stronger youth agenda at the national, regional and international levels. In addition, the Envoy 's Office has deployed traditional and new media tools to advocate for stronger youth participation with a special focus on marginalized youth and young women and girls.
Thirdly, in regard to the priority area of Partnerships, the Office of the Secretary General 's Envoy on Youth engages with Member States, the private sector, academic institutions, media and civil society, including youth - led organisations, in the UN programmes on youth and facilitates multi-stakeholder partnerships on youth issues. The Office of the Secretary General 's Envoy on Youth coordinates closely with Member States to further support youth issues and reinforce a youth perspective in relevant resolutions. The Envoy 's office also supports evidence - based research on youth issues, networking of youth - led organisations and works on building a global coalition for youth rights.
Fourthly, in terms of the Harmonisation priority area, the Office of the Secretary General 's Envoy on Youth works as a catalyst to enhance the coordination and harmonisation of youth programming among UN agencies. The actions toward this priority include the promotion of the implementation of the World Programme of Action for Youth, working closely with the UN Inter-Agency Network on Youth Development and support for the establishment of inter-agency networks at the regional and national levels. Moreover, the Envoy 's Office supports the implementation of the System Wide Action Plan on Youth, and enhances the communications and the flow of information between UN agencies and youth.
The Office of the Secretary - General 's Envoy on Youth was envisioned by the Secretary - General of the United Nations, Ban Ki - moon. On January 17, 2013, Ahmad Alhendawi became the first Youth Envoy to be appointed by the Secretary - General. As Envoy on Youth, he was responsible for several reforms on youth across the priority areas of Participation, Advocacy, Partnerships and Harmonisation. Under Alhendawi 's leadership, the Office pioneered the 17 UN Young Leaders initiative, which recognises individuals aged 18 to 30 from different countries who advocate for the Sustainable Development Goals (SDGs) through their innovative work. The Young Leaders initiative has existed since September 2016, promoting the SDGs through individual grass - roots projects.
|
who says cry havoc and let slip the dogs of war | The dogs of war (phrase) - wikipedia
In English, the dogs of war is a phrase spoken by Mark Antony in Act 3, Scene 1, line 273 of William Shakespeare 's Julius Caesar: "Cry ' Havoc! ', and let slip the dogs of war ''.
In the scene, Mark Antony is alone with Julius Caesar 's body, shortly after Caesar 's assassination. In a soliloquy, he reveals his intention to incite the crowd at Caesar 's funeral to rise up against the assassins. Foreseeing violence throughout Italy, Antony even imagines Caesar 's spirit joining in the exhortations: "ranging for revenge, with Ate by his side come hot from hell, shall in these confines with a Monarch 's voice cry "Havoc! '' and let slip the dogs of war. ''
In a literal reading, "dogs '' are the familiar animals, trained for warfare; "havoc '' is a military order permitting the seizure of spoil after a victory and "let slip '' is to release from the leash. Shakespeare 's source for Julius Caesar was The Life of Marcus Brutus from Plutarch 's Lives, and the concept of the war dog appears in that work, in the section devoted to the Greek warrior Aratus.
Apart from the literal meaning, a parallel can be drawn with the prologue to Henry V, where the warlike king is described as having at his heels, awaiting employment, the hounds "famine, sword and fire ''.
Along those lines, an alternative proposed meaning is that "the dogs of war '' refers figuratively to the wild pack of soldiers "let slip '' by war 's breakdown of civilized behavior and / or their commanders ' orders to wreak "havoc '', i.e., rape, pillage, and plunder.
Yet another reading interprets "dog '' in its mechanical sense ("any of various usually simple mechanical devices for holding, gripping, or fastening that consist of a spike, bar, or hook ''). The "dogs '' are "let slip '' -- referring to the act of releasing. Thus, the "dogs of war '' are the political and societal restraints against war that operate during times of peace.
Victor Hugo used "dogs of war '' as a metaphor for cannon fire in chapter XIV of Les Misérables:
Another cannonade was audible at some distance. At the same time that the two guns were furiously attacking the redoubt from the Rue de la Chanvrerie, two other cannons, trained one from the Rue Saint - Denis, the other from the Rue Aubry - le - Boucher, were riddling the Saint - Merry barricade. The four cannons echoed each other mournfully. The barking of these sombre dogs of war replied to each other.
In modern English usage "dogs of war '' is used to describe mercenaries.
The phrase has entered so far into general usage - in books, music, film and television - that it is now regarded as a cliché.
|
what was the government's response to shays rebellion | Shays ' rebellion - wikipedia
Shays ' Rebellion was an armed uprising in Massachusetts (mostly in and around Springfield) during 1786 and 1787. Revolutionary War veteran Daniel Shays led four thousand rebels (called Shaysites) in an uprising against perceived economic and civil rights injustices. In 1787, the rebels marched on the United States ' Armory at Springfield in an unsuccessful attempt to seize its weaponry and overthrow the government.
The rebellion took place in a political climate where reform of the country 's governing document, the Articles of Confederation, was widely seen as necessary. The events of the rebellion served as a catalyst for the calling of the U.S. Constitutional Convention, and ultimately the shape of the new government.
The shock of Shays ' Rebellion drew retired General George Washington back into public life, leading to his two terms as the United States ' first President. The exact nature and consequence of the rebellion 's influence on the content of the Constitution and the ratification debates continues to be a subject of historical discussion and debate.
In the rural parts of New England, particularly in the hill - towns of central and western Massachusetts, the economy during the American Revolutionary War had been one of little more than subsistence agriculture. Some residents in these areas had little in the way of assets beyond their land and bartered with one another for goods or services. In lean times, farmers might obtain goods on credit from suppliers in local market towns who would be paid when times were better.
In the more economically developed coastal areas of Massachusetts Bay and in the fertile Connecticut River Valley, the economy was basically a market economy, driven by the activities of wholesale merchants dealing with Europe, the West Indies and elsewhere on the North American coast. The state government was dominated by this merchant class.
When the Revolutionary War ended in 1783, the European business partners of Massachusetts merchants refused to extend lines of credit to them and insisted that they pay for goods with hard currency. Despite the continent - wide shortage of such currency, merchants began to demand the same from their local business partners, including those merchants operating in the market towns in the state 's interior. Many of these merchants passed on this demand to their customers, although the popular governor, John Hancock, did not impose hard currency demands on poorer borrowers and refused to actively prosecute the collection of delinquent taxes.
The rural farming population was generally unable to meet the demands being made of them by merchants or the civil authorities, and individuals began to lose their land and other possessions when they were unable to fulfill their debt and tax obligations. This led to strong resentments against tax collectors and the courts, where creditors obtained and enforced judgments against debtors, and where tax collectors obtained judgments authorizing property seizures.
At a meeting convened by aggrieved commoners, a farmer identified as "Plough Jogger '', encapsulated the situation:
"I have been greatly abused, have been obliged to do more than my part in the war, been loaded with class rates, town rates, province rates, Continental rates and all rates... been pulled and hauled by sheriffs, constables and collectors, and had my cattle sold for less than they were worth... The great men are going to get all we have and I think it is time for us to rise and put a stop to it, and have no more courts, nor sheriffs, nor collectors nor lawyers. ''
Overlaid upon these financial issues was the fact that veterans of the war had received little pay during the war and faced difficulty collecting pay owed them from the State or the Congress of the Confederation. Some of the soldiers, Daniel Shays among them, began to organize protests against these oppressive economic conditions. Shays was a farmhand from Massachusetts when the Revolution broke out; he joined the Continental Army, saw action at the Battles of Lexington and Concord, Bunker Hill and Saratoga, and was eventually wounded in action. In 1780, he resigned from the army unpaid and went home to find himself in court for nonpayment of debts. He soon realized that he was not alone in his inability to pay his debts and began organizing for debt relief.
One early protest against the government was led by Job Shattuck of Groton on the New Hampshire border, who in 1782 organized residents there to physically prevent tax collectors from doing their work. A second, larger - scale protest took place in the Massachusetts town of Uxbridge, in Worcester County on the Rhode Island border. On Feb. 3, 1783, a mob seized property that had been confiscated by a local constable and returned it to its owners. Governor Hancock ordered the sheriff to suppress these actions.
Most rural communities attempted to use the legislative process to gain relief. Petitions and proposals were repeatedly submitted to the state legislature to issue paper currency. Such inflationary issues would depreciate the currency, making it possible to meet obligations made at high values with lower - valued paper. The merchants, among them James Bowdoin, were opposed to the idea, since they were generally lenders who stood to lose from such proposals. As a result, these proposals were repeatedly rejected.
Governor Hancock, accused by some of anticipating trouble, resigned citing health reasons in early 1785. When Bowdoin (a perennial loser to Hancock in earlier elections) was elected governor that year, matters became more severe. Bowdoin stepped up civil actions to collect back taxes, and the legislature exacerbated the situation by levying an additional property tax to raise funds for the state 's portion of foreign debt payments. Even comparatively conservative commentators such as John Adams observed that these levies were "heavier than the People could bear. ''
Protests in rural Massachusetts turned into direct action in August 1786, after the state legislature adjourned without considering the many petitions that had been sent to Boston. On August 29 a well - organized force of protestors formed in Northampton and successfully prevented the county court from sitting. The leaders of this and later forces proclaimed that they were seeking relief from the burdensome judicial processes that were depriving the people of their land and possessions. They called themselves Regulators, a reference to the Regulator movement of North Carolina that sought to reform corrupt practices in the late 1760s.
On September 2 Governor Bowdoin issued a proclamation denouncing such mob action, but took no military measures beyond planning militia response to future actions. When the court in Worcester was shut down by similar action on September 5, the county militia (composed mainly of men sympathetic to the protestors) refused to turn out, much to Bowdoin 's amazement. Governors of the neighboring states where similar protests took place acted decisively, calling out the militia to hunt down the ringleaders after the first such protests.
In Rhode Island, matters were resolved without violence because the "country party '' gained control of the legislature in 1786 and enacted measures forcing its merchant elites to trade debt instruments for devalued currency. The impact of this was not lost on Boston 's merchants, especially Bowdoin, who held more than £ 3,000 in Massachusetts notes.
Daniel Shays, who had participated in the Northampton action, began to take a more active role in the uprising in November, though he firmly denied that he was one of its leaders. On September 19, the Supreme Judicial Court of Massachusetts indicted eleven leaders of the rebellion as "disorderly, riotous, and seditious persons. '' When the supreme judicial court was next scheduled to meet in Springfield on September 26, Shays in Hampshire County and Luke Day in what is now Hampden County (but was then part of Hampshire County) organized an attempt to shut it down.
They were anticipated by William Shepard, the local militia commander, who began gathering government - supporting militia the Saturday before the court was to sit. By the time the court was ready to open, Shepard had 300 men protecting the Springfield courthouse. Shays and Day were able to recruit a similar number, but chose only to demonstrate, exercising their troops outside Shepard 's lines, rather than attempt to seize the building. The judges first postponed the hearings, and then adjourned on the 28th without hearing any cases. Shepard withdrew his force, which had grown to some 800 men (to the Regulators ' 1,200), to the federal armory, which was then only rumored to be the target of seizure by the activists.
Protests in Great Barrington, Concord, and Taunton were also successful in shutting courts down in those communities in September and October. James Warren wrote to John Adams on October 22, "We are now in a state of Anarchy and Confusion bordering on Civil War. '' Courts in the larger towns and cities were able to meet, but required protection of the militia, which Bowdoin called out for the purpose.
The Boston elites were mortified at this resistance. Governor Bowdoin commanded the legislature to "vindicate the insulted dignity of government. '' Samuel Adams claimed that foreigners ("British emissaries '') were instigating treason among the commoners, and he helped draw up a Riot Act, and a resolution suspending habeas corpus in order to permit the authorities to keep people in jail without trial. Adams proposed a new legal distinction: that rebellion in a republic, unlike in a monarchy, should be punished by execution.
The legislature also moved to make some concessions to the upset farmers, saying certain old taxes could now be paid in goods instead of hard currency. These measures were followed up by one prohibiting speech critical of the government, and offering pardons to protestors willing to take an oath of allegiance. These legislative actions were unsuccessful in quelling the protests, and the suspension of habeas corpus alarmed many.
In late November warrants were issued for the arrest of several of the protest ringleaders. On November 28 a posse of some 300 men rode to Groton to arrest Job Shattuck and other rebel leaders in the area. Shattuck was chased down and arrested on the 30th, and was wounded by a sword slash in the process. This action and the arrest of other protest leaders in the eastern parts of the state radicalized those in the west, and they began to organize an overthrow of the state government. "The seeds of war are now sown '', wrote one correspondent in Shrewsbury, and by mid-January rebel leaders spoke of smashing the "tyrannical government of Massachusetts. ''
Since the federal government had been unable to recruit soldiers for the army (primarily because of a lack of funding), the Massachusetts elites determined to act independently. On January 4, 1787, Governor Bowdoin proposed creation of a privately funded militia army. Former Continental Army General Benjamin Lincoln solicited funds, and had by the end of January raised more than £ 6,000 from more than 125 merchants. The 3,000 militia that were recruited into this army were almost entirely from the eastern counties of Massachusetts, and marched to Worcester on January 19.
While the government forces assembled, rebel leaders in the west -- including Shays and Day -- organized their forces establishing regional regimental organizations that were run by democratically - elected committees. Their first major target was the federal armory in Springfield. General Shepard had taken possession of the armoury, pursuant to orders from Governor Bowdoin, and used its arsenal to arm a force of some 1,200 militia. He had done this despite the fact that the armory was federal, not state, property, and that he did not have permission from Secretary at War Henry Knox to do so.
The insurgents were organized into three major groups, and intended to surround and simultaneously attack the armory. Shays had one group east of Springfield near Palmer, Luke Day had a second force across the Connecticut River in West Springfield, and the third force, under Eli Parsons, was to the north at Chicopee. The rebels had planned their assault for January 25. Luke Day changed this at the last minute, sending Shays a message indicating he would not be ready to attack until the 26th. Day 's message was intercepted by Shepard 's men, so the militia of Shays and Parsons, some 1,500 men, approached the armory on the 25th not knowing they would have no support from the west.
When Shays and his forces neared the armory, they found Shepard 's militia waiting for them. Shepard first ordered warning shots fired over the approaching Shaysites ' heads, and then ordered two cannons to fire grape shot at Shays ' men. Four Shaysites were killed and twenty wounded. There was no musket fire from either side, and the rebel advance collapsed. Most of the rebel force fled north, eventually regrouping at Amherst. On the opposite side of the river, Day 's forces also fled north, also eventually reaching Amherst.
General Lincoln, when he heard of the Springfield incident, immediately began marching west from Worcester with the 3,000 men that had been mustered. The rebels moved generally north and east to avoid Lincoln, eventually establishing a camp at Petersham; along the way they raided the shops of local merchants for supplies, taking some of them hostage. Lincoln pursued them, reaching Pelham, some 30 miles (48 km) from Petersham, on February 2.
On the night of February 3 -- 4, he led his militia on a forced march to Petersham through a bitter snowstorm. Arriving early in the morning, they surprised the rebel camp so thoroughly that they scattered "without time to call in their out parties or even their guards. '' Although Lincoln claimed to capture 150 men, none of them were officers, leading historian Leonard Richards to suspect the veracity of the report. Most of the leadership escaped north into New Hampshire and Vermont, where they were sheltered despite repeated demands that they be returned to Massachusetts for trial.
Lincoln 's march marked the end of large - scale organized resistance. Ringleaders who eluded capture fled to neighboring states, and pockets of local resistance continued. Some rebel leaders approached Lord Dorchester, the British governor of Quebec, for assistance; Dorchester was reported to have promised assistance in the form of Mohawk warriors led by Joseph Brant. (Dorchester 's proposal was vetoed in London, and no assistance came to the rebels.)
The same day that Lincoln arrived at Petersham, the state legislature passed bills authorizing a state of martial law, giving the governor broad powers to act against the rebels. The bills also authorized state payments to reimburse Lincoln and the merchants who had funded the army, and authorized the recruitment of additional militia. On February 12 the legislature passed the Disqualification Act, seeking to prevent a legislative response by rebel sympathizers. This bill expressly forbade any acknowledged rebels from holding a variety of elected and appointed offices.
Most of Lincoln 's army melted away in late February as enlistments expired; by the end of the month he commanded but thirty men at a base in Pittsfield. In the meantime some 120 rebels had regrouped in New Lebanon, New York, and on February 27 they crossed the border. Marching first on Stockbridge, a major market town in the southwestern corner of the state, they raided the shops of merchants and the homes of merchants and local professionals. This came to the attention of Brigadier John Ashley, who mustered a force of some 80 men, and caught up with the rebels in nearby Sheffield late in the day. In the bloodiest encounter of the rebellion, 30 rebels were wounded (one mortally), at least one government soldier was killed, and many were wounded. Ashley, who was further reinforced after the encounter, reported taking 150 prisoners.
Four thousand people signed confessions acknowledging participation in the events of the rebellion in exchange for amnesty. Several hundred participants were eventually indicted on charges relating to the rebellion. Most of these were pardoned under a general amnesty that only excluded a few ringleaders. Eighteen men were convicted and sentenced to death. Most of these were either overturned on appeal, pardoned, or had their sentences commuted. Two of the condemned men, John Bly and Charles Rose, were hanged on December 6, 1787.
Bly and Rose were also accused of common - law crime as both were looters. Shays was pardoned in 1788 and he returned to Massachusetts from hiding in the Vermont woods. He was vilified by the Boston press, who painted him as an archetypal anarchist opposed to the government. He later moved to the Conesus, New York area, where he lived until he died poor and obscure in 1825.
The crushing of the rebellion and the harsh terms of reconciliation imposed by the Disqualification Act all worked against Governor Bowdoin politically. In the gubernatorial election held in April 1787, Bowdoin received few votes from the rural parts of the state, and was trounced by John Hancock. The military victory was tempered by tax changes in subsequent years. The legislature elected in 1787 cut taxes and placed a moratorium on debts. It also refocused state spending away from interest payments, resulting in a 30 % decline in the value of Massachusetts securities as those payments fell in arrears.
Vermont, then an unrecognized independent republic that had been seeking statehood independent from New York 's claims to the territory, became an unexpected beneficiary of the rebellion due to its sheltering of the rebel ringleaders. Alexander Hamilton broke from other New Yorkers, including major landowners with claims on Vermont territory, calling for the state to recognize and support Vermont 's bid for admission to the union. He cited Vermont 's de facto independence and its ability to cause trouble by providing support to the discontented from neighboring states as reasons, and introduced legislation that broke the impasse between New York and Vermont. Vermonters responded favorably to the overture, publicly pushing Eli Parsons and Luke Day out of the state (but quietly continuing to support others). After negotiations with New York and the passage of the new constitution, Vermont became the fourteenth state.
Thomas Jefferson, who was serving as ambassador to France at the time, refused to be alarmed by Shays ' Rebellion. In a letter to James Madison on January 30, 1787, he argued that a little rebellion now and then is a good thing. "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. It is its natural manure. ''
In contrast to Jefferson 's sentiments George Washington, who had been calling for constitutional reform for many years, wrote in a letter to Henry Lee, "You talk, my good sir, of employing influence to appease the present tumults in Massachusetts. I know not where that influence is to be found, or, if attainable, that it would be a proper remedy for the disorders. Influence is not government. Let us have a government by which our lives, liberties, and properties will be secured, or let us know the worst at once. ''
At the time of the rebellion, the weaknesses of the federal government as constituted under the Articles of Confederation were apparent to many. A vigorous debate was going on throughout the states on the need for a stronger central government, with Federalists arguing for the idea, and Anti-Federalists opposing them. Historical opinion is divided on what sort of role the rebellion played in the formation and later ratification of the United States Constitution, although most scholars agree it played some role, at least temporarily drawing some anti-Federalists to the strong government side.
By early 1785 many influential merchants and political leaders were already agreed that a stronger central government was needed. A convention at Annapolis, Maryland, in September 1786 of delegates from five states concluded that vigorous steps needed to be taken to reform the federal government, but it disbanded because of a lack of full representation, calling for a convention of all the states to be held in Philadelphia in May 1787. Historian Robert Feer notes that several prominent figures had hoped that convention would fail, requiring a larger - scale convention, and French diplomat Louis - Guillaume Otto thought the convention was intentionally broken off early to achieve this end.
In early 1787 John Jay wrote that the rural disturbances and the inability of the central government to fund troops in response made "the inefficiency of the Federal government (become) more and more manifest. '' Henry Knox observed that the uprising in Massachusetts clearly influenced local leaders who had previously opposed a strong federal government. Historian David Szatmary writes that the timing of the rebellion "convinced the elites of sovereign states that the proposed gathering at Philadelphia must take place. '' Some states, Massachusetts among them, delayed choosing delegates to the proposed convention, in part because in some ways it resembled the "extra-legal '' conventions organized by the protestors before the rebellion became violent.
The convention that met in Philadelphia was dominated by strong - government advocates. Delegate Oliver Ellsworth of Connecticut argued that because the people could not be trusted (as exemplified by Shays ' Rebellion), the members of the federal House of Representatives should be chosen by state legislatures, not by popular vote. The example of Shays ' Rebellion may also have been influential in the addition of language to the constitution concerning the ability of states to manage domestic violence, and their ability to demand the return of individuals from other states for trial.
The rebellion also played a role in the debate over the shape of the executive. While mindful of tyranny, delegates of the Constitutional Convention thought that a single executive would be more effective in responding to national disturbances.
Federalists cited the rebellion as an example of the confederation government 's weaknesses, while opponents such as Elbridge Gerry thought that a federal response to the rebellion would have been even worse than that of the state. (Gerry, a merchant speculator and Massachusetts delegate from Essex County, was one of the few convention delegates who refused to sign the new constitution, although his reasons for doing so did not stem from the rebellion.)
When the constitution had been drafted, Massachusetts was viewed by Federalists as a state that might not ratify it, because of widespread anti-Federalist sentiment in the rural parts of the state. Massachusetts Federalists, including Henry Knox, were active in courting swing votes in the debates leading up to the state 's ratifying convention in 1788. When the vote was taken on February 6, 1788, representatives of rural communities involved in the rebellion voted against ratification by a wide margin, but the day was carried by a coalition of merchants, urban elites, and market town leaders. The state ratified the constitution by a vote of 187 to 168.
Historians are divided on the impact the rebellion had on the ratification debates. Robert Feer notes that major Federalist pamphleteers rarely mentioned it, and that some anti-Federalists used the fact that Massachusetts survived the rebellion as evidence that a new constitution was unnecessary. Leonard Richards counters that publications like the Pennsylvania Gazette explicitly tied anti-Federalist opinion to the rebel cause, calling opponents of the new constitution "Shaysites '' and the Federalists "Washingtonians ''.
David Szatmary argues that debate in some states was affected, particularly in Massachusetts, where the rebellion had a polarizing effect. Richards records Henry Jackson 's observation that opposition to ratification in Massachusetts was motivated by "that cursed spirit of insurgency '', but that broader opposition in other states originated in other constitutional concerns expressed by Elbridge Gerry, who published a widely distributed pamphlet outlining his concerns about the vagueness of some of the powers granted in the constitution and its lack of a Bill of Rights.
The military powers enshrined in the constitution were soon put to use by President George Washington. After the passage by the United States Congress of the Whiskey Act, protest against the taxes it imposed began in western Pennsylvania. The protests escalated and Washington led federal and state militia to put down what is now known as the Whiskey Rebellion.
The events and people of the uprising are commemorated in the towns where they lived and those where events took place. Sheffield erected a memorial (pictured above) marking the site of the "last battle '', and Pelham memorialized Daniel Shays. US Route 202, which runs through Pelham, is called the Daniel Shays Highway. A statue of General Shepard was erected in his hometown of Westfield.
In the town of Petersham, Massachusetts, a memorial was erected in 1927 by the New England Society of Brooklyn, New York. The memorial commemorates General Benjamin Lincoln, who raised 3,000 troops and routed the rebellion on February 4, 1787. It ends with the line, "Obedience to the law is true liberty. ''
|
what is the name of the welsh language | Welsh language - wikipedia
All UK speakers: 700,000 + (2012)
Welsh (Cymraeg or y Gymraeg (kəmˈrai̯ɡ, ə ɡəmˈrai̯ɡ)) is a member of the Brittonic branch of the Celtic languages. It is spoken natively in Wales, by some in England, and in Y Wladfa (the Welsh colony in Chubut Province, Argentina). Historically, it has also been known in English as "Cambrian '', "Cambric '' and "Cymric ''.
In the United Kingdom Census 2011, 19.0 per cent of usual residents in Wales aged three and over reported that they could speak Welsh. In the 2001 Census, 20.8 per cent of the population aged three and over reported that they could speak Welsh. The censuses therefore suggest a decrease in the number of Welsh speakers in Wales from 2001 to 2011, from approximately 582,000 to 562,000. However, according to the Welsh Language Use Survey 2013 -- 15, 24 per cent of people aged three and over living in Wales were able to speak Welsh, suggesting a possible increase in the prevalence of the Welsh language since the 2011 Census.
The Welsh Language (Wales) Measure 2011 gave the Welsh language official status in Wales, making it the only language that is de jure official in any part of the United Kingdom, with English being de facto official. The Welsh language, along with English, is also a de jure official language of the National Assembly for Wales.
The language of the Welsh developed from the language of the Britons towards the end of the 6th century. Before this, three distinct languages were spoken in Britain during the 5th and 6th centuries: Latin, Irish, and British. According to T.M. Charles - Edwards, the emergence of Welsh as a distinct language occurred towards the end of this period. The emergence of Welsh was not instantaneous or clearly identifiable; instead, the shift occurred over a long period of time, with some historians claiming that it happened as late as the 9th century. Kenneth H. Jackson proposed a more general time period for the emergence, specifically after the Battle of Dyrham, a military battle between the West Saxons and the Britons in 577 AD.
Four periods are identified in the history of Welsh, with rather indistinct boundaries: Primitive Welsh, Old Welsh, Middle Welsh, and Modern Welsh. The period immediately following the language 's emergence is sometimes referred to as Primitive Welsh, followed by the Old Welsh period -- which is generally considered to stretch from the beginning of the 9th century to sometime during the 12th century. The Middle Welsh period is considered to have lasted from then until the 14th century, when the Modern Welsh period began, which in turn is divided into Early and Late Modern Welsh.
The name Welsh originated as an exonym given to its speakers by the Anglo - Saxons, meaning "foreign speech '' (see Walha), and the native term for the language is Cymraeg.
Welsh evolved from Common Brittonic, the Celtic language spoken by the ancient Celtic Britons. Classified as Insular Celtic, the British language probably arrived in Britain during the Bronze Age or Iron Age and was probably spoken throughout the island south of the Firth of Forth. During the Early Middle Ages the British language began to fragment due to increased dialect differentiation, thus evolving into Welsh and the other Brittonic languages. It is not clear when Welsh became distinct.
Kenneth H. Jackson suggested that the evolution in syllabic structure and sound pattern was complete by around 550, and labelled the period between then and about 800 "Primitive Welsh ''. This Primitive Welsh may have been spoken in both Wales and the Hen Ogledd ("Old North '') - the Brittonic - speaking areas of what is now northern England and southern Scotland - and therefore may have been the ancestor of Cumbric as well as Welsh. Jackson, however, believed that the two varieties were already distinct by that time. The earliest Welsh poetry -- that attributed to the Cynfeirdd or "Early Poets '' -- is generally considered to date to the Primitive Welsh period. However, much of this poetry was supposedly composed in the Hen Ogledd, raising further questions about the dating of the material and the language in which it was originally composed. This uncertainty stems from the fact that Cumbric was widely believed to have been the language used in the Hen Ogledd. An 8th century inscription in Tywyn shows the language already dropping inflections in the declension of nouns.
Janet Davies proposed that the origins of the Welsh language were much less definite; in The Welsh Language: A History, she proposes that Welsh may have been around even earlier than 600 AD. This is evidenced by the dropping of final syllables from Brittonic: * bardos "poet '' became bardd, and * abona "river '' became afon. Though both Davies and Jackson cite minor changes in syllable structure and sounds as evidence for the creation of Old Welsh, Davies suggests it may be more appropriate to refer to this derivative language as Lingua Brittanica rather than characterizing it as a new language altogether.
The argued dates for the period of "Primitive Welsh '' are widely debated, with some historians ' suggestions differing by hundreds of years.
The next main period is Old Welsh (Hen Gymraeg, 9th to 11th centuries); poetry from both Wales and Scotland has been preserved in this form of the language. As Germanic and Gaelic colonisation of Britain proceeded, the Brittonic speakers in Wales were split off from those in northern England, speaking Cumbric, and those in the southwest, speaking what would become Cornish, and so the languages diverged. Both the works of Aneirin (Canu Aneirin, c. 600) and the Book of Taliesin (Canu Taliesin) were written during this era.
Middle Welsh (Cymraeg Canol) is the label attached to the Welsh of the 12th to 14th centuries, of which much more remains than for any earlier period. This is the language of nearly all surviving early manuscripts of the Mabinogion, although the tales themselves are certainly much older. It is also the language of the existing Welsh law manuscripts. Middle Welsh is reasonably intelligible to a modern - day Welsh speaker.
The famous cleric Gerald of Wales tells, in his Descriptio Cambriae, a story of King Henry II of England. During one of the King 's many raids in the 12th century, Henry asked an old man of Pencader, Carmarthenshire whether the Welsh people could resist his army. The old man replied:
It can never be destroyed through the wrath of man, unless the wrath of God shall concur. Nor do I think that any other nation than this of Wales, nor any other language, whatever may hereafter come to pass, shall in the day of reckoning before the Supreme Judge, answer for this corner of the Earth.
Modern Welsh is subdivided into Early Modern Welsh and Late Modern Welsh. Early Modern Welsh ran from the 15th century through to the end of the 16th century, and the Late Modern Welsh period roughly dates from the 16th century onwards. Contemporary Welsh still differs greatly from the Welsh of the 16th century, but they are similar enough that a fluent Welsh speaker should have little trouble understanding it. The Modern Welsh period saw a decline in the use of the language, with the number of Welsh speakers falling to the point where there was concern that the language would become extinct entirely. Welsh government processes and legislation have since worked to increase the spread of the Welsh language through school projects and the like.
The Bible translations into Welsh helped maintain the use of Welsh in daily life. The New Testament was translated by William Salesbury in 1567 followed by the complete Bible by William Morgan in 1588.
Welsh has been spoken continuously in Wales throughout recorded history, but by 1911 it had become a minority language, spoken by 43.5 per cent of the population. While this decline continued over the following decades, the language did not die out. By the start of the 21st century, numbers began to increase once more, at least partly as a result of the increase in Welsh medium education.
The 2004 Welsh Language Use Survey showed that 21.7 per cent of the population of Wales spoke Welsh, compared with 20.8 per cent in the 2001 Census, and 18.5 per cent in the 1991 Census. The 2011 Census, however, showed a slight decline to 562,000, or 19 per cent of the population. The census also showed a "big drop '' in the number of speakers in the Welsh - speaking heartlands, with the number dropping to under 50 per cent in Ceredigion and Carmarthenshire for the first time. However, according to the Welsh Language Use Survey in 2013 - 15, 24 per cent of people aged three and over were able to speak Welsh.
Historically, large numbers of Welsh people spoke only Welsh. Over the course of the 20th century this monolingual population "all but disappeared '', but a small percentage remained at the time of the 1981 census. Most Welsh - speaking people in Wales also speak English (while in Chubut Province, Argentina, most speakers can speak Spanish -- see Y Wladfa). However, many Welsh - speaking people are more comfortable expressing themselves in Welsh than in English. A speaker 's choice of language can vary according to the subject domain and the social context, even within a single discourse (known in linguistics as code - switching).
Welsh speakers are largely concentrated in the north and west of Wales, principally Gwynedd, Conwy, Denbighshire (Sir Ddinbych), Anglesey (Ynys Môn), Carmarthenshire (Sir Gâr), north Pembrokeshire (Sir Benfro), Ceredigion, parts of Glamorgan (Morgannwg), and north - west and extreme south - west Powys. However, first - language and other fluent speakers can be found throughout Wales.
Welsh - speaking communities persisted well on into the modern period across the border with England. Archenfield was still Welsh enough in the time of Elizabeth I for the Bishop of Hereford to be made responsible, together with the four Welsh bishops, for the translation of the Bible and the Book of Common Prayer into Welsh. Welsh was still commonly spoken here in the first half of the 19th century, and churchwardens ' notices were put up in both Welsh and English until about 1860. In one of the earliest works of phonetics, On Early English Pronunciation in 1889, Alexander John Ellis identified a small part of Shropshire as still speaking Welsh, and plotted a Celtic border that passed from Llanymynech to Chirk through Oswestry.
The number of Welsh - speaking people in the rest of Britain has not yet been counted for statistical purposes. In 1993, the Welsh - language television channel S4C published the results of a survey into the numbers of people who spoke or understood Welsh, which estimated that there were around 133,000 Welsh - speaking people living in England, about 50,000 of them in the Greater London area. The Welsh Language Board, on the basis of an analysis of the Office for National Statistics (ONS) Longitudinal Study, estimated there were 110,000 Welsh - speaking people in England, and another thousand in Scotland and Northern Ireland. In the 2011 Census, 8,248 people in England gave Welsh in answer to the question "What is your main language? '' The ONS subsequently published a census glossary of terms to support the release of results from the census, including their definition of "main language '' as referring to "first or preferred language '' (though that wording was not in the census questionnaire itself). The wards in England with the most people giving Welsh as their main language were the Liverpool wards: Central and Greenbank, and Oswestry South. In terms of the regions of England, North West England (1,945), London (1,310) and the West Midlands (1,265) had the highest number of people noting Welsh as their main language.
The American Community Survey 2009 -- 2013 noted that 2,235 people aged 5 years and over in the United States spoke Welsh at home. The highest number of those (255) lived in Florida.
Although Welsh is a minority language, support for it grew during the second half of the 20th century, along with the rise of organisations such as the nationalist political party Plaid Cymru from 1925 and the Welsh Language Society from 1962.
The Welsh Language Act 1993 and the Government of Wales Act 1998 provide that the Welsh and English languages be treated equally in the public sector, as far as is reasonable and practicable. Each public body is required to prepare for approval a Welsh Language Scheme, which indicates its commitment to the equality of treatment principle. This is sent out in draft form for public consultation for a three - month period, whereupon comments on it may be incorporated into a final version. It requires the final approval of the now defunct Welsh Language Board (Bwrdd yr Iaith Gymraeg). Thereafter, the public body is charged with implementing and fulfilling its obligations under the Welsh Language Scheme. The list of other public bodies which have to prepare Schemes could be added to by statutory instrument, initially by the Secretary of State for Wales from 1993 to 1997. Since the formation of the National Assembly for Wales in 1997, the Government Minister responsible for the Welsh language has been able to pass, and has passed, statutory instruments naming public bodies which have to prepare Schemes. Neither the 1993 Act nor secondary legislation made under it covers the private sector, although some organisations, notably banks and some railway companies, provide some of their information in Welsh.
On 7 December 2010, the Welsh Assembly unanimously approved a set of measures to develop the use of the Welsh language within Wales. On 9 February 2011 this measure, the Proposed Welsh Language (Wales) Measure 2011, was passed and received Royal Assent, thus making the Welsh language an officially recognised language within Wales. Among other provisions, the Measure confirmed the official status of the Welsh language and created the office of Welsh Language Commissioner.
With the passing of this measure, public bodies and some private companies are required to provide services in Welsh. The Welsh government 's Minister for Heritage at the time, Alun Ffred Jones, said, "The Welsh language is a source of great pride for the people of Wales, whether they speak it or not, and I am delighted that this Measure has now become law. I am very proud to have steered legislation through the Assembly which confirms the official status of the Welsh language; which creates a strong advocate for Welsh speakers and will improve the quality and quantity of services available through the medium of Welsh. I believe that everyone who wants to access services in the Welsh language should be able to do so, and that is what this government has worked towards. This legislation is an important and historic step forward for the language, its speakers and for the nation. '' The measure was not welcomed warmly by all supporters: Bethan Williams, chairperson of the Welsh Language Society, gave a mixed response to the move, saying, "Through this measure we have won official status for the language and that has been warmly welcomed. But there was a core principle missing in the law passed by the Assembly before Christmas. It does n't give language rights to the people of Wales in every aspect of their lives. Despite that, an amendment to that effect was supported by 18 Assembly Members from three different parties, and that was a significant step forward. ''
On 5 October 2011, Meri Huws, Chair of the Welsh Language Board, was appointed the new Welsh Language Commissioner. She released a statement that she was "delighted '' to have been appointed to the "hugely important role '', adding, "I look forward to working with the Welsh Government and organisations in Wales in developing the new system of standards. I will look to build on the good work that has been done by the Welsh Language Board and others to strengthen the Welsh language and ensure that it continues to thrive. '' First Minister Carwyn Jones said that Meri would act as a champion for the Welsh language, though some had concerns over her appointment: Plaid Cymru spokeswoman Bethan Jenkins said, "I have concerns about the transition from Meri Huws 's role from the Welsh Language Board to the language commissioner, and I will be asking the Welsh government how this will be successfully managed. We must be sure that there is no conflict of interest, and that the Welsh Language Commissioner can demonstrate how she will offer the required fresh approach to this new role. '' Ms Huws started her role as the Welsh Language Commissioner on 1 April 2012.
Local councils and the National Assembly for Wales use Welsh, issuing Welsh versions of their literature, to varying degrees.
Most road signs in Wales are in English and Welsh.
Since 2000, the teaching of Welsh has been compulsory in all schools in Wales up to age 16. That has had an effect in stabilising and reversing the decline in the language. It means, for example, that even the children of non-Welsh - speaking parents from elsewhere in the UK grow up with a knowledge of, or complete fluency in, the language.
The wording on currency is only in English, except in the legend on Welsh pound coins dated 1985, 1990 and 1995, which circulate in all parts of the UK. The wording is Pleidiol wyf i 'm gwlad, which means True am I to my country, and derives from the national anthem of Wales, Hen Wlad Fy Nhadau.
Some shops employ bilingual signage. Welsh rarely appears on product packaging or instructions.
The UK government has ratified the European Charter for Regional or Minority Languages in respect of Welsh.
The language has greatly increased its prominence since the creation of the television channel S4C in November 1982, which until digital switchover in 2010 broadcast 70 per cent of Channel 4 's programming along with a majority of Welsh language shows during peak viewing hours. The all - Welsh - language digital station S4C Digidol is available throughout Europe on satellite and online throughout the UK. Since the digital switchover was completed in South Wales on 31 March 2010, S4C Digidol has been the main broadcasting channel, transmitting entirely in Welsh. The main evening television news provided by the BBC in Welsh is available for download. There is also a Welsh - language radio station, BBC Radio Cymru, which was launched in 1977.
The only Welsh - language national newspaper Y Cymro (The Welshman) was published weekly until 2017. There is no daily newspaper in Welsh. A daily newspaper called Y Byd (The World) was scheduled to be launched on 3 March 2008, but was scrapped, owing to poor sales of subscriptions and the Welsh Government deeming the publication not to meet the criteria necessary for the kind of public funding it needed to be rescued. There is a Welsh - language online news service which publishes news stories in Welsh called Golwg360 ("360 (degree) view '').
The decade around 1840 was a period of great social upheaval in Wales, manifested in the Chartist movement and in the Rebecca Riots, in which tollbooths on turnpikes were systematically destroyed. In 1839, 20,000 Chartists marched on Newport, resulting in a riot in which 20 people were killed by soldiers defending the Westgate Hotel.
This unrest brought the state of education in Wales to the attention of the English establishment since social reformers of the time considered education as a means of dealing with social ills. The Times newspaper was prominent among those who considered that the lack of education of the Welsh people was the root cause of most of the problems.
In July 1846, three commissioners, R.R.W. Lingen, Jellynger C. Symons and H.R. Vaughan Johnson, were appointed to inquire into the state of education in Wales; the Commissioners were all Anglicans and thus presumed unsympathetic to the nonconformist majority in Wales. The Commissioners presented their report to the Government on 1 July 1847 in three large blue - bound volumes. This report quickly became known as the Treachery of the Blue Books (Brad y Llyfrau Gleision) since, apart from documenting the state of education in Wales, the Commissioners were also free with their comments disparaging the language, nonconformity, and the morals of the Welsh people in general. An immediate effect of the report was that ordinary Welsh people began to believe that the only way to get on in the world was through the medium of English, and an inferiority complex developed about the Welsh language whose effects have not yet been completely eradicated. The historian Professor Kenneth O. Morgan referred to the significance of the report and its consequences as "the Glencoe and the Amritsar of Welsh history ''.
In the later 19th century, virtually all teaching in the schools of Wales was in English, even in areas where the pupils barely understood English. Some schools used the Welsh Not, a piece of wood, often bearing the letters "WN '', which was hung around the neck of any pupil caught speaking Welsh. The pupil could pass it on to any schoolmate heard speaking Welsh, with the pupil wearing it at the end of the day being given a beating. One of the most famous Welsh - born pioneers of higher education in Wales was Sir Hugh Owen. He made great progress in the cause of education, and more especially the University College of Wales at Aberystwyth, of which he was chief founder. He has been credited with the Welsh Intermediate Education Act 1889 (52 & 53 Vict c 40), following which several new Welsh schools were built. The first was completed in 1894 and named Ysgol Syr Hugh Owen.
Towards the beginning of the 20th century this policy slowly began to change, partly owing to the efforts of Owen Morgan Edwards when he became chief inspector of schools for Wales in 1907.
The Aberystwyth Welsh School (Ysgol Gymraeg Aberystwyth) was founded in 1939 by Sir Ifan ap Owen Edwards, the son of O.M. Edwards, as the first Welsh Primary School. The headteacher was Norah Isaac. Ysgol Gymraeg is still a very successful school, and now there are Welsh language primary schools all over the country. Ysgol Glan Clwyd was established in Rhyl in 1955 as the first Welsh language school to teach at the secondary level.
Welsh is now widely used in education, with 101,345 children and young people in Wales receiving their education in Welsh medium schools in 2014 / 15, 65,460 in primary and 35,885 in secondary. 26 per cent of all schools in Wales are defined as Welsh medium schools, with a further 7.3 per cent offering some Welsh - medium instruction to pupils. 22 per cent of pupils are in schools in which Welsh is the primary language of instruction. Under the National Curriculum, it is compulsory that all students study Welsh up to the age of 16 as either a first or a second language. Some students choose to continue with their studies through the medium of Welsh for the completion of their A-levels as well as during their college years. All local education authorities in Wales have schools providing bilingual or Welsh - medium education. The remainder study Welsh as a second language in English - medium schools. Specialist teachers of Welsh called Athrawon Bro support the teaching of Welsh in the National Curriculum. Welsh is also taught in adult education classes. The Welsh Government has recently set up six centres of excellence in the teaching of Welsh for Adults, with centres in North Wales, Mid Wales, the South West, Glamorgan, Gwent and Cardiff.
The ability to speak Welsh or to have Welsh as a qualification is desirable for certain career choices in Wales, such as teaching or customer service. All universities in Wales teach courses in the language, with many undergraduate and post-graduate degree programs offered in the medium of Welsh, ranging from law, modern languages, social sciences, and also other sciences such as biological sciences. Aberystwyth, Cardiff, Bangor, and Swansea have all had chairs in Welsh since their virtual establishment, and all their schools of Welsh are successful centres for the study of the Welsh language and its literature, offering a BA in Welsh as well as post-graduate courses. At all Welsh universities and the Open University, students have the right to submit assessed work and sit exams in Welsh even if the course was taught in English (usually the only exception is where the course requires demonstrating proficiency in another language). Following a commitment made in the One Wales coalition government between Labour and Plaid Cymru, the Coleg Cymraeg Cenedlaethol (Welsh Language National College) was established. The purpose of the federal structured college, spread out between all the universities of Wales, is to provide and also advance Welsh medium courses and Welsh medium scholarship and research in Welsh universities. There is also a Welsh - medium academic journal called Gwerddon ("Oasis ''), which is a platform for academic research in Welsh and is published quarterly. There have been calls for more teaching of Welsh in English - medium schools.
Like many of the world 's languages, the Welsh language has seen an increased use and presence on the internet, ranging from formal lists of terminology in a variety of fields to Welsh language interfaces for Windows 7, Microsoft Windows XP, Vista, Microsoft Office, LibreOffice, OpenOffice.org, Mozilla Firefox and a variety of Linux distributions, and on - line services to blogs kept in Welsh. Wikipedia has had a Welsh version since July 2003 and Facebook since 2009.
In 2006 the Welsh Language Board launched a free software pack which enabled the use of SMS predictive text in Welsh. At the National Eisteddfod of Wales 2009, a further announcement was made by the Welsh Language Board that the mobile phone company Samsung was to work with the network provider Orange to provide the first mobile phone in the Welsh language, with the interface and the T9 dictionary on the Samsung S5600 available in the Welsh language. The model, available with the Welsh language interface, has been available since 1 September 2009, with plans to introduce it on other networks.
On Android devices, both the built - in Google Keyboard and user - created keyboards can be used. iOS devices have fully supported the Welsh language since the release of iOS 8 in September 2014. Users can switch their device to Welsh to access apps that are available in Welsh. Date and time on iOS is also localised, as shown by the built - in Calendar application, as well as certain third party apps that have been localised.
Secure communications are often difficult to achieve in wartime. Cryptography can be used to protect messages, but codes can be broken. Therefore, lesser - known languages are sometimes encoded, so that even if the code is broken, the message is still in a language few people know. For example, Navajo code talkers were used by the United States military during World War II. Similarly, the Royal Welch Fusiliers, a Welsh regiment serving in Bosnia, used Welsh for emergency communications that needed to be secure. It has been reported that Welsh speakers from Wales and from Patagonia fought on both sides in the Falklands War.
In 2017, parliamentary rules were amended to allow the use of Welsh when the Welsh Grand Committee meets at Westminster. The change did not alter the rules about debates within the House of Commons, where only English can be used.
In February 2018, Welsh was first used when the Welsh Secretary, Alun Cairns, delivered his welcoming speech at a sitting of the committee. He said, "I am proud to be using the language I grew up speaking, which is not only important to me, my family and the communities Welsh MPs represent, but is also an integral part of Welsh history and culture ''.
In November 2008, the Welsh language was used at a meeting of the European Union 's Council of Ministers for the first time. The Heritage Minister Alun Ffred Jones addressed his audience in Welsh and his words were interpreted into the EU 's 23 official languages. The official use of the language followed years of campaigning. Jones said "In the UK we have one of the world 's major languages, English, as the mother tongue of many. But there is a diversity of languages within our islands. I am proud to be speaking to you in one of the oldest of these, Welsh, the language of Wales. '' He described the breakthrough as "more than (merely) symbolic '' saying "Welsh might be one of the oldest languages to be used in the UK, but it remains one of the most vibrant. Our literature, our arts, our festivals, our great tradition of song all find expression through our language. And this is a powerful demonstration of how our culture, the very essence of who we are, is expressed through language. ''
A greeting in Welsh is one of the 55 languages included on the Voyager Golden Record chosen to be representative of Earth in NASA 's Voyager program launched in 1977. The greetings are unique to each language, with the Welsh greeting being Iechyd da i chwi yn awr ac yn oesoedd, which translates into English as "Good health to you now and forever ''.
Welsh vocabulary draws mainly from original Brittonic words (wy "egg '', carreg "stone ''), with some loans from Latin (ffenestr "window '' < Latin fenestra, gwin "wine '' < Latin vinum), and English (silff "shelf '', giat "gate '').
The phonology of Welsh includes a number of sounds that do not occur in English and are typologically rare in European languages. The voiceless alveolar lateral fricative (ɬ), the voiceless nasals (m̥), (n̥) and (ŋ̊), and the voiceless alveolar trill (r̥) are distinctive features of the Welsh language. Stress usually falls on the penultimate syllable in polysyllabic words, and the word - final unstressed syllable receives a higher pitch than the stressed syllable.
Welsh is written in a Latin alphabet traditionally consisting of 29 letters, of which eight are digraphs treated as single letters for collation: a, b, c, ch, d, dd, e, f, ff, g, ng, h, i, j, l, ll, m, n, o, p, ph, r, rh, s, t, th, u, w, y.
In contrast to English practice, "w '' and "y '' are considered vowel letters in Welsh along with "a '', "e '', "i '', "o '' and "u ''.
The letter "j '' is used in many everyday words borrowed from English, like jam, jôc "joke '' and garej "garage ''. The letters "k '', "q '', "v '', "x '', and "z '' are used in some technical terms, like kilogram, volt and zero, but in all cases can be, and often are, replaced by Welsh letters: cilogram, folt and sero. The letter "k '' was in common use until the sixteenth century, but was dropped at the time of the publication of the New Testament in Welsh, as William Salesbury explained: "C for K, because the printers have not so many as the Welsh requireth ''. This change was not popular at the time.
The most common diacritic is the circumflex, which disambiguates long vowels, most often in the case of homographs, where the vowel is short in one word and long in the other: e.g. man "place '' vs mân "fine '', "small ''.
Welsh morphology has much in common with that of the other modern Insular Celtic languages, such as the use of initial consonant mutations and of so - called "conjugated prepositions '' (prepositions that fuse with the personal pronouns that are their object). Welsh nouns belong to one of two grammatical genders, masculine and feminine, but they are not inflected for case. Welsh has a variety of different endings and other methods to indicate the plural, and two endings to indicate the singular of some nouns. In spoken Welsh, verbal features are indicated primarily by the use of auxiliary verbs rather than by the inflection of the main verb. In literary Welsh, on the other hand, inflection of the main verb is usual.
The canonical word order in Welsh is verb -- subject -- object.
Colloquial Welsh inclines very strongly towards the use of auxiliaries with its verbs, as in English. The present tense is constructed with bod ("to be '') as an auxiliary verb, with the main verb appearing as a verbnoun (used in a way loosely equivalent to an infinitive) after the particle yn: Mae Siân yn mynd i Lanelli ("Siân is going to Llanelli '').
There, mae is a third - person singular present indicative form of bod, and mynd is the verbnoun meaning "to go ''. The imperfect is constructed in a similar manner, as are the periphrastic forms of the future and conditional tenses.
In the preterite, future and conditional mood tenses, there are inflected forms of all verbs, which are used in the written language. However, speech now more commonly uses the verbnoun together with an inflected form of gwneud ("do ''), so "I went '' can be Mi es i or Mi wnes i fynd ("I did go ''). Mi is an example of a preverbal particle; such particles are common in Welsh.
Welsh lacks separate pronouns for constructing subordinate clauses; instead, special verb forms or relative pronouns that appear identical to some preverbal particles are used.
The Welsh for "I like Rhodri '' is Dw i'n hoffi Rhodri (word for word, "am I in (the) liking (of) Rhodri ''), with Rhodri in a possessive relationship with hoffi. With personal pronouns, the possessive form of the personal pronoun is used, as in "I like him '': (Dw i'n ei hoffi), literally, "am I in his liking '' -- "I like you '' is (Dw i'n dy hoffi) ("am I in your liking '').
In colloquial Welsh, possessive pronouns, whether they are used to mean "my '', "your '', etc. or to indicate the direct object of a verbnoun, are commonly reinforced by the use of the corresponding personal pronoun after the noun or verbnoun: ei dŷ e "his house '' (literally "his house of him ''), Dw i'n dy hoffi di "I like you '' ("I am (engaged in the action of) your liking of you ''), etc. It should be noted that the "reinforcement '' (or, simply, "redoubling '') adds no emphasis in the colloquial register. While the possessive pronoun alone may be used, especially in more formal registers, as shown above, it is considered incorrect to use only the personal pronoun. Such usage is nevertheless sometimes heard in very colloquial speech, mainly among young speakers: Ble ' dyn ni'n mynd? Tŷ ti neu dŷ fi? ("Where are we going? Your house or my house? '').
The traditional counting system used in the Welsh language is vigesimal, i.e. it is based on twenties, as in standard French numbers 70 (soixante - dix, literally "sixty - ten '') to 99 (quatre - vingt - dix - neuf, literally "four score nineteen ''). Welsh numbers from 11 to 14 are "x on ten '' (e.g. un ar ddeg: 11), 16 to 19 are "x on fifteen '' (e.g. un ar bymtheg: 16), though 18 is deunaw, "two nines ''; numbers from 21 to 39 are "1 -- 19 on twenty '' (e.g. Deg ar hugain: 30), 40 is deugain "two twenties '', 60 is trigain "three twenties '', etc. This form continues to be used, especially by older people, and it is obligatory in certain circumstances (such as telling the time, and in ordinal numbers).
There is also a decimal counting system, which has become relatively widely used, though less so in giving the time, ages, and dates (it features no ordinal numbers). This system is in especially common use in schools due to its simplicity, and in Patagonian Welsh. Whereas 39 in the vigesimal system is pedwar ar bymtheg ar hugain ("four on fifteen on twenty '') or even deugain namyn un ("two score minus one ''), in the decimal system it is tri deg naw ("three tens nine '').
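The contrast between the two systems can be illustrated with a short sketch. The following Python snippet is a minimal illustration based only on the forms quoted above; it is not a complete Welsh numeral generator, and it deliberately ignores the irregular traditional forms (such as deuddeg for 12) and the initial consonant mutations that numerals trigger in actual usage.

```python
# A minimal illustrative sketch (not a full Welsh numeral generator) contrasting
# the two counting systems described above. The vigesimal forms are those quoted
# in the text; the decimal forms are assembled from the unit words. Contracted
# variants (pum / chwe before "deg") are included, but consonant mutations and
# irregular traditional forms (e.g. deuddeg "12") are deliberately ignored.

UNITS = {1: "un", 2: "dau", 3: "tri", 4: "pedwar", 5: "pump",
         6: "chwech", 7: "saith", 8: "wyth", 9: "naw"}

# Multiplier forms used before "deg" in the decimal system.
TENS_WORD = {1: "un", 2: "dau", 3: "tri", 4: "pedwar", 5: "pum",
             6: "chwe", 7: "saith", 8: "wyth", 9: "naw"}

def decimal_welsh(n):
    """Decimal style, e.g. 39 -> 'tri deg naw' ('three tens nine'), for 1-99."""
    if n < 10:
        return UNITS[n]
    tens, units = divmod(n, 10)
    tens_part = "deg" if n == 10 else TENS_WORD[tens] + " deg"
    return tens_part + (" " + UNITS[units] if units else "")

# Traditional vigesimal forms quoted in the text, built on scores:
VIGESIMAL = {
    11: "un ar ddeg",                   # "one on ten"
    16: "un ar bymtheg",                # "one on fifteen"
    18: "deunaw",                       # "two nines"
    30: "deg ar hugain",                # "ten on twenty"
    39: "pedwar ar bymtheg ar hugain",  # "four on fifteen on twenty"
    40: "deugain",                      # "two twenties"
    60: "trigain",                      # "three twenties"
}

for n, traditional in VIGESIMAL.items():
    print(f"{n:>2}: traditional '{traditional}'  /  decimal '{decimal_welsh(n)}'")
```

Running the sketch prints the two renderings side by side, making clear why the decimal style, with its uniform "x tens y" pattern, is favoured in schools.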
Although there is only one word for "one '' (un), it triggers the soft mutation (treiglad meddal) of feminine nouns, where possible, other than those beginning with "ll '' or "rh ''. There are separate masculine and feminine forms of the numbers "two '' (dau and dwy), "three '' (tri and tair) and "four '' (pedwar and pedair), which must agree with the grammatical gender of the objects being counted. The objects being counted appear in the singular, not plural form.
There is no standard or definitive form of the Welsh language. Although northern and southern Welsh are two commonly mentioned main dialects, in reality additional significant variation exists within those areas. The more useful traditional classification refers to four main dialects: Y Wyndodeg, the language of Gwynedd; Y Bowyseg, the language of Powys; Y Ddyfedeg, the language of Dyfed; and Y Wenhwyseg, the language of Gwent and Morgannwg. Fine - grained classifications exist beyond those four: the book Cymraeg, Cymrâg, Cymrêg: Cyflwyno'r Tafodieithoedd ("Welsh, Welsh, Welsh: Introducing the Dialects '') about Welsh dialects was accompanied by a cassette containing recordings of fourteen different speakers demonstrating aspects of different area dialects. The book also refers to the earlier Linguistic Geography of Wales as describing six different regions which could be identified as having words specific to those regions.
Another dialect is Patagonian Welsh, which has developed since the start of Y Wladfa (the Welsh settlement in Argentina) in 1865; it includes Spanish loanwords and terms for local features, but a survey in the 1970s showed that the language in Patagonia is consistent throughout the lower Chubut valley and in the Andes.
The differences in dialect are marked in pronunciation and in some vocabulary but also in minor points of grammar. For example: consider the question "Do you want a cuppa (a cup of tea)? '' In Gwynedd this would typically be Dach chi isio panad? while in Glamorgan one would be more likely to hear Ych chi'n moyn dishgled? (though in other parts of the South one would not be surprised to hear Ych chi isie paned? as well, among other possibilities). An example of a pronunciation difference is the tendency in some southern dialects to palatalise the letter "s '', e.g. mis (month), usually pronounced (miːs), but as (miːʃ) in parts of the south. This normally occurs next to a high front vowel like / i /, although exceptions include the pronunciation of sut "how '' as (ʃʊd) in the southern dialects (compared with northern (sɨt)).
In the 1970s, there was an attempt to standardise the language by teaching ' Cymraeg Byw ' ("Living Welsh '') -- a colloquially - based generic form of Welsh. But the attempt largely failed because it did not encompass the regional differences used by speakers of Welsh.
Modern Welsh can be considered to fall broadly into two main registers -- Colloquial Welsh (Cymraeg llafar) and Literary Welsh (Cymraeg llenyddol). The grammar described here is that of Colloquial Welsh, which is used in most speech and informal writing. Literary Welsh is closer to the form of Welsh standardised by the 1588 translation of the Bible and is found in official documents and other formal registers, including much literature. As a standardised form, literary Welsh shows little if any of the dialectal variation found in colloquial Welsh. Some of the differences between the two registers are described below.
Amongst the characteristics of the literary, as against the spoken, language are a higher dependence on inflected verb forms, different usage of some of the tenses, less frequent use of pronouns (since the information is usually conveyed in the verb / preposition inflections) and a much lesser tendency to substitute English loanwords for native Welsh words. In addition, more archaic pronouns and forms of mutation may be observed in Literary Welsh.
In fact, the differences between dialects of modern spoken Welsh pale into insignificance compared to the difference between some forms of the spoken language and the most formal constructions of the literary language. The latter is considerably more conservative and is the language used in Welsh translations of the Bible, amongst other things (although the 2004 Beibl Cymraeg Newydd -- New Welsh Bible -- is significantly less formal than the traditional 1588 Bible). Gareth King, author of a popular Welsh grammar, observes that "The difference between these two is much greater than between the virtually identical colloquial and literary forms of English ''. A grammar of Literary Welsh can be found in A Grammar of Welsh (1980) by Stephen J. Williams or more completely in Gramadeg y Gymraeg (1996) by Peter Wynn Thomas. (No comprehensive grammar of formal literary Welsh exists in English.) An English - language guide to colloquial Welsh forms and register and dialect differences is "Dweud Eich Dweud '' (2001, 2013) by Ceri Jones.
|
what metaphor is evoked in the inscriptions and architecture of the taj mahal | Origins and architecture of the Taj Mahal - wikipedia
The ' Taj Mahal ' represents the finest and most sophisticated example of Mughal architecture. Its origins lie in the moving circumstances of its commission and the culture and history of an Islamic Mughal empire 's rule of large parts of India. The distraught Mughal Emperor Shah Jahan commissioned the mausoleum upon the death of his favorite wife Mumtaz Mahal.
Today it is one of the most famous and recognizable buildings in the world and while the large, domed marble mausoleum is the most familiar part of the monument, the Taj Mahal is an extensive complex of buildings and gardens that extends over 22.44 hectares (55.5 acres) and includes subsidiary tombs, waterworks infrastructure, the small town of ' Taj Ganji ' to the south and a ' moonlight garden ' to the north of the river. Construction of the Taj Mahal began in 1632 AD (1041 AH) on the south bank of the River Yamuna in Agra, and was substantially complete by 1648 AD (1058 AH). The design was conceived as both an earthly replica of the house of Mumtaz Mahal in paradise and an instrument of propaganda for the emperor.
In 1607 (AH 1025) the Mughal Prince Khurram (later to become Shah Jahan) was betrothed to Arjumand Banu Begum, the granddaughter of a Persian noble. She would become the unquestioned love of his life. They were married five years later in 1612. After their wedding celebrations, Khurram "finding her in appearance and character elect among all the women of the time, '' gave her the title Mumtaz Mahal (Jewel of the Palace).
The intervening years had seen Khurram take two other wives known as Akbarabadi Mahal and Kandahari Mahal, but according to the official court chronicler Qazwini, the relationship with his other wives "had little more than the status of marriage. The intimacy, deep affection, attention and favour which His Majesty had for the Cradle of Excellence (Mumtaz) lacked by a thousand times what he felt for any other. ''
Mumtaz died in Burhanpur on 17 June 1631, after complications with the birth of their fourteenth child, a daughter named Gauhara Begum. She had been accompanying her husband whilst he was fighting a campaign in the Deccan Plateau. Her body was temporarily buried in a garden called Zainabad on the banks of the Tapti River in Burhanpur. The contemporary court chroniclers paid an unusual amount of attention to this event and Shah Jahan 's grief at her demise. Immediately after hearing the news the emperor was reportedly inconsolable. He was not seen for a week at court and considered abdicating and living his life as a religious recluse. The court historian Muhammad Amin Qazwini, wrote that before his wife 's death the emperor 's beard had "not more than ten or twelve grey hairs, which he used to pluck out ', turned grey and eventually white '' and that he soon needed spectacles because his eyes deteriorated from constant weeping. Since Mumtaz had died on Wednesday, all entertainments were banned on that day. Jahan gave up listening to music, wearing jewelry, sumptuous clothes or perfumes for two years. So concerned were the imperial family that an honorary uncle wrote to say that "if he continued to abandon himself to his mourning, Mumtaz might think of giving up the joys of Paradise to come back to earth, this place of misery -- and he should also consider the children she had left to his care. '' The Austrian scholar Ebba Koch compares Shah Jahan to "Majnun, the ultimate lover of Muslim lore, who flees into the desert to pine for his unattainable Layla. ''
Jahan 's eldest daughter, the devoted Jahanara Begum Sahib, gradually brought him out of grief and fulfilled the functions of Mumtaz at court. Immediately after the burial in Burhanpur, Jahan and the imperial court turned their attentions to the planning and design of the mausoleum and funerary garden in Agra.
The first Mughal garden was created in 1526 in Agra by Babur, the founder of the dynasty. Thereafter, gardens became important Mughal symbols of power, supplanting the emphasis of pre-Mughal power symbols such as forts. The shift represented the introduction of a new ordered aesthetic -- an artistic expression with religious and funerary aspects and as a metaphor for Babur 's ability to control the arid Indian plains and hence the country at large. Babur rejected much of the indigenous and Lodhi architecture on the opposite bank and attempted to create new works inspired by Persian gardens and royal encampments. The first of these gardens, Aram Bagh, was followed by an extensive, regular and integrated complex of gardens and palaces stretching for more than a kilometre along the river. A high continuous stone plinth bounded the transition between gardens and river and established the framework for future development in the city.
In the following century, a thriving riverfront garden city developed on both banks of the Yamuna. This included the rebuilding of Agra Fort by Akbar, which was completed in 1573. By the time Jahan ascended to the throne, Agra 's population had grown to approximately 700,000 and was, as Abdul Aziz wrote, "a wonder of the age -- as much a centre of the arteries of trade both by land and water as a meeting - place of saints, sages and scholars from all Asia... a veritable lodestar for artistic workmanship, literary talent and spiritual worth ''.
Agra became a city centered on its waterfront and developed partly eastwards but mostly westwards from the rich estates that lined the banks. The prime sites remained those that had access to the river and the Taj Mahal was built in this context, but uniquely; as a related complex on both banks of the river.
The Taj Mahal complex can be conveniently divided into 5 sections:
1. The ' moonlight garden ' to the north of the river Yamuna.
2. The riverfront terrace, containing the Mausoleum, Mosque and Jawab.
3. The Charbagh garden containing pavilions.
4. The jilaukhana containing accommodation for the tomb attendants and two subsidiary tombs.
5. The Taj Ganji, originally a bazaar and caravanserai, of which only traces are still preserved.
The great gate lies between the jilaukhana and the garden. Levels gradually descend in steps from the Taj Ganji towards the river. Contemporary descriptions of the complex list the elements in order from the river terrace towards the Taj Ganji.
The erection of Mughal tombs to honour the dead was the subject of a theological debate conducted in part, through built architecture over several centuries. For the majority of Muslims, the spiritual power (barakat) of visiting the resting places (ziyarat) of those venerated in Islam, was a force by which greater personal sanctity could be achieved. However, orthodox Islam found tombs problematic because a number of Hadith forbade their construction. As a culture also attempting to accommodate, assimilate and subjugate the majority Hindu populace, opposition also came from local traditions which believed dead bodies and the structures over them were impure. For many Muslims at the time of the Taj 's construction, tombs could be considered legitimate providing they did not strive for pomp and were seen as a means to provide a reflection of paradise (Jannah) here on earth.
The ebb and flow of this debate can be seen in the Mughals ' dynastic mausoleums stretching back to that of their ancestor Timur. Timur is buried in the Gur - e Amir in Samarkand, built in 1403 AD (810 AH), under a fluted dome. The tomb employs a traditional Persian iwan as an entrance. The 1528 AD (935 AH) Tomb of Babur in Kabul is much more modest in comparison, with a simple cenotaph exposed to the sky, laid out in the centre of a walled garden.
Humayun 's tomb, commissioned in 1562 AD, was one of the most direct influences on the Taj Mahal 's design and was a response to the Gur - e Amir, borrowing a central dome, geometric symmetrical planning and iwan entrances, but incorporating the more specifically Indian Mughal devices of chhatris, red sandstone face work, and a ' Paradise garden ' (Charbagh). Akbar 's tomb c. 1600 at Sikandra, Agra, retains many of the elements of Humayun 's tomb but possesses no dome and reverts to a cenotaph open to the sky. This theme was carried forward in the Itmad - Ud - Daulah 's Tomb, also at Agra, built between 1622 and 1628 and commissioned by his daughter Nur Jahan. The Tomb of Jahangir at Shahdara (Lahore), begun in 1628 AD (1037 AH), only 4 years before the construction of the Taj and again without a dome, takes the form of a simple plinth with a minaret at each corner.
The concept of the paradise garden (charbagh) was brought from Persia by the Mughals as a form of Timurid garden. They were the first architectural expression the new empire made in the Indian sub-continent, and fulfilled diverse functions with strong symbolic meanings. The symbolism of these gardens is derived from mystic Islamic texts describing paradise as a garden filled with abundant trees, flowers and plants, with water playing a key role: In Paradise four rivers source at a central spring or mountain. In their ideal form they were laid out as a square subdivided into four equal parts. These rivers are often represented in the charbagh as shallow canals which separate the garden by flowing towards the cardinal points. The canals represent the promise of water, milk, wine and honey. The centre of the garden, at the intersection of the divisions is highly symbolically charged and is where, in the ideal form, a pavilion, pool or tomb would be situated. The tombs of Humayun, Akbar and Jahangir, the previous Mughal emperors, follow this pattern. The cross axial garden also finds independent precedents within South Asia dating from the 5th century where the royal gardens of Sigiriya in Sri Lanka were laid out in a similar way.
For the tomb of Jahan 's late wife though, where the mausoleum is sited at the edge of the garden, there is a debate amongst scholars regarding why the traditional charbagh form has not been used. Ebba Koch suggests a variant of the charbagh was employed; that of the more secular waterfront garden found in Agra, adapted for a religious purpose. Such gardens were developed by the Mughals for the specific conditions of the Indian plains, where slow flowing rivers provide the water source; the water is raised from the river by animal - driven devices known as purs and stored in cisterns. A linear terrace is set close to the riverbank with low - level rooms set below the main building opening on to the river. Both ends of the terrace were emphasised with towers. This form was brought to Agra by Babur and by the time of Shah Jahan, gardens of this type, as well as the more traditional charbagh, lined both sides of the Jumna river. The riverside terrace was designed to enhance the views of Agra for the imperial elite who would travel in and around the city by river. Other scholars suggest another explanation for the eccentric siting of the mausoleum. If the moonlight garden to the north of the river Jumna is considered an integral part of the complex, then the mausoleum can be interpreted as being in the centre of a garden divided by a real river and thus can be considered more in the tradition of the pure charbagh.
The favoured form of both Mughal garden pavilions and mausolea (seen as a funerary form of pavilion) was the hasht bihisht which translates from Persian as ' eight paradises '. These were square or rectangular planned buildings with a central domed chamber surrounded by eight elements. Later developments of the hasht bihisht divided the square at 45 degree angles to create a more radial plan, often with chamfered corners; examples can be found in Todar Mal 's Baradari at Fatehpur Sikri and Humayun 's Tomb. Each element of the plan is reflected in the elevations with iwans and with the corner rooms expressed through smaller arched niches. Often such structures are topped with chhatris (small pillared pavilions) at each corner. The eight divisions and frequent octagonal forms of such structures represent the eight levels of paradise for Muslims. The paradigm however was not confined solely to Islamic antecedents. The Chinese magic square was employed for numerous purposes including crop rotation and also finds a Muslim expression in the wafq of their mathematicians. Ninefold schemes find particular resonance in the Indian mandalas, the cosmic maps of Hinduism and Buddhism.
In addition to Humayun 's tomb, the more closely contemporary Tomb of Itmad - Ud - Daulah marked a new era of Mughal architecture. It was built by the empress Nur Jehan for her father from 1622 -- 1625 AD (1031 -- 1034 AH) and is small in comparison to many other Mughal - era tombs. So exquisite is the execution of its surface treatments that it is often described as a jewel box. The garden layout, hierarchical use of white marble and sandstone, Parchin kari inlay designs and latticework presage many elements of the Taj Mahal. The cenotaph of Nur Jehan 's father is laid, off centre, to the west of her mother 's. This break in symmetry was repeated in the Taj where Mumtaz was interred in the geometric centre of the complex and Jahan is laid to her side. These close similarities with the tomb of Mumtaz have earned it the sobriquet of the ' Baby Taj '.
Minarets did not become a common feature of Mughal architecture until the 17th century, particularly under the patronage of Shah Jahan. A few precedents exist in the 20 years before the construction of the Taj in the Tomb of Akbar and the Tomb of Jahangir. Their increasing use was influenced by developments elsewhere in the Islamic world, particularly in Ottoman and Timurid architecture and is seen as suggestive of an increasing religious orthodoxy of the Mughal dynasty.
Under the reign of Shah Jahan, the symbolic content of Mughal architecture reached a peak. The Taj Mahal complex was conceived as a replica on earth of the house of the departed in paradise (inspired by a verse by the imperial goldsmith and poet Bibadal Khan). This theme, common in most Mughal funerary architecture, permeates the entire complex and informs the detailed design of all the elements. A number of secondary principles also inform the design, of which hierarchy is the most dominant. A deliberate interplay is established between the building 's elements, its surface decoration, materials, geometric planning and its acoustics. This interplay extends from what can be experienced directly with the senses, into religious, intellectual, mathematical and poetic ideas. The constantly changing sunlight reflected from the Taj 's translucent marble is not a happy accident; it had a deliberate metaphoric role associated with the presence of God as light.
Symmetry and geometric planning played an important role in ordering the complex and reflected a trend towards formal systematisation that was apparent in all of the arts emanating from Jahan 's imperial patronage. Bilateral symmetry expressed simultaneous ideas of pairing, counterparts and integration, reflecting intellectual and spiritual notions of universal harmony. A complex set of implied grids based on the Mughal gaz unit of measurement provided a flexible means of bringing proportional order to all the elements of the Taj Mahal.
Hierarchical ordering of architecture is commonly used to emphasise particular elements of a design and to create drama. In the Taj Mahal, the hierarchical use of red sandstone and white marble contributes manifold symbolic significance. The Mughals were elaborating on a concept which traced its roots to earlier Hindu practices, set out in the Vishnudharmottara Purana, which recommended white stone for buildings for the Brahmins (priestly caste) and red stone for members of the Kshatriyas (warrior caste). By building structures that employed such colour - coding, the Mughals identified themselves with the two leading classes of Indian social structure and thus defined themselves as rulers in Indian terms. Red sandstone also had significance in the Persian origins of the Mughal empire where red was the exclusive colour of imperial tents. In the Taj Mahal the relative importance of each building in the complex is denoted by the amount of white marble (or sometimes white polished plaster) that is used.
The use of naturalist ornament demonstrates a similar hierarchy. Wholly absent from the more lowly jilaukhana and caravanserai areas, it can be found with increasing frequency as the processional route approaches the climactic Mausoleum. Its symbolism is multifaceted, on the one hand evoking a more perfect, stylised and permanent garden of paradise than could be found growing in the earthly garden; on the other, an instrument of propaganda for Jahan 's chroniclers who portrayed him as an ' erect cypress of the garden of the caliphate ' and frequently used plant metaphors to praise his good governance, person, family and court. Plant metaphors also find common cause with Hindu traditions where such symbols as the ' vase of plenty ' (Kalasha) can be found.
Sound was also used to express ideas of paradise. The interior of the mausoleum has a reverberation time (the time taken from when a noise is made until all of its echoes have died away) of 28 seconds. This provided an atmosphere where the words of those employed to continually recite the Qur'an (the Hafiz), in tribute and prayer for the soul of Mumtaz, would linger in the air.
Wayne E. Begley put forward an interpretation in 1979 that exploits the Islamic idea that the ' Garden of paradise ' is also the location of the Throne of God on the Day of Judgement. In his reading the Taj Mahal is seen as a monument where Shah Jahan has appropriated the authority of the ' throne of god ' symbolism for the glorification of his own reign. Koch disagrees, finding this an overly elaborate explanation and pointing out that the ' Throne ' verse from the Qur'an (sura 2, verse 255) is missing from the calligraphic inscriptions.
In 1996 Begley stated that it is likely that the diagram of "Plain of Assembly '' (Ard al - Hashr) on the Day of Judgment by Sufi mystic and philosopher Ibn Arabi (ca. 1238) was a source of inspiration for the layout of the Taj Mahal garden. Ibn Arabi was held in high regard at the time and many copies of the Futuhat al - Makkiyya, that contains the diagram, were available in India. The diagram shows the ' Arsh (Throne of God; the circle with the eight pointed star), pulpits for the righteous (al - Aminun), seven rows of angels, Gabriel (al - Ruh), A'raf (the Barrier), the Hauzu'l - Kausar (Fountain of Abundance; the semi-circle in the center), al - Maqam al - Mahmud (the Praiseworthy Station; where the prophet Muhammad will stand to intercede for the faithful), Mizan (the Scale), As - Sirāt (the Bridge), Jahannam (Hell) and Marj al - Jannat (Meadow of Paradise). The general proportions and the placement of the Throne, the pulpits and the Kausar Fountain show striking similarities with the Taj Mahal and its garden.
The popular view of the Taj Mahal as one of the world 's monuments to a great "love story '' is borne out by the contemporary accounts and most scholars accept this has a strong basis in fact. The building was also used to assert Jahani propaganda concerning the ' perfection ' of the Mughal leadership. The extent to which the Taj uses propaganda is the subject of some debate amongst contemporary scholars. This period of Mughal architecture best exemplifies the maturity of a style that had synthesised Islamic architecture with its indigenous counterparts. By the time the Mughals built the Taj, though proud of their Persian and Timurid roots, they had come to see themselves as Indian. Copplestone writes "Although it is certainly a native Indian production, its architectural success rests on its fundamentally Persian sense of intelligible and undisturbed proportions, applied to clean uncomplicated surfaces. ''
A site was chosen on the banks of the Yamuna River on the southern edge of Agra and purchased from Raja Jai Singh in exchange for four mansions in the city. The site, "from the point of view of loftiness and pleasantness appeared to be worthy of the burial of that one who dwells in paradise ''. In January 1632 AD (1041 AH), Mumtaz 's body was moved with great ceremony from Burhanpur to Agra while food, drink and coins were distributed amongst the poor and deserving along the way. Work had already begun on the foundations of the river terrace when the body arrived. A small domed building was erected over her body, thought to have been sited, and now marked, by an enclosure in the western garden near the riverfront terrace.
The foundations represented the biggest technical challenge to be overcome by the Mughal builders. In order to support the considerable load resulting from the mausoleum, the sands of the riverbank needed to be stabilised. To this end, wells were sunk and then cased in timber and finally filled with rubble, iron and mortar -- essentially acting as augered piles. After construction of the terrace was completed, work began simultaneously on the rest of the complex. Trees were planted almost immediately to allow them to mature as work progressed.
The initial stages of the build were noted by Shah Jahan 's chroniclers in their description of the first two anniversary celebrations in honour of Mumtaz -- known as the ' Urs. The first, held on 22 June 1632 AD (1041 AH), was a tented affair open to all ranks of society and held in the location of what is now the entrance courtyard (jilaukhana). Alms were distributed and prayers recited. By the second Urs, held on 26 May 1633 AD (1042 AH), Mumtaz Mahal had been interred in her final resting place; the riverside terrace was finished, as were the plinth of the mausoleum and the tahkhana, a galleried suite of rooms opening to the river and under the terrace. It was used by the imperial retinue for the celebrations. Peter Mundy, an employee of the British East India Company and a western eyewitness, noted the ongoing construction of the caravanserais and bazaars and that "There is alreadye (sic) about Her Tombe a raile (sic) of gold ''. To deter theft it was replaced in 1643 AD (1053 AH) with an inlaid marble jali.
After the second Urs further dating of the progress can be made from several signatures left by the calligrapher Amanat Khan. The signed frame of the south arch of the domed hall of the mausoleum indicates it was reaching completion in 1638 / 39 AD (1048 / 1049 AH). In 1643 AD (1053 AH) the official sources documenting the twelfth Urs give a detailed description of a substantially completed complex. Decorative work apparently continued until 1648 AD (1058 AH) when Amanat Khan dated the north arch of the great gate with the inscription "Finished with His help, the Most High ''.
The Taj Mahal was constructed using materials from all over India and Asia. The buildings are constructed with walls of brick and rubble inner cores faced with either marble or sandstone locked together with iron dowels and clamps. Some of the walls of the mausoleum are several metres thick. Over 1,000 elephants were used to transport the building materials during the construction. The bricks were fired locally and the sandstone was quarried 28 miles (45 km) away near Fatehpur Sikri. The white marble was brought 250 miles (400 km) from quarries belonging to Raja Jai Singh in Makrana, Rajasthan. The Jasper was sourced from the Punjab and the Jade and crystal from China. The turquoise was from Tibet and the Lapis lazuli from Afghanistan, while the sapphire came from Sri Lanka and the carnelian from Arabia. In all, 28 types of precious and semi-precious stones were inlaid into the white marble. Jean - Baptiste Tavernier records that the scaffolding and centering for the arches was constructed entirely in brick. Legend says that the emperor offered these scaffolding bricks to anyone who would remove them and that at the end of the construction they were removed within a week. Modern scholars dispute this and consider it much more likely that the scaffolding was made of bamboo and materials were elevated by means of timber ramps.
Initial estimates for the cost of the works of 4,000,000 rupees had risen to 5,000,000 by completion. A waqf (trust) was established for the perpetual upkeep of the mausoleum with an income of 300,000 Rupees. One third of this income came from 30 villages in the district of Agra while the remainder came from taxes generated as a result of trade from the bazaars and caravanserais which had been built at an early stage to the south of the complex. Any surplus would be distributed by the emperor as he saw fit. As well as paying for routine maintenance, the waqf financed the expenses for the tomb attendants and the Hafiz, the Quran reciters who would sit day and night in the mausoleum and perform funerary services praying for the eternal soul of Mumtaz Mahal.
It is not known precisely who designed the Taj Mahal. In the Islamic world at the time, the credit for a building 's design was usually given to its patron rather than its architects. From the evidence of contemporary sources, it is clear that a team of architects were responsible for the design and supervision of the works, but they are mentioned infrequently. Shah Jahan 's court histories emphasise his personal involvement in the construction and it is true that, more than any other Mughal emperor, he showed the greatest interest in building, holding daily meetings with his architects and supervisors. The court chronicler Lahouri writes that Jahan would make "appropriate alterations to whatever the skilful architects designed after many thoughts, and asked competent questions. '' Two architects are mentioned by name, Ustad Ahmad Lahauri and Mir Abd - ul Karim, in writings by Lahouri 's son Lutfullah Muhandis. Ustad Ahmad Lahauri had laid the foundations of the Red Fort at Delhi. Mir Abd - ul Karim had been the favourite architect of the previous emperor Jahangir and is mentioned as a supervisor, together with Makramat Khan, of the construction of the Taj Mahal.
In the complex, passages from the Qur'an are used as decorative elements. Recent scholarship suggests that the passages were chosen by a Persian calligrapher Abd ul - Haq, who came to India from Shiraz, Iran, in 1609. As a reward for his "dazzling virtuosity '', Shah Jahan gave him the title of "Amanat Khan ''. This is supported by an inscription near the lines from the Qur'an at the base of the interior dome that reads "Written by the insignificant being, Amanat Khan Shirazi. ''
The calligraphy on the Great Gate reads "O Soul, thou art at rest. Return to the Lord at peace with Him, and He at peace with you. ''
Much of the calligraphy is composed of florid thuluth script, made of jasper or black marble, inlaid in white marble panels. Higher panels are written in slightly larger script to reduce the skewing effect when viewed from below. The calligraphy found on the marble cenotaphs in the tomb is particularly detailed and delicate.
Abstract forms are used throughout, especially in the plinth, minarets, gateway, mosque, jawab and to a lesser extent, on the surfaces of the tomb. The domes and vaults of the sandstone buildings are worked with tracery of incised painting to create elaborate geometric forms. Herringbone inlays define the space between many of the adjoining elements. White inlays are used in sandstone buildings, and dark or black inlays on the white marbles. Mortared areas of the marble buildings have been stained or painted in a contrasting colour, creating geometric patterns of considerable complexity. Floors and walkways use contrasting tiles or blocks in tessellation patterns.
On the lower walls of the tomb there are white marble dados that have been sculpted with realistic bas relief depictions of flowers and vines. The marble has been polished to emphasise the exquisite detailing of the carvings and the dado frames and archway spandrels have been decorated with pietra dura inlays of highly stylised, almost geometric vines, flowers and fruits. The inlay stones are of yellow marble, jasper and jade, polished and leveled to the surface of the walls.
The Taj complex is ordered by grids. The complex was originally surveyed by J.A. Hodgson in 1825; however, the first detailed scholarly examination of how the various elements of the Taj might fit into a coordinating grid was not carried out until 1989, by Begley and Desai. Numerous 17th-century accounts detail the precise measurements of the complex in terms of the gaz or zira, the Mughal linear yard, equivalent to approximately 80–92 cm. Begley and Desai concluded that a 400-gaz grid was used and then subdivided, and that the various discrepancies they discovered were due to errors in the contemporary descriptions.
Research and measurement by Koch and Richard André Barraud in 2006 suggested a more complex method of ordering that relates better to the 17th-century records. Whereas Begley and Desai had used a simple fixed grid on which the buildings are superimposed, Koch and Barraud found the layout's proportions were better explained by the use of a generated grid system, in which specific lengths may be divided in a number of ways such as halving, dividing by three or using decimal systems. They suggest the 374-gaz width of the complex given by the contemporary historians was correct, and that the Taj is planned as a tripartite rectangle of three 374-gaz squares. Different modular divisions are then used to proportion the rest of the complex. A 17-gaz module is used in the jilaukhana, bazaar and caravanserai areas, whereas a more detailed 23-gaz module is used in the garden and terrace areas (since their width is 368 gaz, a multiple of 23). The buildings were in turn proportioned using yet smaller grids superimposed on the larger organisational ones, and the smaller grids were also used to establish elevational proportion throughout the complex.
Koch and Barraud explain such apparently peculiar numbers as making more sense when seen as part of Mughal geometric understanding. Octagons and triangles, which feature extensively in the Taj, have particular properties in terms of the relationships of their sides. A right - angled triangle with two sides of 12 will have a hypotenuse of approximately 17 (16.97 +); similarly if it has two sides of 17 its hypotenuse will be approximately 24 (24.04 +). An octagon with a width of 17 will have sides of approximately 7 (7.04 +), which is the basic grid upon which the mausoleum, mosque and Mihman Khana are planned.
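These near-integer relationships can be checked directly with elementary geometry. The short Python sketch below is an illustration added for clarity (it is not taken from Koch and Barraud's analysis) and simply reproduces the figures quoted above:

```python
import math

# Legs of 12 gaz -> hypotenuse of roughly 17 gaz
print(math.hypot(12, 12))        # 16.9705... ~ 17

# Legs of 17 gaz -> hypotenuse of roughly 24 gaz
print(math.hypot(17, 17))        # 24.0416... ~ 24

# A regular octagon 17 gaz across the flats has sides of
# width / (1 + sqrt(2)), roughly 7 gaz
print(17 / (1 + math.sqrt(2)))   # 7.0416... ~ 7
```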
Discrepancies remain in Koch and Barraud's work, which they attribute to numbers being rounded fractions, inaccuracies of reporting from third persons, and errors in workmanship (most notable in the caravanserai areas further from the tomb itself).
A 2009 paper by Prof. R. Balasubramaniam of the Indian Institute of Technology found Barraud's explanation of the dimensional errors, and of the transition between the 23- and 17-gaz grids at the great gate, unconvincing. Balasubramaniam conducted a dimensional analysis of the complex based on Barraud's surveys. He concluded that the Taj was constructed using the ancient aṅgula as the basic unit, rather than the Mughal gaz noted in the contemporary accounts. The aṅgula, which equates to 1.763 cm, and the vistasti (12 aṅgulas) were first mentioned in the Arthashastra in c. 300 BC and may have been derived from the earlier Indus Valley Civilisation. In this analysis the forecourt and caravanserai areas were set out with a 60-vistasti grid, and the riverfront and garden sections with a 90-vistasti grid. The transition between the grids is more easily accommodated, 90 and 60 sharing a common factor of 30. The research suggests that older, pre-Mughal methods of proportion were employed as ordering principles in the Taj.
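Because the aṅgula is given a fixed metric value above, the proposed grid modules are easy to restate in metres. The following Python sketch is only an illustrative conversion using the figures quoted in this section; it is not drawn from Balasubramaniam's paper:

```python
ANGULA_CM = 1.763              # metric value quoted above
VISTASTI_CM = 12 * ANGULA_CM   # 1 vistasti = 12 angulas, about 21.16 cm

for grid in (60, 90):          # the two grid modules proposed for the complex
    print(f"{grid}-vistasti module ~ {grid * VISTASTI_CM / 100:.2f} m")
# 60-vistasti module ~ 12.69 m
# 90-vistasti module ~ 19.04 m  (the two modules stand in a 2:3 ratio)
```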
The focus and climax of the Taj Mahal complex is the symmetrical white marble tomb: a cubic building with chamfered corners and arched recesses known as pishtaqs. It is topped by a large dome and several pillared, roofed chhatris. In plan, it has near-perfect symmetry about four axes. It comprises four floors: the lower basement storey containing the tombs of Jahan and Mumtaz, the entrance storey containing identical cenotaphs of the tombs below in a much more elaborate chamber, an ambulatory storey, and a roof terrace.
The mausoleum is cubic with chamfered edges. On the long sides, a massive pishtaq, or vaulted archway, frames an arch-shaped doorway, with a similar arch-shaped balcony above. These main arches extend above the roof of the building by use of an integrated facade. To either side of the main arch, additional pishtaqs are stacked above and below. This motif of stacked pishtaqs is replicated on the chamfered corner areas. The design is completely uniform and consistent on all sides of the building.
The marble dome that surmounts the tomb is its most spectacular feature. Its height, about 35 metres, is roughly the same as that of the base building, and it is accentuated because the dome sits on a cylindrical "drum" about 7 metres high. Because of its shape, the dome is often called an onion dome (also called an amrud or apple dome). The dome is topped by a gilded finial, which mixes traditional Islamic and Hindu decorative elements. The dome shape is emphasised by four smaller domed chhatris placed at its corners, which replicate the onion shape of the main dome. Their columned bases open through the roof of the tomb and provide light to the interior. The chhatris are also topped by gilded finials. Tall decorative spires (guldastas) extend from the edges of the base walls and provide visual emphasis of the height of the dome.
Muslim tradition forbids elaborate decoration of graves, so the bodies of Mumtaz and Shah Jahan are laid in a relatively plain, marble-faced chamber beneath the main chamber of the Taj. They are buried in graves on a north–south axis, with faces turned right (west) toward Mecca. Two cenotaphs above mark the graves. Mumtaz's cenotaph is placed at the precise centre of the inner chamber. On a rectangular marble base, about 1.5 by 2.5 metres, is a smaller marble casket. Both base and casket are elaborately inlaid with precious and semiprecious gems. Calligraphic inscriptions on top of the casket recite verses from the Qur'an, and those on the sides express the Ninety-Nine Beautiful Names of Allah.
The inner chamber of the Taj Mahal contains the cenotaphs of Mumtaz and Shah Jahan. It is a masterpiece of artistic craftsmanship, virtually without precedent or equal. The inner chamber is an octagon. While the design allows for entry from each face, only the south (garden facing) door is used. The interior walls are about 25 metres high, topped by a "false '' interior dome decorated with a sun motif. Eight pishtaq arches define the space at ground level. As is typical with the exterior, each lower pishtaq is crowned by a second pishtaq about midway up the wall. The four central upper arches form balconies or viewing areas; each balcony 's exterior window has an intricate screen or jali cut from marble. In addition to the light from the balcony screens, light enters through roof openings covered by the chhatris at the corners of the exterior dome. Each of the chamber walls has been highly decorated with dado bas relief, intricate lapidary inlay, and refined calligraphy panels.
The hierarchical ordering of the entire complex reaches its crescendo in the chamber. Mumtaz's cenotaph sits at the geometric centre of the building; Jahan was buried at a later date by her side to the west, an arrangement seen in other Mughal tombs of the period such as the Itmad-ud-Daulah. Marble is used exclusively as the base material for increasingly dense, expensive and complex parchin kari floral decoration as one approaches the screen and cenotaphs, which are inlaid with semi-precious stones. The use of such inlay work is often reserved in Shah Jahani architecture for spaces associated with the emperor or his immediate family. The ordering of this decoration simultaneously emphasises the cardinal points and the centre of the chamber with dissipating concentric octagons. Such hierarchies appear in both Muslim and Indian culture as important spiritual and astrological themes. The chamber is an abundant evocation of the garden of paradise, with representations of flowers, plants and arabesques, and calligraphic inscriptions in both the thuluth and the less formal naskh script.
Shah Jahan's cenotaph is beside Mumtaz's, to the western side. It is the only asymmetric element in the entire complex. His cenotaph is bigger than his wife's, but reflects the same elements: a larger casket on a slightly taller base, again decorated with astonishing precision with lapidary work and calligraphy that identifies Shah Jahan. On the lid of this casket is a sculpture of a small pen box. (The pen box and writing tablet were traditional Mughal funerary icons decorating men's and women's caskets respectively.)
An octagonal marble screen or jali borders the cenotaphs and is made from eight marble panels. Each panel has been carved through with intricate piercework. The remaining surfaces have been inlaid with semiprecious stones in extremely delicate detail, forming twining vines, fruits and flowers.
At the corners of the plinth stand minarets: four large towers each more than 40 metres tall. The towers are designed as working minarets, a traditional element of mosques, a place for a muezzin to call the Islamic faithful to prayer. Each minaret is effectively divided into three equal parts by two balconies that ring the tower. At the top of the tower is a final balcony surmounted by a chhatri that echoes the design of those on the tomb. The minaret chhatris share the same finishing touches: a lotus design topped by a gilded finial. Each of the minarets was constructed slightly out of plumb to the outside of the plinth, so that in the event of collapse (a typical occurrence with many such tall constructions of the period) the structure would fall away from the tomb.
The mausoleum is flanked by two almost identical buildings on either side of the platform: to the west the mosque, to the east the Jawab. The Jawab, meaning 'answer', balances the bilateral symmetry of the composition and was originally used as a place for entertaining and as accommodation for important visitors. It differs from the mosque in that it lacks a mihrab, a niche in a mosque's wall facing Mecca, and its floors have a geometric design, while the mosque floor was laid out with the outlines of 569 prayer rugs in black marble.
The mosque 's basic tripartite design is similar to others built by Shah Jahan, particularly the Masjid - i - Jahan Numa in Delhi -- a long hall surmounted by three domes. Mughal mosques of this period divide the sanctuary hall into three areas: a main sanctuary with slightly smaller sanctuaries to either side. At the Taj Mahal, each sanctuary opens onto an enormous vaulting dome.
The large charbagh (a form of Persian garden divided into four parts) provides the foreground for the classic view of the Taj Mahal. The garden's strict and formal planning employs raised pathways which divide each quarter of the garden into 16 sunken parterres or flowerbeds. A raised marble water tank at the centre of the garden, halfway between the tomb and the gateway, and a linear reflecting pool on the north–south axis reflect the Taj Mahal. Elsewhere the garden is laid out with avenues of trees and fountains. The charbagh garden is meant to symbolise the four flowing Rivers of Paradise. The raised marble water tank (hauz) is called al Hawd al-Kawthar, after the "Tank of Abundance" promised to Muhammad in paradise, where the faithful may quench their thirst upon arrival.
Two pavilions occupy the east and west ends of the cross axis, one the mirror of the other. In the classic charbagh design, gates would have been located in this position. In the Taj they provide punctuation and access to the long enclosing wall with its decorative crenellations. Built of sandstone, they take a tripartite form over two storeys and are capped with a white marble chhatri supported on eight columns.
The original planting of the garden is one of the Taj Mahal's remaining mysteries. The contemporary accounts mostly deal just with the architecture, and only mention 'various kinds of fruit-bearing trees and rare aromatic herbs' in relation to the garden. Cypress trees were almost certainly planted, being popular similes in Persian poetry for the slender, elegant stature of the beloved. By the end of the 18th century, Thomas Twining noted orange trees, and a large plan of the complex suggests beds of various other fruits such as pineapples, pomegranates, bananas, limes and apples. The British, at the end of the 19th century, thinned out many of the trees in the increasingly forested garden, replanted the cypresses and laid the gardens to lawns in their own taste.
The layout of the garden, and its architectural features such as its fountains, brick and marble walkways, and geometric brick - lined flowerbeds are similar to Shalimar 's, and suggest that the garden may have been designed by the same engineer, Ali Mardan.
Early accounts of the garden describe its profusion of vegetation, including roses, daffodils, and fruit trees in abundance. As the Mughal Empire declined, the tending of the garden declined as well. When the British took over management of the Taj Mahal, they changed the landscaping to resemble the formal lawns of London.
The great gate stands to the north of the entrance forecourt (jilaukhana) and provides a symbolic transition between the worldly realm of bazaars and caravanserais and the spiritual realm of the paradise garden, mosque and mausoleum. Its rectangular plan is a variation of the 9-part hasht bihisht plan found in the mausoleum. The corners are articulated with octagonal towers, giving the structure a defensive appearance. External domes were reserved for tombs and mosques, so the large central space does not receive any outward expression of its internal dome. From within the great gate, the mausoleum is framed by the pointed arch of the portal. Inscriptions from the Qur'an are inlaid around the two northern and southern pishtaqs; the southern one, 'Daybreak', invites believers to enter the garden of paradise.
Running the length of the northern side of the southern garden wall, to the east and west of the great gate, are galleried arcades. The galleries were used during the rainy season to admit the poor and distribute alms. A raised platform with geometric paving provides seating for the column bases, and between the columns are cusped arches typical of the Mughal architecture of the period. The galleries terminate at each end with a transversely placed room with tripartite divisions.
The jilaukhana (literally meaning 'in front of house') was a courtyard feature introduced to Mughal architecture by Shah Jahan. It provided an area where visitors would dismount from their horses or elephants and assemble in style before entering the main tomb complex. The rectangular area divides north–south and east–west, with an entry to the tomb complex through the main gate to the north and entrance gates leading to the outside provided in the eastern, western and southern walls. The southern gate leads to the Taj Ganji quarter.
Two identical streets lead from the east and west gates to the centre of the courtyard. They are lined by verandahed colonnades articulated with cusped arches, behind which cellular rooms were used to sell goods from when the Taj was built until 1996. The tax revenue from this trade was used for the upkeep of the Taj complex. The eastern bazaar streets were essentially ruined by the end of the 19th century and were restored by Lord Curzon between 1900 and 1908.
Two mirror-image tombs are located at the southern corners of the jilaukhana. They are conceived as miniature replicas of the main complex and stand on raised platforms accessed by steps. Each octagonal tomb is constructed on a rectangular platform flanked by smaller rectangular buildings, in front of which is laid a charbagh garden. Some uncertainty exists as to whom the tombs memorialise. Their descriptions are absent from the contemporary accounts, either because they were unbuilt or because they were ignored, being the tombs of women. On the first written document to mention them, the plan drawn up by Thomas and William Daniell in 1789, the eastern tomb is marked as that belonging to Akbarabadi Mahal and the western as Fatehpuri Mahal (two of Jahan's other wives).
A pair of courtyards is found in the northern corners of the jilaukhana, which provided quarters (Khawasspuras) for the tomb attendants and the Hafiz. This residential element provided a transition between the outside world and the other-worldly delights of the tomb complex. The Khawasspuras had fallen into a state of disrepair by the late 18th century, but the institution of the Khadim continued into the 20th century. The Khawasspuras were restored by Lord Curzon as part of his repairs between 1900 and 1908, after which one courtyard was used as a nursery for the garden and the other as a cattle stable until 2003.
The bazaar and caravanserais were constructed as an integral part of the complex, initially to provide the construction workers with accommodation and facilities for their wellbeing, and later as a place for trade whose revenue supplemented the expenses of the complex. The area became a small town in its own right during and after the building of the Taj. Originally known as 'Mumtazabad', today it is called Taj Ganji or 'Taj Market'. Its plan took the characteristic form of a square divided by two cross-axial streets with gates to the four cardinal points. Bazaars lined each street, and the resultant squares at each corner housed the caravanserais in open courtyards, accessed from internal gates at the point where the streets intersected (the chauk). Contemporary sources pay more attention to the north-eastern and north-western parts of the Taj Ganji, and it is likely that only this half received imperial funding; thus, the quality of its architecture was finer than that of the southern half.
The distinction between how the sacred and the secular parts of the complex were regarded is most acute here. Whilst the rest of the complex received only maintenance after its construction, the Taj Ganji became a bustling town and the centre of Agra's economic activity, where "different kinds of merchandise from every land, varieties of goods from every country, all sorts of luxuries of the time, and various kinds of necessities of civilisation and comfortable living brought from all parts of the world" were sold. An idea of what sort of goods might have been traded is found in the names of the caravanserais: the north-western one was known as Katra Omar Khan (Market of Omar Khan), the north-eastern as Katra Fulel (Perfume Market), the south-western as Katra Resham (Silk Market) and the south-eastern as Katra Jogidas. The quarter has been constantly redeveloped ever since its construction, to the extent that by the 19th century it had become unrecognisable as part of the Taj Mahal, no longer featured on contemporary plans, and had lost most of its original architecture. Today the contrast is stark between the Taj Mahal's elegant, formal geometric layout and the narrow streets of the Taj Ganji, with their organic, random and un-unified constructions. Only fragments of the original constructions remain, most notably the gates.
The Taj Mahal complex is bounded on three sides by crenellated red sandstone walls, with the river - facing side left open. The garden - facing inner sides of the wall are fronted by columned arcades, a feature typical of Hindu temples which was later incorporated into Mughal mosques. The wall is interspersed with domed chhatris, and small buildings that may have been viewing areas or watch towers.
Outside the walls are several additional mausolea. These structures, composed primarily of red sandstone, are typical of the smaller Mughal tombs of the era. The outer eastern tomb has an associated mosque called the Black Mosque (Kali Masjid) or the Sandalwood Mosque (Sandli Masjid). The design is closely related to the inner subsidiary tombs found in the jilaukhana: small, landlocked versions of the riverfront terrace with a garden separating the mosque from the tomb. The person interred here is unknown, but was likely a female member of Jahan's household.
Water for the Taj complex was provided through a complex infrastructure. It was first drawn from the river by a series of purs, an animal-powered rope and bucket mechanism. The water then flowed along an arched aqueduct into a large storage tank, where, by thirteen additional purs, it was raised to a large distribution cistern above the ground level of the Taj, located to the west of the complex's wall. From here water passed into three subsidiary tanks and was then piped to the complex. The head of pressure generated by the height of the tanks (9.5 m) was sufficient to supply the fountains and irrigate the gardens. A 0.25-metre-diameter earthenware pipe lies 1.8 metres below the surface, in line with the main walkway, and fills the main pools of the complex. Some of the earthenware pipes were replaced in 1903 with cast iron. The fountain pipes were not connected directly to the fountain heads; instead, a copper pot was provided under each fountain head, and water filled the pots, ensuring an equal pressure at each fountain. The purs no longer remain, but the other parts of the infrastructure have survived, with the arches of the aqueduct now used to accommodate offices for the Archaeological Survey of India's Horticultural Department.
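As a rough illustration of why a 9.5 m head was adequate, the static water pressure it produces can be estimated with the standard hydrostatic relation P = ρgh. The Python sketch below is an added illustration using assumed textbook values, not data from the source:

```python
RHO_WATER = 1000.0   # kg/m^3, density of water (assumed)
G = 9.81             # m/s^2, gravitational acceleration (assumed)
HEAD_M = 9.5         # m, height of the distribution tanks quoted above

pressure_pa = RHO_WATER * G * HEAD_M
print(f"static head ~ {pressure_pa / 1000:.0f} kPa "
      f"({pressure_pa / 101325:.2f} atm)")
# static head ~ 93 kPa (0.92 atm), enough to drive fountains and irrigation
```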
To the north of the Taj Mahal complex, across the river, is another charbagh garden, the Mahtab Bagh. It was designed as an integral part of the complex in the riverfront terrace pattern seen elsewhere in Agra. Its width is identical to that of the rest of the Taj. The garden historian Elizabeth Moynihan suggests the large octagonal pool in the centre of the terrace would reflect the image of the mausoleum and thus the garden would provide a setting to view the Taj Mahal. The garden has been beset by flooding from the river since Mughal times. As a result, the condition of the remaining structures is quite ruinous. Four sandstone towers marked the corners of the garden; only the south-eastern one remains. The foundations of two structures remain immediately north and south of the large pool, which were probably garden pavilions. From the northern structure a stepped waterfall would have fed the pool. The garden to the north has the typical square, cross-axial plan with a square pool in its centre. To the west an aqueduct fed the garden.
Coordinates: 27 ° 10 ′ 30 '' N 78 ° 02 ′ 32 '' E / 27.17500 ° N 78.04222 ° E / 27.17500; 78.04222
|
what was a key belief of the great awakening | First Great Awakening - wikipedia
The Great Awakening or First Great Awakening was a Protestant religious revival that swept Protestant Europe and British America in the 1730s and 1740s. An evangelical and revitalization movement, it left a permanent impact on American Protestantism. It resulted from powerful preaching that gave listeners a sense of deep personal revelation of their need of salvation by Jesus Christ. The Great Awakening pulled away from ritual, ceremony, sacramentalism, and hierarchy, and made Christianity intensely personal to the average person by fostering a deep sense of spiritual conviction and redemption, and by encouraging introspection and a commitment to a new standard of personal morality.
The movement was an important social event in New England, which challenged established authority and incited rancor and division between traditionalist Protestants, who insisted on the continuing importance of ritual and doctrine, and the revivalists, who encouraged emotional involvement. It had an impact in reshaping the Congregational church, the Presbyterian church, the Dutch Reformed Church, and the German Reformed denominations, and strengthened the small Baptist and Methodist Anglican denominations. It had little impact on most Anglicans, Lutherans, Quakers, and non-Protestants. Throughout the colonies, especially in the south, the revivalist movement increased the number of African slaves and free blacks who were exposed to and subsequently converted to Christianity.
The Second Great Awakening began about 1800 and reached out to the unchurched, whereas the First Great Awakening focused on people who were already church members. 18th - century American Christians added an emphasis on "outpourings of the Holy Spirit '' to the evangelical imperatives of Reformation Protestantism. Revivals encapsulated those hallmarks and spread the newly created evangelicalism in the early republic. Evangelical preachers "sought to include every person in conversion, regardless of gender, race, and status. ''
The evangelical revival was international in scope, affecting predominantly Protestant countries of Europe. The emotional response of churchgoers marked the start of the English awakening in Bristol and London in 1737, and of the 1739 revival among the Kingswood colliers (coal miners), whose tears left white gutters on their cheeks as they listened to George Whitefield preach. Historian Sydney E. Ahlstrom sees it as part of a "great international Protestant upheaval" that also created Pietism in Germany and the Evangelical Revival and Methodism in England. Revivalism, a critical component of the Great Awakening, had actually begun in the 1620s in Scotland among Presbyterians and featured itinerant preachers.
The idea of a "great awakening '' has been contested by Butler (1982) as vague and exaggerated, but it is clear that the period was a time of increased religious activity, particularly in New England. The First Great Awakening led to changes in Americans ' understanding of God, themselves, the world around them, and religion. In the Middle and Southern colonies, especially in the "back country '' regions, the Awakening was influential among Presbyterians. In the southern Tidewater and Low Country, northern Baptist and Methodist preachers converted both white and black people. Some were enslaved at their time of conversion while others were free. Caucasians began to welcome dark - skinned individuals into their churches, taking their religious experiences seriously, while also admitting them into active roles in congregations as exhorters, deacons, and even preachers, although the last was a rarity.
The message of spiritual equality appealed to many slaves, and, as African religious traditions continued to decline in North America, black people accepted Christianity in large numbers for the first time. Evangelist leaders in the southern colonies had to deal with the issue of slavery much more frequently than those in the North. Still, many leaders of the revivals proclaimed that slaveholders should educate their slaves so that they could become literate and be able to read and study the Bible. Many Africans were finally provided with some sort of education. Africans hoped that their newly acquired spiritual equality would translate into earthly equalities. As black people started to make up substantial proportions of congregations, they were given a chance to momentarily forget about their bondage and enjoy a slight sense of freedom. Before the American Revolution, the first black Baptist churches were founded in the South in Virginia, South Carolina, and Georgia; two black Baptist churches were founded in Petersburg, Virginia.
The revival began with Jonathan Edwards in Northampton, Massachusetts. Edwards came from Puritan, Calvinist roots, but emphasized the importance and power of immediate, personal religious experience. Religious experience had to be immediate, he taught. He distrusted hierarchy and catechisms. His sermons were "solemn, with a distinct and careful enunciation, and a slow cadence. '' His sermons were powerful and attracted a large following. Anglican preacher George Whitefield visited from England; he continued the movement, traveling throughout the colonies and preaching in a more dramatic and emotional style, accepting everyone into his audiences. Both Edwards and Whitefield were slave owners and believed that blacks would acquire absolute equality with whites in the Millennial church.
Winiarski (2005) examines Edwards 's preaching in 1741, especially his famous sermon "Sinners in the Hands of an Angry God. '' At this point, Edwards countenanced the "noise '' of the Great Awakening, but his approach to revivalism became more moderate and critical in the years immediately following.
The arrival of young Anglican preacher George Whitefield sparked the Great Awakening. Whitefield 's reputation preceded his visit as a great pulpit and open - air orator. He traveled through the colonies in 1739 and 1740. He attracted large and emotional crowds everywhere, eliciting countless conversions as well as considerable controversy. He declared the whole world his "parish. '' God was merciful, Whitefield proclaimed. Men and women were not predestined to damnation, but could be saved by repenting of their sins. Whitefield mainly spoke about the concept of spiritual "rebirth '', explaining that men and women could experience a spiritual revival in life that would grant them entrance to the Promised Land. He appealed to the passions of his listeners, powerfully sketching the boundless joy of salvation and the horrors of damnation.
Critics condemned his "enthusiasm '', his censoriousness, and his extemporaneous and itinerant preaching. His techniques were copied by numerous imitators, both lay and clerical. They became itinerant preachers themselves, spreading the Great Awakening from New England to Georgia, among rich and poor, educated and illiterate, and in the back country as well as in seaboard towns and cities.
Whitefield's sermons reiterated an egalitarian message, but one that translated only into spiritual equality for Africans in the colonies, who mostly remained enslaved. Whitefield was known to criticize slaveholders who treated their slaves cruelly and those who did not educate them, but he had no intention of abolishing slavery. He lobbied to have slavery reinstated in Georgia and proceeded to become a slaveholder himself. Whitefield shared the belief, common among evangelicals, that after conversion slaves would be granted true equality in Heaven.
Despite his stance on slavery, Whitefield became influential among many Africans. Benjamin Franklin became one of his enthusiastic supporters. Franklin was a Deist who rarely attended church, and he did not subscribe to Whitefield's theology, but he admired him for exhorting people to worship God through good works. He printed Whitefield's sermons on the front page of his Gazette, devoting 45 issues to Whitefield's activities. Franklin used the power of his press to spread Whitefield's fame by publishing all of his sermons and journals. Many of Franklin's publications between 1739 and 1741 contained information about Whitefield's work and helped promote the evangelical movement in America. Franklin remained a friend and supporter of Whitefield until Whitefield's death in 1770.
Samuel Davies was a Presbyterian minister who later became the fourth president of Princeton University. He was noted for preaching to African slaves who converted to Christianity in unusually large numbers, and is credited with the first sustained proselytization of slaves in Virginia. Davies wrote a letter in 1757 in which he refers to the religious zeal of an enslaved man whom he had encountered during his journey. "I am a poor slave, brought into a strange country, where I never expect to enjoy my liberty. While I lived in my own country, I knew nothing of that Jesus I have heard you speak so much about. I lived quite careless what will become of me when I die; but I now see such a life will never do, and I come to you, Sir, that you may tell me some good things, concerning Jesus Christ, and my Duty to GOD, for I am resolved not to live any more as I have done. ''
Davies became accustomed to hearing such excitement from many blacks who were exposed to the revivals. He believed that blacks could attain knowledge equal to whites if given an adequate education, and he promoted the importance for slaveholders to permit their slaves to become literate so that they could become more familiar with the instructions of the Bible.
The new style sermons and the way in which people practiced their faith breathed new life into religion in America. Participants became passionately and emotionally involved in their religion, rather than passively listening to intellectual discourse in a detached manner. Ministers who used this new style of preaching were generally called "new lights '', while the preachers who remained unemotional were referred to as "old lights ''. People affected by the revival began to study the Bible at home. This effectively decentralized the means of informing the public on religious matters and was akin to the individualistic trends present in Europe during the Protestant Reformation.
The Awakening played a major role in the lives of women, though they were rarely allowed to preach or take leadership roles. A deep sense of religious enthusiasm encouraged women, especially to analyze their feelings, share them with other women, and write about them. They became more independent in their decisions, as in the choice of a husband. This introspection led many women to keep diaries or write memoirs. The autobiography of Hannah Heaton (1721 -- 94), a farm wife of North Haven, Connecticut, tells of her experiences in the Great Awakening, her encounters with Satan, her intellectual and spiritual development, and daily life on the farm.
Phillis Wheatley was the first published black female poet, and she was converted to Christianity as a child after she was brought to America. Her beliefs were overt in her works; she describes the journey of being taken from a Pagan land to be exposed to Christianity in the colonies in a poem entitled "On Being Brought from Africa to America. '' Wheatley became so influenced by the revivals and especially George Whitefield that she dedicated a poem to him after his death in which she referred to him as an "Impartial Saviour. '' Sarah Osborn adds another layer to the role of women during the Awakening. She was a Rhode Island schoolteacher, and her writings offer a fascinating glimpse into the spiritual and cultural upheaval of the time period, including a 1743 memoir, various diaries and letters, and her anonymously published The Nature, Certainty and Evidence of True Christianity (1753).
The emotionality of the revivals appealed to many Africans and African leaders started to emerge from the revivals soon after they converted in substantial numbers. These figures paved the way for the establishment of the first black congregations and churches in the American colonies.
The newly incorporated town of Uxbridge, Massachusetts saw the first new Congregational church congregation and worship building in Massachusetts in the Great Awakening period of 1730 -- 60. It was headed by Pastor Rev. Nathan Webb, a native of Braintree, who remained in the ministry in Uxbridge for the next 41 years. His student Samuel Spring served as a chaplain in the American Revolutionary War, and started the Andover Seminary and the Massachusetts Missionary Society.
The Calvinist denominations were especially affected. For example, Congregational churches in New England experienced 98 schisms, which in Connecticut also had an impact on which group would be considered "official" for tax purposes. These splits were between the New Lights (those who were influenced by the Great Awakening) and the Old Lights (those who were more traditional). It is estimated that in the New England churches about one-third of members were New Lights, one-third Old Lights, and one-third regarded both sides as valid.
In Connecticut, the Saybrook Platform of 1708 marked a conservative counter-revolution against a non-conformist tide which had begun with the Halfway Covenant and culminated in the Great Awakening in the 1740s.
The Great Awakening bitterly divided Congregationalists between the "New Lights '' or "Arminians '' who welcomed the revivals, and the "Old Lights '' or "Calvinists '' who used governmental authority to suppress revivals. The Arminians believed that every person could be saved by experiencing a religious conversion and one of the revivals, while the Calvinists held that everyone 's fate was a matter of predestination, and revivals were a false religion. The legislature was controlled by the Old Lights who passed an "Act for regulating abuses and correcting disorder in ecclesiastical affairs '' in 1742 that sharply restricted ministers from leading revivals. Another law was passed to prevent the opening of a New Light seminary. Numerous New Light evangelicals were imprisoned or fined. The New Lights responded by their own political organization, fighting it out town by town. The religious issues declined somewhat after 1748, although the New Light versus Old Light factionalism spilled into other issues, such as disputes over currency and Imperial issues. However, the divisions involved did not play a role in the coming of the American Revolution, which both sides supported.
|
what is the work of seaman in navy | Seaman - wikipedia
Seaman is a naval rank and is either the lowest or one of the lowest ranks in most navies around the world. In the Commonwealth it is the lowest rank in the navy. The next rank up is able seaman, followed by leading seaman, which is followed by the petty officer ranks.
In the United States, it means the lowest three enlisted rates of the U.S. Navy and U.S. Coast Guard, followed by the higher petty officer ranks. The equivalent of the seaman is the matelot in French - speaking countries, and Matrose in German - speaking countries.
The term "seaman" is also a general-purpose term for a man or a woman who works anywhere on board a modern ship, including in the engine spaces, which is the very opposite of sailing. This usage does not hold in the US Navy, where a sailor might be a seaman but not all sailors are seamen; they might instead be an Airman, Fireman, Constructionman, or Hospital Corpsman. Furthermore, "seaman" is a short form for the status of an "able-bodied seaman", either in the navies or in the merchant marines. An able-bodied seaman is one who is fully trained and qualified to work on the decks and superstructure of modern ships, even during foul weather, whereas less-qualified sailors are restricted to remaining within the ship during times of foul weather, lest they be swept overboard by the stormy seas or by the high winds.
The Royal Australian Navy features one Seaman rank, which is split into two distinct classes. Seaman and Seaman * (pronounced Seaman Star), to differentiate between those who have completed their employment training and those who are in training. There is no insignia on a Seaman rank slide.
There are 4 grades of seaman / matelot in the Royal Canadian Navy:
Ordinary seaman or matelot de troisième classe
Able seaman or matelot de deuxième classe
Leading seaman or matelot de première classe
Master seaman or matelot - chef
The rank of master seaman is unique because it was created only for the Canadian Navy. It does not follow the British tradition of other Canadian ranks. It corresponds to the rank of master corporal / caporal - chef.
Matelot 2e classe (seaman 2nd class), or apprentice seaman, and matelot breveté (able seaman) are designations of the French Navy. Matelots are colloquially known as "mousses ''.
Matelot
Matelot breveté (able seaman)
Madrus is the lowest rank in the Estonian Navy. It is equivalent to OR-1 in NATO.
The German rank of "seaman '' (German: Matrose) is the lowest enlisted rank of the German Navy. It is equivalent to OR1 in NATO and is a grade A3 in the pay rules of the Federal Ministry of Defence.
There is one grade of seaman in the Hellenic Navy.
In the Indonesian Navy this rank is referred to as "Kelasi" (pronounced "Kelasee"). There are three levels of this rank in the Indonesian Navy: "Seaman Recruit" (Kelasi Dua), "Seaman Apprentice" (Kelasi Satu), and "Seaman" (Kelasi Kepala); the rating system thus mirrors the one used in the US Navy.
The Italian rank of "seaman '' (Italian: comune di seconda classe) is the lowest enlisted rank of the Italian Navy equivalent in NATO to OR1.
Badge of technics and mechanics
Badge of coxswains
Badge of tracking radar operators
Badge of electro - mechanics
Much Russian military vocabulary was imported, along with military advisers, from Germany in the 16th and 17th centuries. The Russian word for "seaman '' or "sailor '' (Russian: матрос; matros) was borrowed from the German "matrose ''. In Imperial Russia the most junior naval rank was "seaman 2nd class '' (матрос 2 - й статьи; matros vtoroi stati). The 1917 Revolution led to the term "Red Fleet man '' (краснофлотец; krasnoflotets) until 1943, when the Soviet Navy reintroduced the term "seaman '' (матрос; matros), along with badges of rank. The Russian federation inherited the term in 1991, as did several other former Soviet republics, including Ukraine, Azerbaijan and Belarus, with Bulgaria using the same word and the same Cyrillic orthography. Estonia (Estonian: mаtrus) and Latvia (Latvian: mаtrozis) use closely related loanwords.
In the Royal Navy the rate is split into two divisions: AB1 and AB2. The AB2 rating is used for those who have not yet completed their professional taskbooks. The rate of ordinary seaman has been discontinued.
Constructionman variation
Fireman variation
Airman variation
Seaman insignia
Seaman is the third enlisted rank from the bottom in the U.S. Navy and U.S. Coast Guard, ranking above seaman apprentice and below petty officer third class. This naval rank was formerly called "seaman first class ''. The rank is also used in United States Naval Sea Cadet Corps, a naval - themed uniformed youth program under the sponsorship of the Navy League of the United States.
The actual title for an E-3 in the U.S. Navy varies based on the subset of the Navy or Coast Guard, also known as a group rate, to which the member will ultimately be assigned. Likewise, the color of their group rate mark also depends on that subset of the Navy or Coast Guard in which they are serving and which technical rating they will eventually pursue.
No such stripes for E-1, E-2 or E-3 are authorized to be worn on working uniforms, e.g., navy work uniform, USCG operational dress uniform, coveralls, utility wear, flight suits, hospital and clinic garb, diving suits, etc. However, sailors with the pay grade of E-2 or E-3 are permitted to wear silver - anodized collar devices on their service uniforms.
Some sailors and Coast Guardsmen receive a rating following completion of a military technical training course for that particular rating known as an "A '' school. Other sailors and Coast Guardsmen who have completed the requirements to be assigned a rating and have been accepted by the Navy Personnel Command / Bureau of Naval Personnel (USN) or the Coast Guard Personnel Service Center Command (USCG) as holding that rating (a process called "striking '') are called "designated strikers '', and are referred to by their full rate and rating in formal communications (i.e., machinist 's mate fireman (MMFN), as opposed to simply fireman (FN)), though the rating is often left off in informal communications. Those who have not officially been assigned to a rating are officially referred to as "undesignated '' or "non-rates. '' Once selected for a particular rating of their choice they become eligible for advancement in that community.
|
who sang the original knocking on heaven's door | Knockin ' on Heaven 's Door - wikipedia
"Knockin ' on Heaven 's Door '' is a song written and sung by Bob Dylan, for the soundtrack of the 1973 film Pat Garrett and Billy the Kid. Released as a single, it reached No. 12 on the Billboard Hot 100 singles chart. Described by Dylan biographer Clinton Heylin as "an exercise in splendid simplicity, '' the song, measured simply in terms of the number of other artists who have covered it, is one of Dylan 's most popular post-1960s compositions.
Members of the Western Writers of America chose it as one of the Top 100 Western songs of all time.
In January 1975 Eric Clapton played on Jamaican singer Arthur Louis ' recording of "Knockin ' on Heaven 's Door '' arranged in a reggae style. Subsequently, Clapton recorded his own reggae - style version of the song which was released in August 1975, two weeks after Louis 's version was released as a single in July 1975. Clapton 's single peaked at No. 38 in the UK Singles Chart. The single was less successful in the US, only reaching No. 109 in Cash Box. Clapton 's 1996 boxed set Crossroads 2: Live in the Seventies features a performance recorded in London in April 1977. The song was also performed during the Journeyman and One More Car, One More Rider world tours in 1990 and 2003. Additionally, the song has been included on several Clapton compilation albums, such as Time Pieces: The Best of Eric Clapton, Backtrackin ', The Cream of Clapton and Complete Clapton.
In 1987, Guns N ' Roses started including the song in their live sets. A live version of the song was released on the maxi - single of "Welcome to the Jungle '' the same year. They recorded a studio version in 1990 for the soundtrack of the film Days of Thunder which reached No. 18 on the Billboard Album Rock Tracks chart in 1990. This version was later slightly modified for the 1991 album Use Your Illusion II (basically discarding the responses in the second verse). Released as the fourth single from the album, it reached No. 2 on the UK Singles Chart as well as No. 12 in Australia and No. 1 in Ireland. Their performance of the song at the Freddie Mercury Tribute Concert in 1992 was used as the B - side for the single release and was also included on their Live Era: ' 87 -- ' 93 album, released in 1999. The music video for this version of the song was directed by Andy Morahan.
In 1996 and with the consent of Bob Dylan, Scottish musician Ted Christopher wrote a new verse for "Knockin ' on Heaven 's Door '' in memory of the schoolchildren and teacher killed in the Dunblane school massacre. This has been, according to some sources, one of the few times Dylan has officially authorized anybody to add or change the lyrics to one of his songs. This version of the song, including children from the village singing the chorus with guitarist and producer of Dylan 's album Infidels (1983), Mark Knopfler, was released on December 9 in the UK and reached No. 1 in the UK Singles Chart. The proceeds went to charities for children. The song was featured on the compilation album Hits 97, where all royalties from the song were given to three separate charities.
Gabrielle 's single "Rise '' (2000) sampled from this song.
Nick Talevski used the title for his book, Knocking On Heaven's Door: Rock Obituaries. A version of the song was performed by Raign for the TV show "The 100".
The title is used as the name of a Stand, "Heaven's Door", in Diamond Is Unbreakable, the fourth part of the manga "JoJo's Bizarre Adventure".
The title was also used for the 11th episode of Angel Beats!, and Bob Dylan, along with the song, was mentioned in the episode by T.K. and Hinata.
Covered by The Jenerators, the song was used during a tribute to the death of cast member Miguel Ferrer of NCIS: Los Angeles at the end of the Season 8 Episode 16 "New Tricks '', broadcast on March 5, 2017. Ferrer was a member of The Jenerators.
|
who sings the theme song for space jam | Space Jam (soundtrack) - Wikipedia
Space Jam: Music from and Inspired by the Motion Picture is the original soundtrack album of the 1996 film starring Michael Jordan and the Looney Tunes cast. An album featuring the film 's score by James Newton Howard was also released. The soundtrack was released by Warner Sunset and Atlantic Records on November 12, 1996. The worldwide smash hit "I Believe I Can Fly '' by R. Kelly was first released on the soundtrack.
The soundtrack earned mostly good reviews and was successful on the Billboard 200, peaking at # 2. The soundtrack also managed to sell very well; it was certified as double platinum less than two months after its release, in January 1997. In 2001, the soundtrack was certified 6x Platinum.
(*) Does not appear in the film
|
what type of rocks form when lava cools after a volcanic eruption | Volcanic rock - wikipedia
Volcanic rock (often shortened to volcanics in scientific contexts) is a rock formed from magma erupted from a volcano. In other words, it differs from other igneous rock by being of volcanic origin. As with all rock types, the concept of volcanic rock is artificial, and in nature volcanic rocks grade into hypabyssal and metamorphic rocks and constitute an important element of some sediments and sedimentary rocks. For these reasons, in geology, volcanics and shallow hypabyssal rocks are not always treated as distinct. In the context of Precambrian shield geology, the term "volcanic" is often applied to what are strictly metavolcanic rocks.
Volcanic rocks are among the most common rock types on Earth 's surface, particularly in the oceans. On land, they are very common at plate boundaries and in flood basalt provinces. It has been estimated that volcanic rocks cover about 8 % of the Earth 's current land surface.
Volcanic rocks are usually fine - grained or aphanitic to glass in texture. They often contain clasts of other rocks and phenocrysts. Phenocrysts are crystals that are larger than the matrix and are identifiable with the unaided eye. Rhomb porphyry is an example with large rhomb shaped phenocrysts embedded in a very fine grained matrix.
Volcanic rocks often have a vesicular texture caused by voids left by volatiles trapped in the molten lava. Pumice is a highly vesicular rock produced in explosive volcanic eruptions.
Most modern petrologists classify igneous rocks, including volcanic rocks, by their chemistry when dealing with their origin. The fact that different mineralogies and textures may be developed from the same initial magmas has led petrologists to rely heavily on chemistry to look at a volcanic rock 's origin.
The chemistry of volcanic rocks is dependent on two things: the initial composition of the primary magma and the subsequent differentiation. Differentiation of most volcanic rocks tends to increase the silica (SiO) content, mainly by crystal fractionation.
The initial composition of most volcanic rocks is basaltic, albeit small differences in initial compositions may result in multiple differentiation series. The most common of these series are tholeiitic, calc - alkaline, and alkaline.
Most volcanic rocks share a number of common minerals. Differentiation of volcanic rocks tends to increase the silica (SiO) content mainly by fractional crystallization. Thus, more evolved volcanic rocks tend to be richer in minerals with a higher amount of silica such as phyllo and tectosilicates including the feldspars, quartz polymorphs and muscovite. While still dominated by silicates, more primitive volcanic rocks have mineral assemblages with less silica, such as olivine and the pyroxenes. Bowen 's reaction series correctly predicts the order of formation of the most common minerals in volcanic rocks.
Occasionally, a magma may pick up crystals that crystallized from another magma; these crystals are called xenocrysts. Diamonds found in kimberlites are rare but well - known xenocrysts; the kimberlites do not create the diamonds, but pick them up and transport them to the surface of the Earth.
Volcanic rocks are named according to both their chemical composition and texture. Basalt is a very common volcanic rock with low silica content. Rhyolite is a volcanic rock with high silica content. Rhyolite has silica content similar to that of granite while basalt is compositionally equal to gabbro. Intermediate volcanic rocks include andesite, dacite, trachyte, and latite.
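As a very rough illustration of how silica content maps onto these names, the Python sketch below applies approximate, commonly quoted SiO2 boundaries. The thresholds are assumptions for illustration only; formal schemes such as the TAS classification also take total alkali content and texture into account:

```python
def classify_by_silica(sio2_wt_percent: float) -> str:
    """Very rough naming by silica content alone (illustrative thresholds)."""
    if sio2_wt_percent < 45:
        return "ultramafic (e.g. komatiite)"
    if sio2_wt_percent < 52:
        return "basalt"
    if sio2_wt_percent < 57:
        return "basaltic andesite"
    if sio2_wt_percent < 63:
        return "andesite"
    if sio2_wt_percent < 69:
        return "dacite"
    return "rhyolite"

print(classify_by_silica(49.5))  # basalt (low silica)
print(classify_by_silica(71.0))  # rhyolite (high silica)
```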
Pyroclastic rocks are the product of explosive volcanism. They are often felsic (high in silica). Pyroclastic rocks are often the result of volcanic debris, such as ash, bombs and tephra, and other volcanic ejecta. Examples of pyroclastic rocks are tuff and ignimbrite.
Shallow intrusions, which possess structure similar to volcanic rather than plutonic rocks, are also considered to be volcanic, shading into subvolcanic.
The terms lava stone and lava rock are more used by marketers than geologists, who would likely say "volcanic rock '' (since lava is a molten liquid and rock is solid). "Lava stone '' may describe anything from a friable silicic pumice to solid mafic flow basalt, and is sometimes used to describe rocks that were never lava, but look as if they were (such as sedimentary limestone with dissolution pitting). To convey anything about the physical or chemical properties of the rock, a more specific term should be used; a good supplier will know what sort of volcanic rock they are selling.
The sub-family of rocks that form from volcanic lava are called igneous volcanic rocks (to differentiate them from igneous rocks that form from magma below the surface, called igneous plutonic rocks).
The lavas of different volcanoes, when cooled and hardened, differ much in their appearance and composition. If a rhyolite lava-stream cools quickly, it can freeze into a black glassy substance called obsidian. When filled with bubbles of gas, the same lava may form the spongy-appearing pumice. Allowed to cool slowly, it forms a light-colored, uniformly solid rock called rhyolite.
The lavas, having cooled rapidly in contact with the air or water, are mostly finely crystalline or have at least fine - grained ground - mass representing that part of the viscous semi-crystalline lava flow that was still liquid at the moment of eruption. At this time they were exposed only to atmospheric pressure, and the steam and other gases, which they contained in great quantity were free to escape; many important modifications arise from this, the most striking being the frequent presence of numerous steam cavities (vesicular structure) often drawn out to elongated shapes subsequently filled up with minerals by infiltration (amygdaloidal structure).
As crystallization was going on while the mass was still creeping forward under the surface of the Earth, the latest formed minerals (in the ground - mass) are commonly arranged in subparallel winding lines that follow the direction of movement (fluxion or fluidal structure) -- and larger early minerals that previously crystallized may show the same arrangement. Most lavas fall considerably below their original temperatures before emitted. In their behavior, they present a close analogy to hot solutions of salts in water, which, when they approach the saturation temperature, first deposit a crop of large, well - formed crystals (labile stage) and subsequently precipitate clouds of smaller less perfect crystalline particles (metastable stage).
In igneous rocks the first generation of crystals generally forms before the lava has emerged to the surface, that is to say, during the ascent from the subterranean depths to the crater of the volcano. It has frequently been verified by observation that freshly emitted lavas contain large crystals borne along in a molten, liquid mass. The large, well - formed, early crystals (phenocrysts) are said to be porphyritic; the smaller crystals of the surrounding matrix or ground - mass belong to the post-effusion stage. More rarely lavas are completely fused at the moment of ejection; they may then cool to form a non-porphyritic, finely crystalline rock, or if more rapidly chilled may in large part be non-crystalline or glassy (vitreous rocks such as obsidian, tachylyte, pitchstone).
A common feature of glassy rocks is the presence of rounded bodies (spherulites), consisting of fine divergent fibres radiating from a center; they consist of imperfect crystals of feldspar, mixed with quartz or tridymite; similar bodies are often produced artificially in glasses that are allowed to cool slowly. Rarely these spherulites are hollow or consist of concentric shells with spaces between (lithophysae). Perlitic structure, also common in glasses, consists of the presence of concentric rounded cracks owing to contraction on cooling.
The phenocrysts or porphyritic minerals are not only larger than those of the ground - mass; as the matrix was still liquid when they formed they were free to take perfect crystalline shapes, without interference by the pressure of adjacent crystals. They seem to have grown rapidly, as they are often filled with enclosures of glassy or finely crystalline material like that of the ground - mass. Microscopic examination of the phenocrysts often reveals that they have had a complex history. Very frequently they show layers of different composition, indicated by variations in color or other optical properties; thus augite may be green in the center surrounded by various shades of brown; or they may be pale green centrally and darker green with strong pleochroism (aegirine) at the periphery.
In the feldspars the center is usually richer in calcium than the surrounding layers, and successive zones may often be noted, each less calcic than those within it. Phenocrysts of quartz (and of other minerals), instead of sharp, perfect crystalline faces, may show rounded corroded surfaces, with the points blunted and irregular tongue - like projections of the matrix into the substance of the crystal. It is clear that after the mineral had crystallized it was partly again dissolved or corroded at some period before the matrix solidified.
Corroded phenocrysts of biotite and hornblende are very common in some lavas; they are surrounded by black rims of magnetite mixed with pale green augite. The hornblende or biotite substance has proved unstable at a certain stage of consolidation, and has been replaced by a paramorph of augite and magnetite, which may partially or completely substitute for the original crystal but still retains its characteristic outlines.
Equivalent rock types, listed as volcanic rock / subvolcanic rock / plutonic rock:
Komatiite, Picrite basalt / Kimberlite, Lamproite / Peridotite
Basalt / Diabase (Dolerite) / Gabbro
Andesite / Microdiorite / Diorite
Dacite / Microgranodiorite / Granodiorite
Rhyolite / Microgranite, Aplite / Granite
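These equivalences can also be expressed as a simple lookup. The following is a minimal sketch, purely illustrative: the rock names and groupings come straight from the list above, while the dictionary and helper function are hypothetical conveniences rather than any standard library API.

```python
# Volcanic / subvolcanic / plutonic equivalences from the list above,
# keyed by the volcanic rock name (lower-cased).
ROCK_EQUIVALENTS = {
    "komatiite, picrite basalt": ("kimberlite, lamproite", "peridotite"),
    "basalt": ("diabase (dolerite)", "gabbro"),
    "andesite": ("microdiorite", "diorite"),
    "dacite": ("microgranodiorite", "granodiorite"),
    "rhyolite": ("microgranite, aplite", "granite"),
}

def equivalents(volcanic_name: str) -> tuple:
    """Return the (subvolcanic, plutonic) rocks of equivalent composition."""
    return ROCK_EQUIVALENTS[volcanic_name.lower()]

print(equivalents("Basalt"))    # ('diabase (dolerite)', 'gabbro')
print(equivalents("Rhyolite"))  # ('microgranite, aplite', 'granite')
```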
|
identify and explain at least 3 different forms of protest that took place during the vietnam war | Lists of protests against the Vietnam War - wikipedia
Protests against the Vietnam War took place in the 1960s and 1970s. The protests were part of a movement in opposition to the Vietnam War and took place mainly in the United States.
There were many pro - and anti-war slogans and chants. Those who used the anti-war slogans were commonly called "doves ''; those who supported the war were known as "hawks ''.
|
don't forget me when i'm gone i have loved you for so long | Do n't Forget Me (When I 'm Gone) - wikipedia
"Do n't Forget Me (When I 'm Gone) '' is a song by Canadian rock band Glass Tiger. It was released in January 1986 as the lead from their debut album, The Thin Red Line. The song reached number - one in Canada and number 2 in the United States. The song features backing vocals by rock singer Bryan Adams.
In 1985, Glass Tiger chose Jim Vallance to produce the band 's debut album. At the time, Vallance was primarily known as a songwriter, having written most frequently (and successfully) with Bryan Adams (who can be heard providing background vocals towards the end of this song). He also had some previous production experience, having produced one album apiece by Adams, Doug and the Slugs and CANO in the early 1980s. The band 's lead vocalist Alan Frew recalled: "It worked out great because we were all at the same stage of development. He did n't change the sound of the band at all. He let us experiment but was n't afraid to get heavy - handed when he had to. '' Vallance composed "Do n't Forget Me (When I 'm Gone) '' with the band, while Adams provided backing vocals. Frew - "On the very first day that we met Jim Vallance, he picked us up at the airport and to break the ice asked us what we were listening to. One was Tears For Fears. We went to his house and drank tea and listened to some tunes. ' Everybody Wants to Rule the World ' came on and we really liked the shuffle beat. So we went into the studio and based on this shuffle beat, we wrote ' Do n't Forget Me (When I 'm Gone) '. First day, first song. ''
"Do n't Forget Me (When I 'm Gone) '' topped the Canadian Singles Chart in March 1986, and spent two weeks at number 1. The single was certified platinum by the Canadian Recording Industry Association in July. The song entered the U.S. Billboard Hot 100 in July, peaked at number 2 in October - kept from number 1 by Janet Jackson 's "When I Think of You '' - and spent 24 weeks on the chart. It reached number 1 on the Singles Sales chart and number 6 on the Hot 100 Airplay chart. The song also peaked at number 17 on the Mainstream Rock chart, number 30 on the Adult Contemporary chart, and number 34 on the Billboard Year - End singles chart of 1986. The single reached the top 10 in Australia, number 27 in New Zealand, number 29 in the United Kingdom, and number 40 in the Netherlands.
Frew credited the song 's chart performance to "solid record company involvement '' and the band 's international appeal. "We are n't rewriting musical history by any means, '' he added. "But our melody lines are strong and mature enough to appeal to the English - speaking world. '' The song won the 1986 Juno Award for Single of the Year, and was named top Canadian single in the Rock Express magazine readers ' poll awards in 1987. In 1996, the Society of Composers, Authors and Music Publishers of Canada honored the song for airing more than 100,000 times on Canadian radio. Glass Tiger performed the song during an episode of the 2005 NBC reality television program Hit Me, Baby, One More Time.
The song 's original music video, made for the Canadian market, mixed performance footage with a storybook concept. Directed by Rob Quartly, the video was nominated for Best Video at the Juno Awards of 1986. This version was the first video to air on the MuchMoreRetro digital cable music video channel when it launched on September 4, 2003. A second video was created for other markets, according to Manhattan Records Vice President of A&R Bruce Garfield. He noted that "Steven Reed, our senior vice president of marketing, took a very strong stand because the Canadian video was too cutesy and directed solely toward the youth market. '' Garfield added, "It did n't focus enough on the artistic integrity and entertainment aspect of the band. '' The newer version received heavy rotation on MTV.
|
harukanaru toki no naka de akane and akram | Haruka: Beyond the Stream of Time (manga) - wikipedia
Harukanaru Toki no Naka de (遙 か なる 時空 の 中 で) is a Japanese shojo manga series written by Tohko Mizuno who also worked on the video game of the same name, which was developed by Ruby Party and published by Koei. The manga was serialized in LaLa DX magazine from July 1999 to January 2010, and published by Hakusensha in 17 volumes. An English version was licensed by Viz Media under the title Haruka: Beyond the Stream of Time. The storyline initially follows the characters and events from the first installment of the video game series. An anime series titled Harukanaru Toki no Naka de ~ Hachiyou Shou ~ (遙 か なる 時空 の 中 で ~ 八 葉 抄 ~) was developed by Yumeta Company with Aki Tsunaki directing, with 26 episodes broadcast from October 5, 2004 to March 29, 2005. The anime received an English dub titled Haruka: Beyond the Stream of Time: A Tale of the Eight Guardians.
Although an exact time period is not given for Akane 's arrival in Heian Kyou, it seems likely that the story is set in the earlier part of the era, when families such as the Tachibana - ke, Fujiwara - ke and Minamoto - ke were all in some prominence. Though the Fujiwara - ke and Minamoto - ke continued to hold significant position through until the Genpei war, the Tachibana - ke faded out of favor during the 9th and 10th centuries. This suggests that Harukanaru Toki no Naka de is set sometime in the 9th or 10th centuries. This would also fit with later continuations of the Harukanaru Toki no Naka de plot, since Harukanaru Toki no Naka de 2 is set a century later, and features prominently once more the Fujiwara - ke and Minamoto - ke, as well as the Taira - ke. The third installment, Harukanaru Toki no Naka de 3 is set in the period of the Genpei War, which came to an end in 1185. If it can be considered that the third installment is set a century on from the second, this would indicate that the original Harukanaru Toki no Naka de storyline is most probably set in the latter half of the 10th century.
On her walk to school, ordinary Kyoto High school student Motomiya Akane hears a voice calling to her from an old well in an abandoned historical estate. The voice is that of the oni leader Akuram, and Akane finds herself summoned into another world that resembles the city of Kyoto during the Heian Period (approx. 800 - 1200). Here she is asked to be the Ryuujin no Miko (龍神 の 神子, Priestess of the Dragon God), a legendary figure who possesses the power of the gods. Akane is told that she must defend this world, called Kyou (京), from the encroachment of the Oni Clan (鬼 の 一族) before she can return home. Fortunately, her school friends Tenma and Shimon are on hand to help her out and along with six Kyou natives they become members of the Hachiyou (八 葉, Eight Leaves), a group of specially chosen men who act as the Miko 's protectors.
The manga was written and illustrated by Mizuno. It was serialized in LaLa DX magazine from July 1999 to January 2010, and published by Hakusensha in 17 volumes. An English version was licensed by Viz Media under the title Haruka: Beyond the Stream of Time.
An anime television series titled Harukanaru Toki no Naka de ~ Hachiyou Shou ~ (遙 か なる 時空 の 中 で ~ 八 葉 抄 ~) was developed by Yumeta Company with Aki Tsunaki directing, with 26 episodes broadcast from October 5, 2004 to March 29, 2005. The twenty - six episode anime series was licensed in English by Bandai Visual and released in a series of nine DVD volumes under the title Haruka: Beyond the Stream of Time -- A Tale of the Eight Guardians. A new volume was released monthly with the first volume being released April 22, 2008 and the final volume January 13, 2009. The final volume of the DVD featured omake optional endings for each of the characters. This was so that, in keeping with the game 's Neoromance theme, viewers could choose for themselves which of the Hachiyou Akane liked the best.
Two OVA episodes were released as part of the anime series. Ten - Kokoro no yukue (天 - 心 の 行方, Heaven -- Destination of the Heart) was released on December 23, 2005, and Chi - Omoi no ariko (地 - 想い の 在処, Earth - Where the Heart Lies) was released on January 27, 2006. The Ten and Chi OVAs feature the characters from the anime, but its storyline is not based on the manga.
An anime feature film, titled Harukanaru Toki no Naka de - Maihitoyo (遙 か なる 時空 の 中 で ~ 舞 一夜 ~) was produced and released in Japan August 19, 2006.
|
what are the lyrics to take me out to the ball game | Take Me Out to the Ball Game - wikipedia
"Take Me Out to the Ball Game '' is a 1908 Tin Pan Alley song by Jack Norworth and Albert Von Tilzer which has become the unofficial anthem of North American baseball, although neither of its authors had attended a game prior to writing the song. The song 's chorus is traditionally sung during the middle of the seventh inning of a baseball game. Fans are generally encouraged to sing along, and at some ballparks, the words "home team '' are replaced with the team name.
Jack Norworth, while riding a subway train, was inspired by a sign that said "Baseball Today -- Polo Grounds ''. In the song, Katie 's beau calls to ask her out to see a show. She accepts the date, but only if her date will take her out to the baseball game. The words were set to music by Albert Von Tilzer. (Norworth and Von Tilzer finally saw their first Major League Baseball games 32 and 20 years later, respectively.) The song was first sung by Norworth 's then - wife Nora Bayes and popularized by many other vaudeville acts. It was played at a ballpark for the first known time in 1934, at a high - school game in Los Angeles; it was played later that year during the fourth game of the 1934 World Series.
Norworth wrote an alternative version of the song in 1927. (Norworth and Bayes were famous for writing and performing such hits as "Shine On, Harvest Moon ''.) With the sale of so many records, sheet music, and piano rolls, the song became one of the most popular hits of 1908. The Haydn Quartet singing group, led by popular tenor Harry MacDonough, recorded a successful version on Victor Records.
The most famous recording of the song was credited to "Billy Murray and the Haydn Quartet '', even though Murray did not sing on it. The confusion, nonetheless, is so pervasive that, when "Take Me Out to the Ball Game '' was selected by the National Endowment for the Arts and the Recording Industry Association of America as one of the 365 top "Songs of the Century '', the song was credited to Billy Murray, implying his recording of it as having received the most votes among songs from the first decade. The first recorded version was by Edward Meeker. Meeker 's recording was selected by the Library of Congress as a 2010 addition to the National Recording Registry, which selects recordings annually that are "culturally, historically, or aesthetically significant ''.
Below are the lyrics of the 1908 version, which is out of copyright.
Katie Casey was baseball mad,
Had the fever and had it bad.
Just to root for the home town crew,
Ev'ry sou Katie blew.
On a Saturday her young beau
Called to see if she 'd like to go
To see a show, but Miss Kate said "No,
I 'll tell you what you can do: ''

Chorus:
Take me out to the ball game,
Take me out with the crowd;
Buy me some peanuts and Cracker Jack,
I do n't care if I never get back.
Let me root, root, root for the home team,
If they do n't win, it 's a shame.
For it 's one, two, three strikes, you 're out,
At the old ball game.

Katie Casey saw all the games,
Knew the players by their first names.
Told the umpire he was wrong,
All along, Good and strong.
When the score was just two to two,
Katie Casey knew what to do,
Just to cheer up the boys she knew,
She made the gang sing this song:
The term "sou '', a coin of French origin, was at the time common slang for a low - denomination coin. In French the expression "sans le sou '' means penniless. Carly Simon 's version, produced for Ken Burns ' 1994 documentary Baseball, reads "Ev'ry cent / Katie spent ''.
Though not so indicated in the lyrics, the chorus is usually sung with a pause in the middle of the word "Cracker '', giving "Cracker Jack '' a pronunciation "Crac -- - ker Jack ''. Also, there is a noticeable pause between the first and second "root ''.
The song (or at least its chorus) has been recorded or cited countless times in the 100 years since it was written. The original music and 1908 lyrics of the song will not revert to the public domain in the United States and the United Kingdom until September 1, 2029, or 70 years after the composers ' deaths; as well, the copyright to the revised 1927 lyrics still remain in effect. It has been used as an instrumental underscore or introduction to many films or skits having to do with baseball.
Copyright clarification: musical works published before 1923 are in the public domain in the United States, though recorded songs are another matter. The written music and the 1908 lyrics are considered public domain in the United States. However, the UK applies the rule of 70 years after the composers ' deaths.
The first verse of the 1927 version is sung by Gene Kelly and Frank Sinatra at the start of the MGM musical film, Take Me Out to the Ball Game (1949), a movie that also features a song about the famous and fictitious double play combination, O'Brien to Ryan to Goldberg.
Bing Crosby included the song in a medley on his album Join Bing and Sing Along (1959).
In the early to mid-1980s, the Kidsongs Kids recorded a different version of this song for A Day at Old MacDonald 's Farm.
In the mid-1990s, a Major League Baseball ad campaign featured versions of the song performed by musicians of several different genres. An alternative rock version by the Goo Goo Dolls was also recorded. Multiple genre Louisiana singer - songwriter Dr. John and pop singer Carly Simon both recorded different versions of the song for the PBS documentary series Baseball, by Ken Burns.
In 2001, Nike aired a commercial featuring a diverse group of Major League Baseball players singing lines of the song in their native languages. The players and languages featured were Ken Griffey, Jr. (American English), Alex Rodriguez (Caribbean Spanish), Chan Ho Park (Korean), Kazuhiro Sasaki (Japanese), Graeme Lloyd (Australian English), Éric Gagné (Québécois French), Andruw Jones (Dutch), John Franco (Italian), Iván Rodríguez (Caribbean Spanish), and Mark McGwire (American English).
The iconic song has been used and alluded to in many different ways:
|
to beat with a stick on the soles of the feet | Foot whipping - wikipedia
Foot whipping or bastinado is a method of corporal punishment which consists in hitting the bare soles of a person 's feet. Unlike most types of flogging, this punishment was meant to cause pain rather than actual injury to the victim. Blows were generally delivered with a light rod, knotted cord, or lash.
The receiving person is required to be barefoot. The uncovered soles of the feet need to be placed in an exposed position. The beating is typically performed with an object such as a cane or switch. The strokes are usually aimed at the arches of the feet and repeated a set number of times.
Bastinado is also referred to as foot (bottom) caning or sole caning, depending on the instrument in use. The particular Middle Eastern method is called falaka or falanga, derived from the Greek term phalanx. The German term is Bastonade, derived from the Italian noun bastonata (a stroke with a stick). In former times it was also referred to as Sohlenstreich (corresponding to "striking the soles ''). The Chinese term is da jiao xin.
The first written documentation of bastinado in Europe dates back to the year 1537, and in China to 960. References to bastinado have been hypothesised to occur in the Bible (Prov. 22: 15; Lev. 19: 20; Deut. 22: 18), suggesting that the practice dates to antiquity.
This subform of flagellation differs from most other forms in limiting the strokes to a very narrow section of the body. The beatings typically aim at the vaults of the feet, where the soles are particularly pain - sensitive, usually avoiding direct blows to the balls and heels and concentrating on the small area in between.
As the skin texture under the soles of the feet can naturally endure high levels of strain, injuries demanding medical attention, such as lacerations or bruises, are rarely inflicted if certain precautions are observed by the executant. The undersides of the feet have therefore become a common target for corporal punishment in many cultures while basically different methods exist.
Foot whipping is typically carried out within prisons and structurally similar institutions. Besides inflicting intense physical suffering it trades on the significance of bare feet as a dishonouring socio - cultural attribute. Therefore, it is regarded to be a particularly humiliating as well as degrading form of punishment.
As wearing shoes has been an integral element of social appearance since antiquity, the visual exposure of bare feet is a traditional and sometimes even ritualistic practice used to display the subjection or submission of a person to a superior power. It was often used as a visual indicator of subservient standing within a social structure and to display an imbalance in power. It was therefore routinely imposed as a visual identifier and obstacle on slaves and prisoners, who were often divested of rights and liberties in a similar manner. Exploiting its socio - cultural significance, people have also been forced to go barefoot as a formal shame sanction and for public humiliation.
Foot whipping hereby comprises a drastic aggravation of forcing a person to expose his or her bare feet as part of a punishment and the underlying significance thereof. By making the highly pain susceptible soles of a barefooted person the target for inflicting physical punishment, a usually private section of the body is forcefully accessed. As the person is of course unable to evade the beatings, a critical sphere of his or her privacy is repeatedly invaded, which underlines an especially disproportionate imbalance in power. As a result, a by nature entirely defenseless prison inmate often feels particularly intimidated and victimized by this specific form punishment.
Keeping prisoners barefoot is common practice in several countries of today.
The prisoners are hereby excluded from the protective benefits of standard clothing items with their uncovered feet exposed to the obstacles of the surrounding area. A lack of protection can alone have a victimizing effect and make the person feel vulnerable. The reluctant exposure can aggravate the perception of being powerless or helpless usually experienced in situations of incarceration or captivity. Beating a prisoner 's bare feet can hereby drastically escalate his or her emotional distress and mental suffering.
Foot whipping therefore poses a distinct threat and is often particularly dreaded by potential victims (usually prisoners). Exploiting these effects, the penalty is typically used to maintain discipline and compliance in prisons.
Bastinado is commonly associated with Middle and Far Eastern nations, where it is occasionally carried out in public and is therefore covered by occasional reports and photographs. However, it has frequently been practised in the Western world as well, particularly in prisons, reformatories, boarding schools and similar institutions.
In Europe bastinado was a frequently encountered form of corporal punishment, particularly in German areas, where it was mainly carried out to enforce discipline within penal and reformatory institutions, culminating during the Third Reich era. In several German and Austrian institutions it was still practised during the 1950s. Although bastinado was practised in penal institutions of the Western world until the late 20th century, it was barely noticed, as there is no record of it ever being imposed as a formal sentence at a high level. Instead it was carried out at a rather low level within the confines of the institutions, typically to punish inmates during incarceration. If not specifically authorized, the practice was usually condoned while remaining unknown to the public. Foot whipping also attracts little public interest in general, as it appears unspectacular and relatively inoffensive compared to other punishment methods. As it was not executed publicly in the Western world, it came to be witnessed only by the individuals directly involved. Former prisoners rarely report such incidents, as bastinado is widely perceived as a degrading punishment (see public humiliation), while former executants are usually bound to confidentiality.
Bastinado is still used as prison punishment in several countries (see below). As it causes a high level of suffering for the victim and physical evidence remains largely undetectable after some time, it is frequently used for interrogation and torture.
Bastinado usually requires a certain amount of collaborative effort and an authoritarian presence on the executing party to be enforced. Therefore, it typically appears in settings where corporal punishment is officially approved to be exerted on predefined group of people. This can be situations of imprisonment and incarceration as well as slavery. This moderated subform of flagellation is characteristically prevalent where subjected individuals are forced to remain barefoot.
Foot whipping was common practice as means of disciplinary punishment in different kinds of institutions throughout Central Europe until the 1950s, especially in German territories. In German prisons this method consistently served as the principal disciplinary punishment. Throughout the Nazi era it was frequently used in German penal institutions and labour camps. It was also inflicted on the population in occupied territories, notably Denmark and Norway.
During the era of slavery in Brazil and the American South it was often used whenever so - called "clean beating '' instead of the prevalent more radical forms of flagellation was demanded. This was the case when a loss in market value through visible injuries especially on females was to be avoided. As many so - called "slave - codes '' included a barefoot constraint, bastinado required minimal effort to be performed. As it was sufficiently effective but usually left no visible or relevant injuries, bastinado was often used as an alternative for female slaves with higher market value.
Bastinado is still practised in penal institutions of several countries around the world. In a 1967 survey 83 % of the inmates in Greek prisons reported about frequent infliction of bastinado. It was also used against rioting students. In Spanish prisons 39 % of the inmates reported about this kind of treatment. The French Sûreté reportedly used it to extract confessions. British occupants used it in Palestine, French occupants in Algeria. Within colonial India it was used to punish tax offenders. Within penal institutions in Europe bastinado was reportedly used in Germany, Austria, France, Spain, Greece, Poland, Romania, Bulgaria, Portugal, Macedonia, Lithuania, Georgia, Ukraine, Cyprus, Slovakia and Croatia. Other nations with documented use of bastinado are Syria, Israel, Turkey, Morocco, Iran, Egypt, Iraq, Libya, Lebanon, Tunisia, Yemen, Saudi Arabia, Kuwait, Brazil, Argentina, Nicaragua, Chile, South Africa, Venezuela, Rhodesia, Zimbabwe, Paraguay, Honduras, Bolivia, Ethiopia, Somalia, Kenya, Cameroon, Mauritius, Philippines, South Korea, Pakistan and Nepal.
The prisoner is barefoot and restrained in such a manner that the feet can not be shifted out of position. The intention is to avert serious injuries to the forefoot from stray blows, especially to the easily fractured toes. The energy of the stroke impacts is typically meant to be absorbed by the muscular tissue inside the vaults of the feet.
The Middle Eastern falaka method entails tying up the person 's feet into an elevated position while lying on the back, the beating is generally performed with a rigid wooden stick, a club or a truncheon. The term falaka describes the wooden plank used to tie up the ankles, however different items are used for this purpose. The essentially different German method, which was practiced until the end of the Third Reich era, consisted in strapping the barefoot prisoner prone onto a wooden bench or a plank. Hereby the feet were forced into a pointed posture (plantar flexion) with their bare undersides facing upward. For this purpose the upper body and both ankles were strapped onto the bench. The prisoner 's hands were tied behind the back, usually using a cord or a leather strap. Hereby the person was rendered largely immobile and was especially not able to move the feet out of their forced position. The typically occurring contortions of the body during the execution were largely halted as well. This way the punishment could be inflicted with a certain degree of accuracy to not cause unwanted lesions or other severe injury. It was typically executed with a slightly flexible beating accessory such as a cane or a switch. More infrequently also short whips or leather straps were used. This form of punishment was mainly employed in women 's penal institutions and labor camps where prisoners were often kept barefoot.
The middle eastern falaka is more inclined to cause serious injuries such as bone fractures and nerve damage than the German method, as the person undergoing falaka can move the body and feet to a certain degree. As a result, the strokes impact more or less randomly and injury - prone areas are frequently affected. As falaka is usually carried out with a rigid and often heavy stick, it accordingly causes blunt trauma leaving the person unable to walk and often impeded for life. For the German form, the prisoner was principally unable to move and the beatings were performed with lightweight objects that were relatively thin in diameter and usually slightly flexible. The physical aftereffects of the procedure remained mostly superficial and unwanted injuries were relatively rare. Therefore, the person usually remained capable of walking right after the punishment. Still, the German form of bastinado caused severe levels of pain and suffering for the receiving person.
The beatings usually aim at the tender longitudinal arch of the foot avoiding the bone structure of the ball and the heel. The vaults are particularly touch - sensitive and therefore susceptible to pain due to the tight clustering of nerve endings.
When exerted with a thin and flexible object of lighter weight the corporal effects usually remain temporary. The numerous bones and tendons of the foot are sufficiently protected by muscular tissue so the impact is absorbed by the skin and muscular tissue. The skin under the soles of the human feet is of high elasticity and consistence similar to the palms of the hands. Lesions and hematoma therefore rarely occur while beating marks are mostly superficial. Depending on the characteristics of the beating device in use and the intensity of the beatings the emerging visible aftereffects remain ascertainable over a time frame of a few hours to several days. The receiving person usually remains able to walk without help right after the punishment.
When the beating is executed with heavy sticks like clubs or truncheons according to the falaka method, bone fractures commonly occur as well as nerve damage and severe hematoma. The sustained injuries can take a long time to heal with even lasting or irreversible physical damage to the human musculoskeletal system.
When thin and flexible instruments are used the immediate experience of pain is described as acutely stinging and searing. The instant sensations are disproportionally intense compared to the applied force and reflexively radiate through the body. The subsequent pain sensations of a succession of strokes are often described as throbbing, piercing or burning and gradually ease off within a few hours. A slightly stinging or nagging sensation often remains perceptible for a couple of days, especially while walking.
As the nerve endings under the soles of the feet do not adapt to recurring sensations or impacts, the pain reception does not alleviate through continuous beatings. On the contrary the perception of pain is further intensified over the course of additional impacts through the activation of nociceptors. Over a sequence of impacts applied with nearly constant force the perception of pain is therefore progressively intensifying until a maximum level of activation is reached. For that reason a facile impact can already cause an acute pain sensation after a certain number of preceding strokes.
The subjective experience of corporal suffering can, however, diverge widely according to a person 's individual pain tolerance. Pain reception is aggravated by feelings of anxiety and agitation, so subjective pain susceptibility is higher the more apprehensive the individual feels. Further, women generally experience physical pain notably more intensely, typically react with a higher level of anxiety, and are distinctly sensitive to pressure pain. According to such accounts, women 's subjective suffering under the infliction of foot whipping is therefore significantly more severe, and the acute pain sensations can be experienced as largely intolerable.
Seizing and withholding the footwear of a person in a situation of imprisonment, which is commonplace in many countries, often has a demoralizing and victimizing effect on the individual. As bare feet are traditionally regarded as a token of subjection and captivity, the unaccustomed and largely reluctant exposure is often perceived as humiliating or oppressive. The increased physical vulnerability of having to remain barefoot often leads to trepidation and a feeling of insecurity. This measure alone can therefore cause significant distress.
This circumstance is usually aggravated if the bare feet are the target for corporal punishment. The feet are typically hidden away and protected by footwear in most social situations, hereby avoiding unwanted exposure. Therefore, the enforced exposure for the purpose of punishment is mostly perceived as a form of harassment. The obligatory restraints further add to the anxiety and humiliation of the captive.
Any form of methodical corporal punishment typically causes a high level of distress through the inflicted pain and the experience of being defenseless and unable to evade the situation. The mostly occurring loss of composure during the punishment as well as the experience of weakness and vulnerability often permanently damages a person 's self - esteem.
Beating the undersides of a person 's feet moreover conveys an especially steep imbalance in power between the executing party (prison staff or similar) towards the receiving individual (typically prison inmate). A rather private area of the body, which traditionally remains covered or not visible in the presence of other people, is forcibly exposed and beaten. This act represents a blunt intrusion into the sphere of personal privacy and an according elimination of personal boundaries. By this means the receiving person experiences his or her individual powerlessness against the executing authority in a particularly manifest way. This experience can also change or deconstruct the individual 's self - perception and self - awareness.
As a result, the experience of bastinado leads to drastic physical and mental suffering for the receiving individual and is therefore regarded as a highly effectual method of corporal punishment. Exploiting the effects of bastinado on a person, it is still frequently employed on prisoners in several countries.
|
who sang where have all the flowers gone | Where Have All the Flowers Gone? - Wikipedia
"Where Have All the Flowers Gone? '' is a modern folk - style song. The melody and the first three verses were written by Pete Seeger in 1955 and published in Sing Out! magazine. Additional verses were added in May 1960 by Joe Hickerson, who turned it into a circular song. Its rhetorical "where? '' and meditation on death place the song in the ubi sunt tradition. In 2010, the New Statesman listed it as one of the "Top 20 Political Songs ''.
The 1964 release of the song as a Columbia Records 45 single, 13 - 33088, by Pete Seeger was inducted into the Grammy Hall of Fame in 2002 in the Folk category.
Seeger found inspiration for the song in October 1955 while he was on a plane bound for a concert at Oberlin College, one of the few venues which would hire him during the McCarthy era. Leafing through his notebook he saw the passage, "Where are the flowers, the girls have plucked them. Where are the girls, they 've all taken husbands. Where are the men, they 're all in the army. '' These lines were taken from the traditional Cossack folk song "Koloda - Duda '', referenced in the Mikhail Sholokhov novel And Quiet Flows the Don (1934), which Seeger had read "at least a year or two before ''.
Seeger created a song which was subsequently published in Sing Out in 1962. He recorded a version with three verses on The Rainbow Quest album (Folkways LP FA 2454) released in July 1960. Later, Joe Hickerson added two more verses with a recapitulation of the first in May 1960 in Bloomington, Indiana.
The song appeared on the compilation album Pete Seeger 's Greatest Hits (1967) released by Columbia Records as CS 9416.
Pete Seeger 's recording from the Columbia album The Bitter and the Sweet (November 1962), CL 1916, produced by John H. Hammond was also released as a Columbia Hall of Fame 45 single as 13 - 33088 backed by "Little Boxes '' in August, 1965.
Adhunik Bengali singer Anjan Dutt covered the song in his 2001 album Rawng Pencil.
Pete Seeger 's recording of his composition was inducted into the Grammy Hall of Fame, which is a special Grammy award established in 1973 to honor recordings that are at least 25 years old and that have "qualitative or historical significance. ''
|
who won against the patriots and the dolphins | Dolphins -- Patriots rivalry - wikipedia
Postseason series: Patriots lead 2 -- 1
The Dolphins -- Patriots rivalry is an American football rivalry between the Miami Dolphins and New England Patriots of the National Football League (NFL). The Dolphins lead the all - time series 54 -- 51. The two teams play twice every regular season.
While not as famous as some other rivalries, the rivalry has a long history that dates back to the 1960s. The beginning of the rivalry was dominated by the Dolphins, as at the time the Dolphins were one of the NFL 's most successful teams, while the Patriots were one of the worst. The Patriots finally made the Super Bowl in 1985.
Starting in 1986, the rivalry was a little bit more even, with the Pats having a 7 - game winning streak from 1986 to 1988. The Dolphins then took over the rivalry once again, winning 13 of the next 15 matchups between the 2 teams. Both teams had great quarterbacks in the 90s, with the Patriots having Drew Bledsoe and the Dolphins with Dan Marino. The Dolphins continued to dominate the rivalry through the late 1990s with the Dolphins sweeping the Patriots in back to back years, 1999 and 2000.
Since 2003, the Patriots dominated the rivalry, though they have noticeably struggled more against the Dolphins than against the other AFC East teams during this time period. In 2004, one of the most famous moments in the rivalry happened where the Dolphins, 2 -- 11 at the time, upset the 12 -- 1 Patriots in a game that has been known as "The Night That Courage Wore Orange ''. In 2008, the rivalry briefly intensified when the Dolphins became the only team other than the Patriots since 2003 to win the division. In week 3 of the aforementioned 2008 season, the Dolphins used the Wildcat formation to throw the Patriots off and went on to upset them, 38 -- 13.
Also notable is the fact that the Dolphins and Patriots are the only NFL teams to post undefeated regular season records following the NFL - AFL merger. The 1972 Dolphins finished with a 14 -- 0 regular season record and went on to win Super Bowl VIII, finishing the only complete perfect season in NFL history, while the 2007 Patriots were the first team to go undefeated in the regular season since the league expanded to 16 games, but famously lost Super Bowl XLII against the New York Giants.
|
what type of interatomic bonds form in n2 and nh3 | Bond order - wikipedia
Bond order is the number of chemical bonds between a pair of atoms. For example, in diatomic nitrogen N ≡ N the bond order is 3, in acetylene H − C ≡ C − H the bond order between the two carbon atoms is also 3, and the C − H bond order is 1. Bond order gives an indication of the stability of a bond. A diatomic species with a bond order of 0 can not exist as a stable molecule, although within a compound a particular pair of atoms may have a bond order of 0. Isoelectronic species have the same bond order.
In molecules that have resonance or nonclassical bonding, bond order need not be an integer. In benzene, the delocalized molecular orbitals contain 6 pi electrons spread over six carbons, essentially yielding half a pi bond together with the sigma bond for each pair of adjacent carbon atoms, giving a calculated bond order of 1.5. Furthermore, bond orders of 1.1, for example, can arise under complex scenarios and essentially refer to bond strength relative to bonds with order 1.
In molecular orbital theory, bond order is also defined as half the difference between the number of bonding electrons and the number of antibonding electrons: bond order = (number of bonding electrons − number of antibonding electrons) / 2. This often, but not always, yields similar results for bonds near their equilibrium lengths, but it does not work for stretched bonds. Bond order is also an index of bond strength and is used extensively in valence bond theory.
Generally, the higher the bond order, the stronger the bond. Bond orders of one - half may be stable, as shown by the stability of H2+ (the dihydrogen cation; bond length 106 pm, bond energy 269 kJ / mol) and He2+ (the dihelium cation; bond length 108 pm, bond energy 251 kJ / mol).
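As a worked illustration of the molecular - orbital definition above (a sketch, not part of the original article), the snippet below computes bond orders from bonding and antibonding electron counts. The electron counts are standard MO - diagram values assumed here to reproduce the figures quoted in the text: 3 for N2 and one - half for H2+ and He2+.

```python
# Bond order = (bonding electrons - antibonding electrons) / 2.
def bond_order(bonding_electrons: int, antibonding_electrons: int) -> float:
    """Half the difference between bonding and antibonding electron counts."""
    return (bonding_electrons - antibonding_electrons) / 2

# Electron counts over all occupied molecular orbitals (core included);
# assumed textbook values, listed only for illustration.
examples = {
    "N2": (10, 4),    # -> 3.0, the triple bond of dinitrogen
    "H2+": (1, 0),    # -> 0.5
    "He2+": (2, 1),   # -> 0.5
    "He2": (2, 2),    # -> 0.0, no stable bond expected
}

for species, (n_bonding, n_antibonding) in examples.items():
    print(f"{species}: bond order = {bond_order(n_bonding, n_antibonding)}")
```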
The bond order concept is used in molecular dynamics and bond order potentials. The magnitude of the bond order is associated with the bond length. According to Linus Pauling in 1947, the bond order s_ij between atoms i and j is experimentally described as

s_ij = exp ((d_1 − d_ij) / b)

where d_1 is the single - bond length, d_ij is the experimentally measured bond length, and b is a constant, depending on the atoms. Pauling suggested a value of 0.353 Å for b for carbon - carbon bonds; with that value the relation can equivalently be written d_1 − d_ij = 0.353 ln (s_ij).
The value of the constant b depends on the atoms.
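To make the relation concrete, here is a minimal sketch. The bond lengths used (about 1.54 Å for a carbon - carbon single bond and 1.39 Å for the carbon - carbon bonds in benzene) are typical literature values assumed for the example; they do not appear in this article.

```python
import math

# Pauling's empirical relation: s_ij = exp((d_1 - d_ij) / b),
# with b = 0.353 angstroms for carbon - carbon bonds as stated above.
def pauling_bond_order(d_single: float, d_measured: float, b: float = 0.353) -> float:
    """Estimate a bond order from a measured bond length (lengths in angstroms)."""
    return math.exp((d_single - d_measured) / b)

# Assumed literature values: C-C single bond ~1.54 A, benzene C-C ~1.39 A.
# The result is roughly 1.5, consistent with the benzene bond order discussed earlier.
print(round(pauling_bond_order(1.54, 1.39), 2))  # ~1.53
```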
The above definition of bond order is somewhat ad hoc and only easy to apply for diatomic molecules. A standard quantum mechanical definition for bond order has been debated for a long time. A comprehensive method to compute bond orders from quantum chemistry calculations was recently published.
|
who is the singer of tadap tadap ke | KK (singer) - Wikipedia
Krishnakumar Kunnath (born 23 August 1968), popularly known as KK, K.K. or Kay Kay (Hindi: केके), is an Indian singer who sings for Hindi, Telugu, Tamil, Kannada and Malayalam language films. KK is noted for his broad vocal range and is considered one of the most versatile singers in India.
Born in Delhi to Hindu Malayali parents C.S. Nair and Kunnath Kanakavalli, Krishnakumar Kunnath was brought up in New Delhi. KK sang 3,500 jingles before breaking into Bollywood. He is an alumnus of Delhi 's Mount St Mary 's School and graduated from Kirori Mal College, Delhi University. He sang in the "Josh of India '' song for the support of Indian Cricket Team during Cricket World Cup of 1999.
KK married his childhood sweetheart Jyothy in 1991. His son Nakul Krishna Kunnath sang a song "Masti '' from his latest album Humsafar with him. KK also has a daughter named Tamara Kunnath who, according to KK, loves playing piano. KK says that his family is his source of energy.
KK has been greatly influenced by the singer Kishore Kumar and music director R.D. Burman. Michael Jackson, Billy Joel, Bryan Adams are also favourite international singers of KK. KK has said that it is not important for a singer 's face to be prominently seen -- saying he believes the important thing is that "a singer must be heard. '' KK has never undergone any formal training in music.
After graduating in commerce from Kirori Mal College, Delhi University, KK had a brief stint of eight months as a marketing executive in the hotel industry. A few years later, in 1994, he moved to Mumbai.
In 1994, he gave his demo tape to Louis Banks, Ranjit Barot and Lesle Lewis to get a break in the music arena. He was called by UTV and sang a jingle for a Santogen Suiting ad. Over a span of four years, he sang more than 3,500 jingles in 11 Indian languages. He got his first break in Mumbai from UTV, singing jingles, and he considers Lesle Lewis his mentor for giving him his first jingle in Mumbai. KK was introduced as a playback singer with A.R. Rahman 's hit songs "College Styley '' and "Hello Dr. '' from Kadir 's Kadhal Desam, and then "Strawberry Kannae '' from AVM Productions 's musical film Minsara Kanavu (1997). Pritam Chakraborty is also a fan of his. He got his Bollywood break with "Tadap Tadap '' from Hum Dil De Chuke Sanam (1999). He also won a National Award for "Tu Aashiqui Hai '' from the movie Jhankar Beats in 2004. However, prior to this song he had sung a small portion of the song "Chhod Aaye Hum '' from Gulzar 's Maachis. To date, KK has sung more than 500 songs in Hindi and more than 200 songs in Telugu, Bengali, Tamil, Kannada and Malayalam. He has worked with almost every music director active in the Hindi film industry since 1999. He lent his voice to the biggest hit song of 2014, "Tune Maari Entriyaan '', among several others in Gunday. KK 's song "Tu Jo Mila '' from the movie Bajrangi Bhaijaan became popular and reached No. 1 on iTunes several times. Popular singer Arijit Singh has said he was inspired by KK in his singing and is a big fan of KK. Another popular Bollywood singer, Ankit Tiwari, has also said he was greatly influenced by KK and regards KK as his idol and inspiration. Other singers such as Javed Ali, Dev Negi, Armaan Malik, Mohammed Irfan, Anupam Roy, and the composer duo Sachin - Jigar have also said they are big fans of KK.
In 1999, Sony Music had just been launched in India and was looking to launch a new artiste. KK was selected and came out with a solo album titled Pal, with Lesle Lewis composing the music. The album was arranged, composed and produced by Lesle Lewis of Colonial Cousins, and the lyrics were penned by Mehboob. The songs "Aap Ki Dua '', "Yaaron '' and the title track "Pal '' quickly became favourites of young listeners and climbed the music charts. "Pal '' became an anthem and "Yaaron '' a friendship anthem, and the album made history. Pal was the first album released by KK under Sony Music, and it earned him the prestigious Screen award for best singer.
On 22 January 2008, KK released his second album Humsafar after a gap of eight years. The songs "Aasman Ke '', "Dekho Na '', "Yeh Kahan Mil Gaye Hum '' and "Rain Bhai Kaari (Maajhi) '' are famous songs from this album. Besides, KK had also sung an English Rock Ballad "Cineraria ''. The title track, "Humsafar '' is a mix of English and Hindi lyrics. The album Humsafar has 10 songs, out of which eight have been composed by KK. The other two songs were taken from his previous album Pal.
KK has also sung many television serial songs like Just Mohabbat, Shaka Laka Boom Boom, Kuch Jhuki Si Palkein, Hip Hip Hurray, kavyanjali, Just Dance. He has also sung the theme song for Star Parivaar Awards 2010 with Shreya Ghoshal. KK appeared on television too. He was invited as jury member for a talent hunt show Fame Gurukul.
KK also has sung a song named "Tanha Chala '' for the Pakistani TV show The Ghost which was aired on Hum TV in 2008. The song was composed by Farrukh Abid and Shoiab Farrukh, and Momina Duraid penned the lyrics.
KK participated in MTV India 's musical venture Coke Studio. There he sang the qawwali "Chadta Suraj '' along with the Sabri Brothers and a recomposed version of his track "Tu Aashiqui Hai '' from the movie Jhankaar Beats. He also appeared on ' Surili Baat ' on the Aaj Tak channel. He has also performed on a Sony Mix TV show and on MTV Unplugged Season 3, aired on MTV on 11 January 2014. KK was in Dubai for his concert ' Salaam Dubai 2014 ' in April. He also performed concerts in Goa, Dubai, Chennai and Hong Kong.
On 29 August 2015, KK appeared in the television singing reality show Indian Idol Junior Season 2 to encourage the emerging singers of India, where he performed "Khuda Jaane '', "Mera Pehla Pehla Pyaar '', "Make Some Noise For The Desi Boyz '', "Ajab Si '', "Sach Kehraha Hai Deewana '' and many more songs with the junior idols, as well as "Tu Aashiqui Hai '' with Vishal Dadlani, "Aashaayen '' with Salim Merchant and "Tadap Tadap '' with Sonakshi Sinha. It was the first time in 10 years that he had appeared on a singing reality show as a judge and guest jury member.
On 13 September 2015, KK appeared in "Baaton Baaton Mein '' on Sony Mix.
In 2013, KK sang for an international album, Rise Up -- Colors of Peace, which consists of songs written by Turkish poet Fetullah Gulen and sung by artists from 12 countries. KK recorded a song named "Rose of My Heart '' for the album.
In 2010, the Indi Band "Bandish '' produced their second self - titled album. The album has a high energy track in the album Tere Bin -- a rock ballad sung by KK.
His debut album Pal was released in April 1999. His performance on the album received a Star Screen Award from Screen India for Best Male Singer.
His second album, Humsafar, was released nine years later, in January 2008. It belongs to the pop - rock genre and has a total of 10 songs. Humsafar has two songs that are repeated from his earlier album Pal -- "Din Ho Ya Raat '' and "Mehki Hawa ''. The video for a romantic song on the album, "Aasman Ke '', features the singer and south Indian model Suhasi Goradia Dhami. "Humsafar '', the title track, has an interplay of English and Hindi lyrics and is about one 's conscience and how it is a constant companion in the journey of life. One of the songs, "Yeh Kahan Mil Gaye Hum '', was penned seven years before the release. "Dekho Na '', a rock number, was written six years prior. The remaining six songs were developed in the last two years before the release. Other songs in the album include "Rain Bhai Kaari (Maajhi) '', a mix of Bengali Baul with rock with a tinge of SD Burman, and "Cineraria '', a fun - filled English ballad. KK himself wrote the lyrics of "Cineraria '' and the English part of "Humsafar '', the title track, while the remaining tracks were written by others.
|
miles davis quintet live in europe 1967 review | Live in Europe 1967: The Bootleg Series Vol. 1 - wikipedia
Live in Europe 1967: The Bootleg Series Vol. 1 is a 3 CD + 1 DVD live album of Miles Davis and his "second great quintet '' (saxophonist Wayne Shorter, pianist Herbie Hancock, bassist Ron Carter, and drummer Tony Williams). The CDs contain recordings of three separate concerts in Europe (Antwerp, Copenhagen and Paris), and the DVD has two additional concerts from Karlsruhe and Stockholm.
The first disc was recorded at the Konigin Elizabethzaal in Antwerp, Belgium, on 28 October 1967. The second disc contains the concert from 2 November 1967 at the Tivoli Gardens in Copenhagen, Denmark plus the beginning of 6 November 1967 at the Salle Pleyel in Paris, France. The third disc is the majority of the Paris set. The DVD contains concerts recorded in West Germany on 7 November 1967 and Sweden on 31 October 1967.
Live in Europe 1967: The Bootleg Series Vol. 1 received positive reviews on release. At Metacritic, which assigns a normalised rating out of 100 to reviews from mainstream critics, the album has received a score of 99, based on 7 reviews, which is categorised as universal acclaim. Nate Chinen of The New York Times said the album "captures Davis 's finest working band at its apogee, straining at the limits of post-bop refinement. '' Thom Jurek 's review on Allmusic stated "Musically, the quintet -- Davis, Herbie Hancock, Wayne Shorter, Ron Carter, and Tony Williams -- are firing on all cylinders throughout ''. In his review for Pitchfork, Hank Shteamer observed "it is n't just the best band Miles ever led, but one of the choicest small groups in jazz history... At its heart, jazz thrives on bold, sensitive interaction in the moment, and Live in Europe 1967 represents the pinnacle of that practice. '' PopMatters ' Matthew Flander gave the album 10 out of 10, saying "no matter how many releases we get from the Davis archives, no matter how familiar you are with his mid - ' 60s work, Live in Europe 1967 will surprise you and remind you that, even in lean times, even when the trends of the genre he championed were moving away from him, even when his country stopped caring, Miles Davis found a way to press forward, to reinvent, and to give us yet another classic sound, and perhaps the final thrilling word on Jazz as he knew it ''.
|
who led carthage in the third punic war | Third Punic War - wikipedia
The Third Punic War (Latin: Tertium Bellum Punicum) (149 -- 146 BC) was the third and last of the Punic Wars fought between the former Phoenician colony of Carthage and the Roman Republic. The Punic Wars were named because of the Roman name for Carthaginians: Punici, or Poenici.
This war was a much smaller engagement than the two previous Punic Wars and focused on Tunisia, mainly on the Siege of Carthage, which resulted in the complete destruction of the city, the annexation of all remaining Carthaginian territory by Rome, and the death or enslavement of the entire Carthaginian population. The Third Punic War ended Carthage 's independent existence.
In the years between the Second and Third Punic War, Rome was engaged in the conquest of the Hellenistic empires to the east (see Macedonian Wars, Illyrian Wars, and the Roman - Syrian War) and ruthlessly suppressing the Hispanian peoples in the west, although they had been essential to the Roman success in the Second Punic War. Carthage, stripped of allies and territory (Sicily, Sardinia, Hispania), was suffering under a huge indemnity of 200 silver talents to be paid every year for 50 years.
According to Appian the senator Cato the Elder usually finished his speeches on any subject in the Senate with the phrase ceterum censeo Carthaginem esse delendam, which means "Moreover, I am of the opinion that Carthage ought to be destroyed ''. Cicero put a similar statement in the mouth of Cato in his dialogue De Senectute. He was opposed by the senator Publius Cornelius Scipio Nasica Corculum, who favoured a different course that would not destroy Carthage, and who usually prevailed in the debates.
The peace treaty at the end of the Second Punic War required that all border disputes involving Carthage be arbitrated by the Roman Senate and required Carthage to get explicit Roman approval before going to war. As a result, in the 50 intervening years between the Second and Third Punic War, Carthage had to take all border disputes with Rome 's ally Numidia to the Roman Senate, where they were decided almost exclusively in Numidian favour.
In 151 BC, the Carthaginian debt to Rome was fully repaid, meaning that, in Punic eyes, the treaty was now expired, though not so according to the Romans, who instead viewed the treaty as a permanent declaration of Carthaginian subordination to Rome akin to the Roman treaties with its Italian allies. Moreover, the retirement of the indemnity removed one of the main incentives the Romans had to keep the peace with Carthage -- there were no further payments that might be interrupted.
The Romans had other reasons to conquer Carthage and her remaining territories. By the middle of the 2nd century BC, the population of the city of Rome was about 400,000 and rising. Feeding the growing populace was becoming a major challenge. The farmlands surrounding Carthage represented the most productive, most accessible and perhaps the most easily obtainable agricultural lands not yet under Roman control.
In 151 BC Numidia launched another border raid on Carthaginian soil, besieging the Punic town of Oroscopa, and Carthage launched a large military expedition (25,000 soldiers) to repel the Numidian invaders. As a result, Carthage suffered a military defeat and was charged with another fifty year debt to Numidia. Immediately thereafter, however, Rome showed displeasure with Carthage 's decision to wage war against its neighbour without Roman consent, and told Carthage that in order to avoid a war it had to "satisfy the Roman People. ''
In 149 BC, Rome declared war against Carthage. The Carthaginians made a series of attempts to appease Rome, and received a promise that if three hundred children of well - born Carthaginians were sent as hostages to Rome the Carthaginians would keep the rights to their land and self - government. Even after this was done the allied Punic city of Utica defected to Rome, and a Roman army of 80,000 men gathered there. The consuls then demanded that Carthage hand over all weapons and armor. After those had been handed over, Rome additionally demanded that the Carthaginians move at least 16 kilometres inland, while the city was to be burned. When the Carthaginians learned of this, they abandoned negotiations and the city was immediately besieged, beginning the Third Punic War.
After the main Roman expedition landed at Utica, the consuls Manius Manilius and Lucius Marcius Censorinus launched a two - pronged attack on Carthage, but were eventually repulsed by the army of the Carthaginian generals Hasdrubal the Boeotarch and Himilco Phameas. Censorinus lost more than 500 men when they were surprised by the Carthaginian cavalry while collecting timber around the Lake of Tunis. A worse disaster fell upon the Romans when their fleet was set ablaze by fire ships which the Carthaginians released upwind. Manilius was replaced by the consul Calpurnius Piso Caesoninus in 148 after a severe defeat of the Roman army at Nepheris, a Carthaginian stronghold south of the city. Scipio Aemilianus 's intervention saved four cohorts trapped in a ravine. Nepheris eventually fell to Scipio in the winter of 147 - 146. In the autumn of 148, Piso was beaten back while attempting to storm the city of Aspis, near Cape Bon. Undeterred, he laid siege to the town of Hippagreta in the north, but his army was unable to defeat the Punics there before winter and had to retreat. When news of these setbacks reached Rome, he was replaced as consul by Scipio Aemilianus.
The Carthaginians endured the siege from 149 BC until the spring of 146 BC, when Scipio Aemilianus successfully assaulted the city. Though the Punic citizens offered a strong resistance, they were gradually pushed back by the overwhelming Roman military force and destroyed.
Many Carthaginians died from starvation during the later part of the siege, while many others died in the final six days of fighting. When the war ended, the remaining 50,000 Carthaginians, a small part of the original pre-war population, were sold into slavery by the victors. Carthage was systematically burned for 17 days; the city 's walls and buildings were utterly destroyed. The remaining Carthaginian territories were annexed by Rome and reconstituted to become the Roman province of Africa.
The notion that Roman forces then sowed the city with salt to ensure that nothing would grow there again is almost certainly a 19th - century invention. Contemporary accounts show that the land surrounding Carthage was declared ager publicus and that it was shared between local farmers, and Roman and Italian ones. North Africa soon became a vital source of grain for the Romans. Roman Carthage was the main hub transporting these supplies to the capital.
Numerous significant Punic cities, such as those in Mauretania, were taken over and rebuilt by the Romans. Examples of these rebuilt cities are Volubilis, Chellah and Mogador. Volubilis, for example, was an important Roman town situated near the westernmost border of Roman conquests. It was built on the site of the previous Punic settlement, but that settlement overlies an earlier neolithic habitation. Utica, the Punic city which changed loyalties at the beginning of the siege, became the capital of the Roman province of Africa.
A century later, the site of Carthage was rebuilt as a Roman city by Julius Caesar, and would later become one of the main cities of Roman Africa by the time of the Empire.
Ruins of Carthage
|
tinea pedis is an example of which type of microorganism | Tinea - wikipedia
Tinea, or ringworm, is any of a variety of skin mycoses. It is a very common fungal infection of the skin, often called "ringworm '' because the rash is circular, with a ring - like appearance.
It is sometimes equated with dermatophytosis, and, while most conditions identified as "tinea '' are members of the imperfect fungi that make up the dermatophytes, conditions such as tinea nigra and tinea versicolor are not caused by dermatophytes.
Athlete 's foot (also known as "ringworm of the foot '', tinea pedum, and "moccasin foot '') is a common and contagious skin disease that causes itching, scaling, flaking, and sometimes blistering of the affected areas. Its medical name is tinea pedis, a member of the group of diseases or conditions known as tinea, most of which are dermatophytoses (fungal infections of the skin), which in turn are mycoses (broad category of fungal infections). Globally, athlete 's foot affects about 15 % of the population.
Tinea pedis is caused by fungi such as Epidermophyton floccosum or fungi of the Trichophyton genus including T. rubrum and T. mentagrophytes. These fungi are typically transmitted in moist communal areas where people go barefoot, such as around swimming pools or in showers, and require a warm moist environment like the inside of a shoe to incubate. Fungal infection of the foot may be acquired (or reacquired) in many ways, such as by walking in an infected locker room, by using an infested bathtub, by sharing a towel used by someone with the disease, by touching the feet with infected fingers (such as after scratching another infected area of the body), or by wearing fungi - contaminated socks or shoes.
Infection can often be prevented by keeping the feet dry, for example by limiting the use of footwear that encloses the feet or by remaining barefoot.
The fungi may infect or spread to other areas of the body (such as by scratching one 's feet and then touching one 's groin). For each location on the body, the name of the condition changes. A fungal infection of the groin is called Tinea cruris, or commonly "jock itch ''. The fungi tend to spread to areas of skin that are kept warm and moist, such as with insulation (clothes), body heat, and sweat.
However, the spread of the infection is not limited to skin. Toe nails become infected with fungi in the same way as the rest of the foot, typically by being trapped with fungi in the warm, dark, moist inside of a shoe. Fungal infection of the nails is called tinea unguium, and is not included in the medical definition of "athlete 's foot '', even though toe nails are part of the foot. Fungi are more difficult to kill inside and underneath a nail than on and in the skin. But if the nail infection is not cured, then the fungi can readily spread back to the rest of the foot. The fungi can also spread to hair, grow inside hair strands, and feed on the keratin within hair, including the hair on the feet, the hair of one 's beard, and the hair on one 's head. From hair, the fungi can spread back to skin.
To effectively treat athlete 's foot, it is necessary to treat the entire infection, wherever it is on the body, until the fungi are dead and the skin has fully healed. There is a wide array of over the counter and prescription topical medications in the form of liquids, sprays, powders, ointments, and creams for killing fungi that have infected the feet or the body in general. For persistent conditions, oral medications are available by prescription.
Onychomycosis (also known as "dermatophytic onychomycosis '' or "tinea unguium '') is a fungal infection of the nail. It is the most common disease of the nails and constitutes about half of all nail abnormalities.
This condition may affect toenails or fingernails, but toenail infections are particularly common. It occurs in about 10 % of the adult population.
Tinea manuum (or tinea manus) is a fungal infection of the hand. It is typically more aggressive than tinea pedis but similar in appearance. Itching, burning, cracking, and scaling are observable. The infection may be transmitted sexually or otherwise, whether or not symptoms are present.
Tinea cruris, also known as "crotch itch '', "crotch rot '', "Dhobie itch '', "eczema marginatum '', "gym itch '', "jock itch '', "jock rot '', "scrot rot '' and "ringworm of the groin '' is a dermatophyte fungal infection of the groin region in any sex, though more often seen in males. In the German sprachraum this condition is called tinea inguinalis (from Latin inguen = groin) whereas tinea cruris is used for a dermatophytosis of the lower leg (Latin crus).
Tinea cruris is similar to, but different from, candidal intertrigo, which is an infection of the skin by Candida albicans. Candidal intertrigo is more specifically located between intertriginous folds of adjacent skin, which can be present in the groin or scrotum, and can be indistinguishable from fungal infections caused by tinea. However, candidal infections tend to both appear and disappear with treatment more quickly. It may also affect the scrotum.
Tinea corporis (also known as "ringworm '', tinea circinata, and tinea glabrosa) is a superficial fungal infection (dermatophytosis) of the arms and legs, especially on glabrous skin; however, it may occur on any part of the body. It presents as an annular, marginated plaque with thin scale and a clear center. Common organisms are Trichophyton mentagrophytes and Microsporum canis. Treatments include griseofulvin, itraconazole, and clotrimazole cream.
Tinea capitis (also known as "Herpes tonsurans '', "Ringworm of the hair, '' "Ringworm of the scalp, '' "Scalp ringworm '', and "Tinea tonsurans '') is a superficial fungal infection (dermatophytosis) of the scalp. The disease is primarily caused by dermatophytes in the Trichophyton and Microsporum genera that invade the hair shaft. The clinical presentation is typically single or multiple patches of hair loss, sometimes with a ' black dot ' pattern (often with broken - off hairs), that may be accompanied by inflammation, scaling, pustules, and itching. Uncommon in adults, tinea capitis is predominantly seen in pre-pubertal children, more often boys than girls.
At least eight species of dermatophytes are associated with tinea capitis. Cases of Trichophyton infection predominate from Central America to the United States and in parts of Western Europe. Infections from Microsporum species are mainly in South America, Southern and Central Europe, Africa and the Middle East. The disease is infectious and can be transmitted by humans, animals, or objects that harbor the fungus. The fungus can also exist in a carrier state on the scalp, without clinical symptomatology. Treatment of tinea capitis requires an oral antifungal agent; griseofulvin is the most commonly used drug, but other newer antimycotic drugs, such as terbinafine, itraconazole, and fluconazole, have started to gain acceptance. Topical treatments include selenium sulfide shampoo.
Tinea faciei is a fungal infection of the face.
It generally appears as a red rash on the face, followed by patches of small, raised bumps. The skin may peel while it is being treated.
Tinea faciei is contagious just by touch and can spread easily to all regions of skin.
Tinea barbæ (also known as "Barber 's itch, '' "Ringworm of the beard, '' and "Tinea sycosis '') is a fungal infection of the hair. Tinea barbae is due to a dermatophytic infection around the bearded area of men. Generally, the infection occurs as a follicular inflammation, or as a cutaneous granulomatous lesion, i.e. a chronic inflammatory reaction. It is one of the causes of folliculitis. It is most common among agricultural workers, as the transmission is more common from animal - to - human than human - to - human. The most common causes are Trichophyton mentagrophytes and T. verrucosum.
Tinea imbricata (also known as "Tokelau '') is a superficial fungal infection of the skin limited to southwest Polynesia, Melanesia, Southeast Asia, India, and Central America.
It is associated with Trichophyton concentricum.
Tinea nigra (also known as "superficial phaeohyphomycosis, '' and "Tinea nigra palmaris et plantaris '') is a superficial fungal infection that causes dark brown to black painless patches on the palms of the hands and the soles of the feet.
Tinea versicolor (also known as dermatomycosis furfuracea, pityriasis versicolor, and tinea flava) is a condition characterized by a skin eruption on the trunk and proximal extremities, with hypopigmented macules in areas of sun - induced pigmentation. During the winter the pigment becomes reddish brown. Recent research has shown that the majority of tinea versicolor is caused by the Malassezia globosa fungus, although Malassezia furfur is responsible for a small number of cases. These yeasts are normally found on the human skin and only become troublesome under certain circumstances, such as a warm and humid environment, although the exact conditions that cause initiation of the disease process are poorly understood. Treatments include griseofulvin, topical selenium sulfide shampoo, and topical ketoconazole.
The condition pityriasis versicolor was first identified in 1846. Versicolor comes from the Latin, from versāre to turn + color.
Tinea incognito is a fungal infection (mycosis) of the skin caused by the presence of a topical immunosuppressive agent. The usual agent is a topical corticosteroid (topical steroid). As the skin fungal infection has lost some of the characteristic features due to suppression of inflammation, it may have a poorly defined border, skin atrophy, telangiectasia, and florid growth. Occasionally, secondary infection with bacteria occurs with concurrent pustules and impetigo.
|
who plays the ice queen in sharkboy and lavagirl | The Adventures of Sharkboy and Lavagirl in 3 - D - wikipedia
The Adventures of Sharkboy and Lavagirl (also known simply as Sharkboy and Lavagirl) is a 2005 American adventure film written and directed by Robert Rodriguez and originally released in the United States on June 10, 2005 by Miramax Films, Columbia Pictures and Dimension Films. The film uses the anaglyph 3 - D technology, similar to the one used in Spy Kids 3 - D: Game Over (2003). The film stars Taylor Lautner, Taylor Dooley, Cayden Boyd, David Arquette, Kristin Davis and George Lopez. Many of the concepts and much of the story were conceived by Rodriguez 's children. The special effects were done by Hybride Technologies, CafeFX, The Orphanage, Post Logic, Hydraulx, Industrial Light & Magic, R! ot Pictures, Tippett Studio, Amalgamated Pixels, Intelligent Creatures and Troublemaker Digital. The film received negative reviews from critics, with much of the criticism directed at the decision to post-convert the film into 3 - D, which damaged the film 's visual look; it earned $69.4 million on a $50 million budget.
Max is a lonely child in the suburbs of Austin who creates an imaginary dreamworld named Planet Drool, where all of his dreams come to life. He creates two characters: Sharkboy, who was raised by sharks after losing his father at sea, and Lavagirl, who can produce fire and lava, but has trouble touching objects without setting them alight. The two left Max to guard Planet Drool. In reality, Max 's parents have little time for him, and their marital relationship is not going well. Max is also bullied by fellow schoolmate Linus. However, he does receive friendship from Marissa, the daughter of his teacher Mr. Electricidad, whose name is Spanish for "electricity ''. After a chase, Linus steals Max 's dream journal (where all of his most precious dreams are kept) and vandalizes it. The next day, as Max attempts to retaliate, twin tornadoes form outside the school. Sharkboy and Lavagirl emerge from the tornadoes and have Max accompany them to Planet Drool, which he learns is turning bad because of Mr. Electric, the dreamworld 's now - corrupt electrician.
They confront Mr. Electric, who drops them in the Dream Graveyard, where some of Max 's dreams have been dumped. They find Tobor, a robot toy that Max never finished building in the real world after being discouraged by his father. Tobor gives them a lift to other parts of the planet. The three form a friendship during their journey, but they face hardships, such as Sharkboy 's anger for the oceans being frozen over, and Lavagirl 's desperation to find her true purpose on Planet Drool. They are pursued by Mr. Electric and his "plughounds '' across the planet. They plan to visit the Ice Princess and obtain the Crystal Heart, which can freeze time, giving them enough time to get to the center of Planet Drool and fix the dreamworld using Max 's daydreaming. However, they are captured by Mr. Electric, and delivered to Linus 's Planet Drool incarnation Minus, who has altered the dreamworld with Max 's own dream journal, and traps the three in a cage. Sharkboy gets annoyed by Minus and has a shark frenzy, destroying the cage. After they escape, Max retrieves the dream journal from Minus while he is sleeping. Max informs Sharkboy that his father is alive in his book, but when Lavagirl wishes to learn what it says about her true identity, she burns the book to ash. In rage, Lavagirl asks Max why she was made out of lava, but Sharkboy tells him to let her cool down.
After an encounter with the Ice Guardian, Max, Sharkboy, and Lavagirl reach the Ice Princess, the Planet Drool incarnation of Marissa Electricidad. She hands over the Crystal Heart, but they are too late to stop the corruption, since the Ice Princess is the only one who can use the Crystal Heart 's power and she can not leave her home. Mr. Electric fools Sharkboy into jumping into water filled with electric eels, seemingly killing him. Lavagirl also dies after jumping into the water to retrieve Sharkboy. Tobor appears and convinces Max to dream a better and unselfish dream, which in turn revives Sharkboy, who then races Lavagirl to a volcano to revive her. Max concludes that her purpose is as a light against the dark clouds which have engulfed Planet Drool 's skies. Max gains reality - warping powers as the Daydreamer and defeats Minus, then offers to make a better dreamworld between the two of them, to which Minus agrees.
Mr. Electric refuses to accept the new dreamworld, and flies to Earth to kill Max while he is dreaming. Max awakens back in his classroom during the tornado storm. Mr. Electric materializes, and Max 's parents get sucked into the storm, but are saved by Sharkboy and Lavagirl. Max gives the Crystal Heart to Marissa so she can use the Ice Princess 's powers to destroy Mr. Electric. Mr. Electricidad, Linus and Max make peace with one another, and Max reunites with his parents.
Max later informs his class that Planet Drool became a proper dreamworld again, Sharkboy became the King of the Ocean, and Lavagirl became Queen of the Volcanoes, and as the film shows Max finally finishing Tobor, he reminds the class to "dream a better dream, and work to make it real ''.
Robert Rodriguez has an uncredited role voicing a shark. As seen in the credits, two of Robert Rodriguez 's children, Rebel and Racer, portray Sharkboy at age five and age seven respectively. Rico Torres plays Sharkboy 's father. Marc Musso and Shane Graham play children at Max 's school.
Parts of the film were shot on location in Texas, where Max resides and goes to school in the film. Much of the film was shot in a studio against green screen. Most of the ships, landscapes and other effects, including some creatures and characters, were accomplished digitally. According to Lautner and Dooley, when filming the scene with the dream train, the front part of the train was an actual physical set piece. "The whole inside was there and when they have all the gadgets you can pull on, that was all there but everything else was a green screen, '' said Dooley. Eleven visual effects companies (Hybride Technologies, CafeFX, The Orphanage, Post Logic, Hydraulx, Industrial Light & Magic, R! ot Pictures, Tippett Studio, Amalgamated Pixels, Intelligent Creatures, and Rodriguez 's Texas - based Troublemaker Digital) worked on the film in order to accomplish over 1,000 visual effect shots.
Robert Rodriguez appears in the credits fourteen times, most notably as director, a producer, a screenwriter (along with Marcel Rodriguez), visual effects supervisor, director of photography, editor, a camera operator, and a composer and performer. The story is credited to Racer Max Rodriguez, with additional story elements by Rebecca Rodriguez, who also wrote the lyrics for the main song, "Sharkboy and Lavagirl ''. Other members of the Rodriguez family can be seen in the film or were involved in the production.
Miley Cyrus had auditioned for the film with Lautner, and said it came down to her and another girl who was also auditioning; however, Cyrus began production on Hannah Montana.
The Adventures of Sharkboy and Lavagirl received a 20 % rating on Rotten Tomatoes and the consensus: "The decision to turn this kiddie fantasy into a 3 - D film was a miscalculation. '' Roger Ebert gave the film 2 out of 4 stars and agreed with the other criticisms in which the 3 - D process used was distracting and muted the colors, thus, he believes, "spoiling '' much of the film and that the film would look more visually appealing when released in the home media market.
For its opening weekend, the film earned $12.6 million in 2,655 theaters. It also placed # 5 at the box office, being overshadowed by Mr. & Mrs. Smith, Madagascar, Star Wars: Episode III -- Revenge of the Sith, and The Longest Yard. The film was not very successful in the US, where it took in $39,177,541, and was a box office bomb. However, it did manage to gross $30,248,282 overseas, for a total of $69,425,966 worldwide.
The Total Nonstop Action professional wrestler Dean Roll, who trademarked the name "Shark Boy '' in 1999, sued Miramax on June 8, 2005, claiming that his trademark had been infringed and demanding "(any) money, profits and advantages wrongfully gained ''. In April 2007, the suit was settled for a disclosed amount of $200,000.
Director Robert Rodriguez composed parts of the score himself, with contributions by composers John Debney and Graeme Revell. Green Day were reportedly set to contribute "Wake Me Up When September Ends '' to the soundtrack, but Robert Rodriguez declined it.
Around the time of the film 's debut Rodriguez co-wrote a series of children 's novels entitled Sharkboy and Lavagirl Adventures with acclaimed science fiction writer Chris Roberson. They include Book 1, The Day Dreamer, and Book 2, Return to Planet Drool, which announces that it will be continued in a third volume, Deep Sleep, which has yet to appear. They are illustrated throughout by Alex Toader, who designed characters and environments for the film and the previous Spy Kids franchise.
In the first book, the story of the film is told from Lavagirl 's and Sharkboy 's perspective, with at least one new event. In Return to Planet Drool, Sharkboy, remembering his encounter with the Imagineer in the first book, continues the search for his father by seeking to return to the Dream World. He meets a very bored Lavagirl in the underwater city of Vent, where she now reigns as queen, and together they embark on a subterranean journey. They encounter piranhas, a gargantuan red bear, and a city inhabited by the dreams of bygone eras, where they are held captive by superheroes, pirates, and cowboys. By the end, after learning the city 's secrets, Sharkboy still hopes to find his father, and Lavagirl the secrets of her origin.
Jeff Jensen of Entertainment Weekly praised another book appearing around the time of the film, The Adventures of SharkBoy and LavaGirl: The Movie Storybook (by Racer Max Rodriguez and Robert Rodriguez), as a far cry from the usual movie storybook tie - in, and also praised Alex Toader 's "cartoony yet detailed '' illustrations.
|
who put an end to the reign of terror | Reign of Terror - Wikipedia
The Reign of Terror or The Terror (French: la Terreur) is the label given by some historians to a period during the French Revolution after the First French Republic was established.
Several historians consider the "reign of terror '' to have begun in 1793, placing the starting date at either 5 September, June or March (birth of the Revolutionary Tribunal), while some consider it to have begun in September 1792 (September Massacres), or even July 1789 (when the first beheadings took place), but there is a general consensus that it ended with the fall of Robespierre in July 1794.
Between June 1793 and the end of July 1794, there were 16,594 official death sentences in France, of which 2,639 were in Paris. However, the total number of deaths in France was much higher, owing to death in imprisonment, suicide and casualties in foreign and civil war.
There was a sense of emergency among leading politicians in France in the summer of 1793, in the face of widespread civil war and counter-revolution. Bertrand Barère exclaimed on 5 September 1793 in the Convention: "Let 's make terror the order of the day! '' They were determined to avoid street violence such as the September Massacres of 1792 by taking violence into their own hands as an instrument of government.
Robespierre in February 1794 in a speech explained the necessity of terror:
If the basis of popular government in peacetime is virtue, the basis of popular government during a revolution is both virtue and terror; virtue, without which terror is baneful; terror, without which virtue is powerless. Terror is nothing more than speedy, severe and inflexible justice; it is thus an emanation of virtue; it is less a principle in itself, than a consequence of the general principle of democracy, applied to the most pressing needs of the patrie (homeland, fatherland).
Some historians argue that such terror was a necessary reaction to the circumstances. Others suggest there were also other causes, including ideological and emotional.
On 10 March 1793 the National Convention created the Revolutionary Tribunal. Among those charged by the tribunal, about a half were acquitted (though the number dropped to about a quarter after the enactment of the Law of 22 Prairial). In March rebellion broke out in the Vendée in response to mass conscription, which developed into a civil war that lasted until after the Terror.
On 6 April the Committee of Public Safety was created, which gradually became the de facto war - time government.
On 2 June, the Parisian sans - culottes surrounded the National Convention, calling for administrative and political purges, a low fixed price for bread, and a limitation of the electoral franchise to sans - culottes alone. With the backing of the national guard, they persuaded the convention to arrest 29 Girondist leaders. In reaction to the imprisonment of the Girondin deputies, some thirteen departments started the Federalist revolt against the National Convention in Paris, which was ultimately crushed. On 24 June, the convention adopted the first republican constitution of France, the French Constitution of 1793. It was ratified by public referendum, but never put into force.
On 13 July the assassination of Jean - Paul Marat -- a Jacobin leader and journalist -- resulted in a further increase in Jacobin political influence. Georges Danton, the leader of the August 1792 uprising against the king, was removed from the committee. On 27 July 1793, Robespierre became part of the Committee of Public Safety.
On 23 August, the National Convention decreed the Levée en masse, "The young men shall fight; the married man shall forge arms and transport provisions; the women shall make tents and clothes and shall serve in the hospitals; the children shall turn all lint into linen; the old men shall betake themselves to the public square in order to arouse the courage of the warriors and preach hatred of kings and the unity of the Republic. ''
On 9 September, the convention established paramilitary forces, the "revolutionary armies '', to force farmers to surrender grain demanded by the government. On 17 September, the Law of Suspects was passed, which authorized the imprisonment of vaguely defined "suspects ''. This created a mass overflow in the prison systems. On 29 September, the convention extended price fixing from grain and bread to other essential goods, and also fixed wages.
On 10 October, the Convention decreed that "the provisional government shall be revolutionary until peace. '' On 24 October, the French Republican Calendar was enacted. The trial of the Girondins started on the same day and they were executed on 31 October.
Anti-clerical sentiments increased during 1793 and a campaign of dechristianization occurred. On 10 November (20 Brumaire Year II of the French Republican Calendar), the Hébertists organized a Festival of Reason.
On 14 Frimaire (5 December 1793), the Law of Frimaire was passed, which gave the central government more control over the actions of the representatives on mission.
On 16 Pluviôse (4 February 1794), the National Convention decreed that slavery be abolished in all of France and French colonies.
On 8 and 13 Ventôse (26 February and 3 March), Saint - Just proposed decrees to confiscate the property of exiles and opponents of the revolution, known as the Ventôse Decrees.
By the end of 1793, two major factions had emerged, both threatening the Revolutionary Government: the Hébertists, who called for an intensification of the Terror and threatened insurrection, and the Dantonists, led by Georges Danton, who demanded moderation and clemency. The Committee of Public Safety took actions against both. The major Hébertists were tried before the Revolutionary Tribunal and executed on 24 March. The Dantonists were arrested on 30 March, tried on 3 to 5 April and executed on 5 April.
On 20 Prairial (8 June) was celebrated across the country the Festival of the Supreme Being, which was part of the Cult of the Supreme Being, a deist national religion. On 22 Prairial (10 June), the National Convention passed a law proposed by Georges Couthon, known as the Law of 22 Prairial, which simplified the judicial process and greatly accelerated the work of the Revolutionary Tribunal. With the enactment of the law, the number of executions greatly increased, and the period from this time to the Thermidorian Reaction became known as "The Grand Terror ''.
On 8 Messidor (26 June), the French army won the Battle of Fleurus, which marked a turning point in France 's military campaign and undermined the necessity of wartime measures and the legitimacy of the Revolutionary Government.
The fall of Robespierre was brought about by a combination of those who wanted more power for the Committee of Public Safety (and a more radical policy than he was willing to allow) and the moderates who completely opposed the revolutionary government. They had, between them, made the Law of 22 Prairial one of the charges against him, so that, after his fall, to advocate terror would be seen as adopting the policy of a convicted enemy of the republic, putting the advocate 's own head at risk. Between his arrest and his execution, Robespierre may have tried to commit suicide by shooting himself, although the bullet wound he sustained, whatever its origin, only shattered his jaw. Alternatively, he may have been shot by the gendarme Merda. The great confusion that arose during the storming of the municipal Hall of Paris, where Robespierre and his friends had found refuge, makes it impossible to be sure of the wound 's origin. In any case, Robespierre was guillotined the next day.
The reign of the standing Committee of Public Safety was ended. New members were appointed the day after Robespierre 's execution, and limits on terms of office were fixed (a quarter of the committee retired every three months). The Committee 's powers were gradually eroded.
|
what instrument does june carter play in walk the line | Autoharp - wikipedia
The Autoharp is a musical instrument in the chorded zither family. It features a series of chord bars attached to dampers, which, when pressed, mute all of the strings other than those that form the desired chord. Although the word autoharp was originally a trademark of the Oscar Schmidt company, the term has colloquially come to be used for any hand - held, chorded zither, regardless of manufacturer.
Debate exists over the origin of the autoharp. A German immigrant in Philadelphia, US, Charles F. Zimmermann, was awarded US 257808 in 1882 for a design for a musical instrument that included mechanisms for muting certain strings during play. He named his invention the "autoharp ''. Unlike later autoharps, the shape of the instrument was symmetrical, and the felt - bearing bars moved horizontally against the strings instead of vertically. It is not known if Zimmermann ever commercially produced any instruments of this early design. Karl August Gütter of Markneukirchen, Germany, built a model that he called a "Volkszither '', which most resembles the autoharp played today. Gütter obtained a British patent for his instrument circa 1883 -- 1884. Zimmermann, after returning from a visit to Germany, began production of the Gütter design in 1885, but with his own design patent number and name. Gütter 's instrument design became very popular, and Zimmermann has often been misnamed as the inventor.
A stylized form of the term autoharp was registered as a trademark in 1926. The word is currently claimed as a trademark by the U.S. Music Corporation, whose Oscar Schmidt division manufactures autoharps. The USPTO registration, however, covers only a "Mark Drawing Code (5) Words, Letters, and / or Numbers in Stylized Form '' and has expired. In litigation with George Orthey, it was held that Oscar Schmidt could only claim ownership of the stylized lettering of the word autoharp, the term itself having moved into general usage.
The autoharp body is made of wood, and has a generally rectangular shape, with one corner cut off. The soundboard generally features a guitar - like sound - hole, and the top may be either solid wood or of laminated construction. A pin - block of multiple laminated layers of wood occupies the top and slanted edges, and serves as a bed for the tuning pins, which resemble those used in pianos and concert zithers.
On the edge opposite the top pin - block is either a series of metal pins, or a grooved metal plate, which accepts the lower ends of the strings. Directly above the strings, on the lower half of the top, are the chord bars, which are made of plastic, wood, or metal, and support felt or foam pads on the side facing the strings. These bars are mounted on springs, and pressed down with one hand, via buttons mounted to their topside. The buttons are labeled with the name of the chord produced when that bar is pressed against the strings, and the strings strummed. The back of the instrument usually has three wooden, plastic, or rubber "feet '', which support the instrument when it is placed backside down on a table top, for playing in the traditional position.
Strings run parallel to the top, between the mounting plate and the tuning pins, and pass under the chord bar assembly. Modern autoharps most often have 36 strings, with some examples having as many as 47 strings, and rare 48 - string models (such as Orthey Autoharps No. 136, tuned to G and D major). They are strung in a semi-chromatic manner which, however, is sometimes modified into either diatonic or fully chromatic scales. Standard models have 12, 15 or 21 chord bars available, providing a selection of major, minor, and dominant seventh chords. These are arranged for historical or systemic reasons. Various special models have also been produced, such as diatonic one -, two -, or three - key models, models with fewer or additional chords, and a reverse - strung model (the 43 - string, 28 - chord Chromaharp Caroler).
The range is determined by the number of strings and their tuning. A typical 36 - string chromatic autoharp in standard tuning has a 3 1⁄2 octave range, from F2 to C6. The instrument is not fully chromatic throughout this range, however, as this would require 44 strings. The exact 36 - string tuning is:
There are a number of gaps in the lowest octave, which functions primarily to provide bass notes in diatonic contexts; there is also a missing G ♯ 3 in the tenor octave. The fully chromatic part of the instrument 's range begins with A3 (the A below middle C).
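The figure of 44 strings for a fully chromatic instrument over this range can be checked by counting semitones from F2 up to and including C6. As a rough illustration only (not from the article), the short Python sketch below assumes standard MIDI note numbering, in which C4 = 60 and A4 = 69, so F2 = 41 and C6 = 84:

```python
# Minimal sketch, assuming standard MIDI numbering (C4 = 60, A4 = 69).
F2 = 41   # MIDI note number for F2, the lowest string
C6 = 84   # MIDI note number for C6, the highest string

# Counting every semitone from F2 through C6 inclusive gives the number of
# strings a fully chromatic instrument over this range would need.
chromatic_strings = C6 - F2 + 1
print(chromatic_strings)  # 44 -- more than the 36 strings of a standard autoharp
```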
Diatonically - strung single - key instruments from modern luthiers are known for their lush sound. This is achieved by doubling the strings for individual notes. Since the strings for notes not in the diatonic scale need not appear in the string bed, the resulting extra space is used for the doubled strings, resulting in fewer damped strings. Two - and three - key diatonics compromise the number of doubled strings to gain the ability to play in two or three keys, and to permit tunes containing accidentals, which could not otherwise be rendered on a single key harp. A three - key harp in the circle of fifths, such as a GDA, is often called a festival or campfire harp, as the instrument can easily accompany fiddles around a campfire or at a festival.
The standard, factory chord bar layout for a 12 - chord autoharp, in two rows, is:
The standard, factory chord bar layout for a 15 - chord instrument, in two rows, is:
The standard, factory chord bar layout for a 21 - chord instrument is in three rows:
A variety of chord bar layouts may be had, both in as - delivered instruments, and after customization.
Until the 1960s, no pickups were available to amplify the autoharp other than rudimentary contact microphones, which frequently had a poor - quality, tinny sound. In the early 1960s, a bar magnetic pickup was designed for the instrument by Harry DeArmond, and manufactured by Rowe Industries. Pinkerton 's Assorted Colours used the instrument on their 1966 single "Mirror, mirror ''. In the 1970s, Oscar Schmidt came out with their own magnetic pickup.
Shown is a 1930 refinished Oscar Schmidt Inc. Model "A ''. This harp has two DeArmond magnetic pickups (one under the chord bars), with a d'Aigle fine - tuning mechanism, and d'Aigle chord bar assembly, and was used in a 1968 MGM Records / Heritage Records recording by Euphoria.
A synthesized version of the autoharp, the Omnichord, was introduced in 1981 and is now known as the Q - Chord, described as a "digital songcard guitar ''.
As initially conceived, the autoharp was played in the position of a concert zither, that is, with the instrument set flat on a table (there are three "feet '' on the back for this purpose), and the flat - edge of the instrument (below the chord bars) placed to the player 's right. The left hand worked the chord buttons, and the right hand would strum the strings in the narrow area below the chord bars. Right hand strums were typically done with a plectrum similar to a guitar pick, made of shell, plastic, or compressed felt. A strum would usually activate multiple strings, playing the chord held down by the left hand.
Partly because of this playing mode, the autoharp came to be thought of as a rhythm instrument for playing chordal accompaniment, and even today many still think of the instrument in that way. New techniques have been developed, however, and modern players can play melodies on the instrument: diatonic players, for example, are able to play fiddle tunes using open - chording techniques, "pumping '' the damper buttons while picking individual strings. Skilled chromatic players can perform a range of melodies, and even solos including melody, chords, and complex rhythmic accompaniments.
In the mid-20th century performers began experimenting with taking the instrument off the table and playing it in an upright position, held in the lap, with the back of the instrument (having the "feet '') held against the chest. Cecil Null, of the Grand Ole Opry is usually credited as the first to adopt this playing style in public performance, in the 1950s. In this position the left hand still works the chord buttons, but from the opposite edge of the instrument, and the right hand still executes the strums, but now plays in the area above the chord bars. (See Joe Butler illustration, below.) This playing mode makes a wider area of the strings available to the picking hand, increasing the range of tonal possibilities, and it proved very popular. It was soon adopted by other performers, notably by members of the Carter Family.
By the early 1970s some players were experimenting with finger - style techniques, where individual fingers of the right hand would pluck specific strings, rather than simply hold a pick and strum chords. Bryan Bowers became a master of this mode of playing, and developed a complex technique utilizing all five fingers of his right hand. This allows him to play independent bass notes, chords, melody, and counter melodies as a soloist. Bowers was also one of the early pioneers in adding a strap to the instrument and playing it while standing up.
Kilby Snow (May 28, 1905 -- March 29, 1980) was an American folk musician and virtuoso autoharpist, who won the title of Autoharp Champion of North Carolina at the age of 5. He developed the "drag note '' playing style, a technique that relied on his left - handedness to produce "slurred '' notes. Although his recorded output is small (a single album for Folkways Records in the 1960s), he has been enormously influential among autoharpists, and is regarded by many as the first modern autoharp player.
Autoharps have been used in the United States as bluegrass and folk instruments by Maybelle Carter, Sara Carter and June Carter; all of the Carter Family.
Maybelle Carter 's granddaughter Carlene Carter frequently plays the autoharp onstage and on her recordings; her song "Me and the Wildwood Rose '', a tribute to her grandmother, makes prominent use of the autoharp.
Janis Joplin occasionally played the autoharp, which can be heard in her early, unreleased recording "So Sad to Be Alone ''.
Several Lovin ' Spoonful songs feature the autoharp playing of John Sebastian, including "Do You Believe in Magic '' and "You Did n't Have to Be So Nice ''. He also played in the 1979 Randy VanWarmer hit song "Just When I Needed You Most ''.
Brian Jones played an autoharp on The Rolling Stones ' Let It Bleed, one of his final recorded performances.
Bryan Bowers developed a complex finger - picking style of playing the autoharp (as opposed to the more common strumming technique) which he initially brought to bluegrass performances with The Dillards in the 1970s, and later to several of his own solo albums. Bowers was an early experimenter with customizing the instrument, often stripping it down to 8 - 10 chords to obtain more room above the chord bars for his right - hand fingers to work in; he also favors diatonic single - key autoharps, which have doubled strings, thus increasing the power and resonance of the tone. He is also a music educator, a strong advocate for the instrument, and was inducted into the Autoharp Hall of Fame in 1993.
Comedian Billy Connolly has used an autoharp in his performances (mostly in earlier concerts during the 1980s).
British singer songwriter Corinne Bailey Rae regularly plays the autoharp and composed the title track from her 2010 album The Sea on the autoharp.
Norwegian avant - garde artist Sturle Dagsland frequently performs with an autoharp.
Singer / songwriter Brittain Ashford of the band Prairie Empire is known for using autoharp in her music, including the 2008 release "There, but for You, go I ''. She also regularly performs on the autoharp as part of her role in Ghost Quartet, a four - person song cycle composed by Dave Malloy.
In 2017, drag queen and singer - songwriter Trixie Mattel used the autoharp on her album Two Birds. Mattel also plays the autoharp as part of her regular drag performances.
|
who called lamb the prince of english essayists | Charles Lamb - wikipedia
Charles Lamb (10 February 1775 -- 27 December 1834) was an English essayist, poet, and antiquarian, best known for his Essays of Elia and for the children 's book Tales from Shakespeare, co-authored with his sister, Mary Lamb (1764 -- 1847).
Friends with such literary luminaries as Samuel Taylor Coleridge, William Wordsworth, and William Hazlitt, Lamb was at the centre of a major literary circle in England. He has been referred to by E.V. Lucas, his principal biographer, as "the most lovable figure in English literature ''.
Lamb was born in London, the son of Elizabeth Field and John Lamb. Lamb was the youngest child, with a sister 11 years older named Mary and an even older brother named John; there were four others who did not survive infancy. His father John Lamb was a lawyer 's clerk and spent most of his professional life as the assistant to a barrister named Samuel Salt, who lived in the Inner Temple in the legal district of London. It was there in Crown Office Row that Charles Lamb was born and spent his youth. Lamb created a portrait of his father in his "Elia on the Old Benchers '' under the name Lovel. Lamb 's older brother was too much his senior to be a youthful companion to the boy but his sister Mary, being born eleven years before him, was probably his closest playmate. Lamb was also cared for by his paternal aunt Hetty, who seems to have had a particular fondness for him. A number of writings by both Charles and Mary suggest that the conflict between Aunt Hetty and her sister - in - law created a certain degree of tension in the Lamb household. However, Charles speaks fondly of her and her presence in the house seems to have brought a great deal of comfort to him.
Some of Lamb 's fondest childhood memories were of time spent with Mrs Field, his maternal grandmother, who was for many years a servant to the Plummer family, who owned a large country house called Blakesware, near Widford, Hertfordshire. After the death of Mrs Plummer, Lamb 's grandmother was in sole charge of the large home and, as Mr Plummer was often absent, Charles had free rein of the place during his visits. A picture of these visits can be glimpsed in the Elia essay Blakesmoor in H -- shire.
Why, every plank and panel of that house for me had magic in it. The tapestried bed - rooms -- tapestry so much better than painting -- not adorning merely, but peopling the wainscots -- at which childhood ever and anon would steal a look, shifting its coverlid (replaced as quickly) to exercise its tender courage in a momentary eye - encounter with those stern bright visages, staring reciprocally -- all Ovid on the walls, in colours vivider than his descriptions.
Little is known about Charles 's life before he was seven other than that Mary taught him to read at a very early age and he read voraciously. It is believed that he suffered from smallpox during his early years, which forced him into a long period of convalescence. After this period of recovery Lamb began to take lessons from Mrs Reynolds, a woman who lived in the Temple and is believed to have been the former wife of a lawyer. Mrs Reynolds must have been a sympathetic schoolmistress because Lamb maintained a relationship with her throughout his life and she is known to have attended dinner parties held by Mary and Charles in the 1820s. E.V. Lucas suggests that sometime in 1781 Charles left Mrs Reynolds and began to study at the Academy of William Bird.
His time with William Bird did not last long, however, because by October 1782 Lamb was enrolled in Christ 's Hospital, a charity boarding school chartered by King Edward VI in 1553. A thorough record of Christ 's Hospital is to be found in several essays by Lamb as well as The Autobiography of Leigh Hunt and the Biographia Literaria of Samuel Taylor Coleridge, with whom Charles developed a friendship that would last for their entire lives. Despite the school 's brutality, Lamb got along well there, due in part, perhaps, to the fact that his home was not far distant, thus enabling him, unlike many other boys, to return often to its safety. Years later, in his essay "Christ 's Hospital Five and Thirty Years Ago '', Lamb described these events, speaking of himself in the third person as "L ''.
"I remember L. at school; and can well recollect that he had some peculiar advantages, which I and other of his schoolfellows had not. His friends lived in town, and were near at hand; and he had the privilege of going to see them, almost as often as he wished, through some invidious distinction, which was denied to us. ''
Christ 's Hospital was a typical English boarding school and many students later wrote of the terrible violence they suffered there. The upper master (i.e. principal or headteacher) of the school from 1778 to 1799 was Reverend James Boyer, a man renowned for his unpredictable and capricious temper. In one famous story Boyer was said to have knocked one of Leigh Hunt 's teeth out by throwing a copy of Homer at him from across the room. Lamb seemed to have escaped much of this brutality, in part because of his amiable personality and in part because Samuel Salt, his father 's employer and Lamb 's sponsor at the school, was one of the institute 's governors.
Charles Lamb suffered from a stutter and this "inconquerable impediment '' in his speech deprived him of Grecian status at Christ 's Hospital, thus disqualifying him for a clerical career. While Coleridge and other scholarly boys were able to go on to Cambridge, Lamb left school at fourteen and was forced to find a more prosaic career. For a short time he worked in the office of Joseph Paice, a London merchant, and then, for 23 weeks, until 8 February 1792, held a small post in the Examiner 's Office of the South Sea House. Its subsequent downfall in a pyramid scheme after Lamb left (the South Sea Bubble) would be contrasted to the company 's prosperity in the first Elia essay. On 5 April 1792 he went to work in the Accountant 's Office for the British East India Company, the death of his father 's employer having ruined the family 's fortunes. Charles would continue to work there for 25 years, until his retirement with pension (the "superannuation '' he refers to in the title of one essay).
In 1792 while tending to his grandmother, Mary Field, in Hertfordshire, Charles Lamb fell in love with a young woman named Ann Simmons. Although no epistolary record exists of the relationship between the two, Lamb seems to have spent years wooing her. The record of the love exists in several accounts of Lamb 's writing. "Rosamund Gray '' is a story of a young man named Allen Clare who loves Rosamund Gray but their relationship comes to nothing because of her sudden death. Miss Simmons also appears in several Elia essays under the name "Alice M ''. The essays "Dream Children '', "New Year 's Eve '', and several others, speak of the many years that Lamb spent pursuing his love that ultimately failed. Miss Simmons eventually went on to marry a silversmith and Lamb called the failure of the affair his "great disappointment ''.
Both Charles and his sister Mary suffered a period of mental illness. As he himself confessed in a letter, Charles spent six weeks in a mental facility during 1795:
Coleridge, I know not what suffering scenes you have gone through at Bristol. My life has been somewhat diversified of late. The six weeks that finished last year and began this your very humble servant spent very agreeably in a mad house at Hoxton -- I am got somewhat rational now, and do n't bite any one. But mad I was -- and many a vagary my imagination played with me, enough to make a volume if all told. My Sonnets I have extended to the number of nine since I saw you, and will some day communicate to you.
However, Mary Lamb 's illness was particularly strong, and it led her to become aggressive on a fatal occasion. On 22 September 1796, while preparing dinner, Mary became angry with her apprentice, roughly shoving the little girl out of her way and pushing her into another room. Her mother, Elizabeth, began yelling at her for this, and Mary suffered a mental breakdown as her mother continued yelling at her. A terrible event occurred: she took the kitchen knife she had been holding, unsheathed it, and approached her mother, who was sitting down. Mary, "worn down to a state of extreme nervous misery by attention to needlework by day and to her mother at night '', was seized with acute mania and stabbed her mother in the heart with a table knife. Charles ran into the house soon after the murder and took the knife out of Mary 's hand.
Later in the evening, Charles found a local place for Mary in a private mental facility called Fisher House, which had been found with the help of a doctor friend of his. While reports were published by the media, Charles wrote a letter to Samuel Taylor Coleridge in connection to the matricide:
MY dearest friend -- White or some of my friends or the public papers by this time may have informed you of the terrible calamities that have fallen on our family. I will only give you the outlines. My poor dear dearest sister in a fit of insanity has been the death of her own mother. I was at hand only time enough to snatch the knife out of her grasp. She is at present in a mad house, from whence I fear she must be moved to an hospital. God has preserved to me my senses, -- I eat and drink and sleep, and have my judgment I believe very sound. My poor father was slightly wounded, and I am left to take care of him and my aunt. Mr Norris of the Bluecoat school has been very very kind to us, and we have no other friend, but thank God I am very calm and composed, and able to do the best that remains to do. Write, -- as religious a letter as possible -- but no mention of what is gone and done with. -- With me "the former things are passed away, '' and I have something more to do that (than) to feel. God almighty have us all in his keeping.
Charles took over responsibility for Mary after refusing his brother John 's suggestion that they have her committed to a public lunatic asylum. Lamb used a large part of his relatively meagre income to keep his beloved sister in the private "madhouse '' in Islington. With the help of friends, Lamb succeeded in obtaining his sister 's release from what would otherwise have been lifelong imprisonment. Although there was no legal status of "insanity '' at the time, the jury returned the verdict of "lunacy '' which was how she was freed from guilt of willful murder, on the condition that Charles take personal responsibility for her safekeeping.
The 1799 death of John Lamb was something of a relief to Charles because his father had been mentally incapacitated for a number of years since suffering a stroke. The death of his father also meant that Mary could come to live again with him in Pentonville, and in 1800 they set up a shared home at Mitre Court Buildings in the Temple, where they would live until 1809.
In 1800, Mary 's illness came back and Charles had to take her back again to the asylum, probably Bethlehem Hospital. In those days, Charles sent a letter to Coleridge, in which he admitted he felt melancholic and lonely, adding "I almost wish that Mary were dead. ''
Later she would come back, and both he and his sister would enjoy an active and rich social life. Their London quarters became a kind of weekly salon for many of the most outstanding theatrical and literary figures of the day. In 1869, a club, The Lambs, was formed in London to carry on their salon tradition. The actor Henry James Montague founded the club 's New York counterpart in 1874.
Charles Lamb, having been to school with Samuel Coleridge, counted Coleridge as perhaps his closest, and certainly his oldest, friend. On his deathbed, Coleridge had a mourning ring sent to Lamb and his sister. Fortuitously, Lamb 's first publication was in 1796, when four sonnets by "Mr Charles Lamb of the India House '' appeared in Coleridge 's Poems on Various Subjects. In 1797 he contributed additional blank verse to the second edition, and met the Wordsworths, William and Dorothy, on his short summer holiday with Coleridge at Nether Stowey, thereby also striking up a lifelong friendship with William. In London, Lamb became familiar with a group of young writers who favoured political reform, including Percy Bysshe Shelley, William Hazlitt, and Leigh Hunt.
Lamb continued to clerk for the East India Company and doubled as a writer in various genres, his tragedy, John Woodvil, being published in 1802. His farce, Mr H, was performed at Drury Lane in 1807, where it was roundly booed. In the same year, Tales from Shakespeare (Charles handled the tragedies; his sister Mary, the comedies) was published, and became a best seller for William Godwin 's "Children 's Library ''.
On 20 July 1819, at age 44, Lamb, who, because of family commitments, had never married, fell in love with an actress, Fanny Kelly, of Covent Garden, and besides writing her a sonnet he also proposed marriage. She refused him, and he died a bachelor.
His collected essays, under the title Essays of Elia, were published in 1823 ("Elia '' being the pen name Lamb used as a contributor to The London Magazine).
The Essays of Elia was criticised in the Quarterly Review (January 1823) by Robert Southey, who thought its author irreligious. When Charles read the review, entitled "The Progress of Infidelity '', he was filled with indignation and wrote a letter to his friend Bernard Barton, declaring that he hated the review and emphasising that his words "meant no harm to religion ''. At first, Lamb did not want to respond, since he actually admired Southey; but later he felt the need to write a letter, "Elia to Southey '', in which he complained and insisted that being a Dissenter from the Church did not make him an irreligious man. The letter was published in The London Magazine in October 1823:
Rightly taken, Sir, that Paper was not against Graces, but Want of Grace; not against the ceremony, but the carelessness and slovenliness so often observed in the performance of it... You have never ridiculed, I believe, what you thought to be religion, but you are always girding at what some pious, but perhaps mistaken folks, think to be so.
A further collection called The Last Essays of Elia was published in 1833, shortly before Lamb 's death. In 1834, Samuel Coleridge died. The funeral was confined to the writer 's family, so Lamb was prevented from attending and instead wrote a letter to Rev. James Gilman, a very close friend of Coleridge 's, expressing his condolences.
He died of a streptococcal infection, erysipelas, contracted from a minor graze on his face sustained after slipping in the street, on 27 December 1834. He was 59. From 1833 till their deaths, Charles and Mary lived at Bay Cottage, Church Street, Edmonton, north of London (now part of the London Borough of Enfield). Lamb is buried in All Saints ' Churchyard, Edmonton. His sister, who was ten years his senior, survived him for more than a dozen years. She is buried beside him.
Lamb 's first publication was the inclusion of four sonnets in Coleridge 's Poems on Various Subjects, published in 1796 by Joseph Cottle. The sonnets were significantly influenced by the poems of Burns and the sonnets of William Bowles, a largely forgotten poet of the late 18th century. Lamb 's poems garnered little attention and are seldom read today. As he himself came to realise, he was a much more talented prose stylist than poet. Indeed, one of the most celebrated poets of the day -- William Wordsworth -- wrote to John Scott as early as 1815 that Lamb "writes prose exquisitely '' -- and this was five years before Lamb began The Essays of Elia for which he is now most famous.
Nevertheless, Lamb 's contributions to Coleridge 's second edition of the Poems on Various Subjects showed significant growth as a poet. These poems included The Tomb of Douglas and A Vision of Repentance. Because of a temporary falling - out with Coleridge, Lamb 's poems were to be excluded from the third edition of the Poems, though as it turned out a third edition never emerged. Instead, Coleridge 's next publication was the monumentally influential Lyrical Ballads, co-published with Wordsworth. Lamb, on the other hand, published a book entitled Blank Verse with Charles Lloyd, the mentally unstable son of the founder of Lloyds Bank. Lamb 's most famous poem, The Old Familiar Faces, was written at this time. Like most of Lamb 's poems it is unabashedly sentimental, and perhaps for this reason it is still remembered and widely read today, often being included in anthologies of British and Romantic period poetry. Of particular interest to Lambarians is the opening verse of the original version of The Old Familiar Faces, which is concerned with Lamb 's mother, whom Mary Lamb had killed. It was a verse that Lamb chose to remove from the edition of his Collected Work published in 1818:
I had a mother, but she died, and left me,
Died prematurely in a day of horrors -
In the final years of the 18th century, Lamb began to work on prose, first in a novella entitled Rosamund Gray, which tells the story of a young girl whose character is thought to be based on Ann Simmons, an early love interest. Although the story is not particularly successful as a narrative because of Lamb 's poor sense of plot, it was well thought of by Lamb 's contemporaries and led Shelley to observe, "what a lovely thing is Rosamund Gray! How much knowledge of the sweetest part of our nature in it! '' (Quoted in Barnett, page 50)
In the first years of the 19th century, Lamb began a fruitful literary cooperation with his sister Mary. Together they wrote at least three books for William Godwin 's Juvenile Library. The most successful of these was Tales From Shakespeare, which ran through two editions for Godwin and has been published dozens of times in countless editions ever since. The book contains artful prose summaries of some of Shakespeare 's most well - loved works. According to Lamb, he worked primarily on Shakespeare 's tragedies, while Mary focused mainly on the comedies.
Lamb 's essay "On the Tragedies of Shakespeare Considered with Reference to their Fitness for Stage Representation '', which was originally published in the Reflector in 1811 with the title "On Garrick, and Acting; and the Plays of Shakspeare, considered with reference to their fitness for Stage Representation '', has often been taken as the ultimate Romantic dismissal of the theatre. In the essay, Lamb argues that Shakespeare should be read, rather than performed, in order to protect Shakespeare from butchering by mass commercial performances. While the essay certainly criticises contemporary stage practice, it also develops a more complex reflection on the possibility of representing Shakespearean dramas:
Shakespeare 's dramas are for Lamb the object of a complex cognitive process that does not require sensible data, but only imaginative elements that are suggestively elicited by words. In the altered state of consciousness that the dreamlike experience of reading stands for, Lamb can see Shakespeare 's own conceptions mentally materialized.
Besides contributing to Shakespeare 's reception with his and his sister 's book Tales From Shakespeare, Lamb also contributed to the recovery of acquaintance with Shakespeare 's contemporaries. Accelerating the increasing interest of the time in the older writers, and building for himself a reputation as an antiquarian, in 1808 Lamb compiled a collection of extracts from the old dramatists, Specimens of the English Dramatic Poets Who Lived About the Time of Shakespeare. This also contained critical "characters '' of the old writers, which added to the flow of significant literary criticism, primarily of Shakespeare and his contemporaries, from Lamb 's pen. Immersion in seventeenth - century authors, such as Robert Burton and Sir Thomas Browne, also changed the way Lamb wrote, adding a distinct flavour to his writing style. Lamb 's friend, the essayist William Hazlitt, thus characterised him: "Mr. Lamb... does not march boldly along with the crowd... He prefers bye - ways to highways. When the full tide of human life pours along to some festive show, to some pageant of a day, Elia would stand on one side to look over an old book - stall, or stroll down some deserted pathway in search of a pensive description over a tottering doorway, or some quaint device in architecture, illustrative of embryo art and ancient manners. Mr. Lamb has the very soul of an antiquarian... ''
Although he did not write his first Elia essay until 1820, Lamb 's gradual perfection of the essay form for which he eventually became famous began as early as 1811 in a series of open letters to Leigh Hunt 's Reflector. The most famous of these early essays is The Londoner, in which Lamb famously derides the contemporary fascination with nature and the countryside. He would continue to fine - tune his craft, experimenting with different essayistic voices and personae, for the better part of the next quarter century.
It has been pointed out that spirituality played an important role in Lamb 's personal life, and that, although he was not a churchman, and disliked organised religion, he yet "sought consolation in religion, '' as shown by letters to Samuel Taylor Coleridge and Bernard Barton, in which he described the New Testament as his "best guide '' for life, and where he talked about how he used to read the Psalms for one or two hours without getting tired. Other papers have also dealt with his Christian beliefs. Like his friend Samuel Coleridge, Lamb was sympathetic to Priestleyan Unitarianism and was a dissenter, yet he was described by Coleridge himself as one whose "faith in Jesus ha (d) been preserved '' even after the family tragedy. Wordsworth also described him as a firm Christian in the poem Written After the Death of Charles Lamb. Alfred Ainger, in his work Charles Lamb, writes that Lamb 's religion had become "an habit ''.
The poems "On The Lord 's Prayer '', "A Vision Of Repentance '', "The Young Catechist '', "Composed at Midnight '', "Suffer Little Children, And Forbid Them Not, To Come Unto Me '', "Written a twelvemonth after the Events '', "Charity '', "Sonnet To A Friend '' and "David '' reflect much about Lamb 's faith, whereas the poem "Living Without God In The World '' has been called a "poetic attack '' to unbelief, in which Lamb expresses his disgust for atheism attributing its nature to pride.
Anne Fadiman notes regretfully that Lamb is not widely read in modern times: "I do not understand why so few other readers are clamoring for his company... (he) is kept alive largely through the tenuous resuscitations of university English departments ''.
Nevertheless, there has always been a small but enduring following for Lamb 's works, as the long - running and still - active Charles Lamb Bulletin demonstrates. Because of his notoriously quirky, even bizarre, style, he has been more of a "cult favourite '' than an author with mass popular or scholarly appeal.
Lamb was honoured by The Latymer School, a grammar school in Edmonton, a suburb of London where he lived for a time; it has six houses, one of which, "Lamb '', is named after Charles.
William Wordsworth composed an epitaph - poem "Written After The Death Of Charles Lamb '' (1835; 1836), in which he exalts the moral character of his friend. Sir Edward Elgar titled an orchestral work "Dream Children '' having in mind Lamb 's essay of that name.
|
when was the last time man united won the fa cup | List of FA Cup finals - wikipedia
The Football Association Challenge Cup, commonly known as the FA Cup, is a knockout competition in English football, organised by and named after The Football Association (the FA). It is the oldest existing football competition in the world, having commenced in the 1871 -- 72 season. The tournament is open to all clubs in the top 10 levels of the English football league system, although a club 's home stadium must meet certain requirements prior to entering the tournament. The competition culminates at the end of the league season usually in May with the FA Cup Final, officially named The Football Association Challenge Cup Final Tie, which has traditionally been regarded as the showpiece finale of the English football season.
All of the final venues, apart from four, have been in London with most being played at the original Wembley Stadium, which was used from 1923 until the stadium closed in 2000. The other venues that were used for the final prior to 1923 were Kennington Oval, Crystal Palace, Stamford Bridge and Lillie Bridge, all in London, Goodison Park in Liverpool and Fallowfield Stadium and Old Trafford in Manchester. The Millennium Stadium in Cardiff hosted the final for six years (2001 -- 2006), while the new Wembley Stadium was under construction. Other grounds have been used for replays, which until 1999 took place if the initial match ended in a draw. The new Wembley Stadium has been the permanent venue of the final since 2007.
As of 2017, the record for the most wins is held by Arsenal with 13 victories. The cup has been won by the same team in two or more consecutive years on ten occasions, and four teams have won consecutive finals more than once: Wanderers, Blackburn Rovers, Tottenham Hotspur and Arsenal. The cup has been won by a non-English team once. The cup is currently held by Arsenal, who defeated Chelsea in the 2017 final.
The winners of the first tournament were Wanderers, a team of former public schoolboys based in London, who went on to win the competition five times in its first seven seasons. The early winners of the competition were all teams of wealthy amateurs from the south of England, but in 1883, Blackburn Olympic became the first team from the north to win the cup, defeating Old Etonians. Upon his team 's return to Blackburn, Olympic captain Albert Warburton proclaimed: "The Cup is very welcome to Lancashire. It 'll have a good home and it 'll never go back to London. ''
With the advent of professionalism at around the same time, the amateur teams quickly faded from prominence in the competition. The leading professional clubs formed The Football League in 1888. Since then, one non-league team has won the cup. Tottenham Hotspur, then of the Southern League, defeated Sheffield United of The Football League to win the 1901 final. A year later Sheffield United returned to the final and won the cup, which then remained in the hands of Northern and Midland clubs until Tottenham won it again in 1921. In 1927, Cardiff City, a team which plays in the English football league system despite being based in Wales, won the cup, the only non-English club to do so. Scottish club Queen 's Park reached the final twice in the early years of the competition.
Newcastle United enjoyed a brief spell of FA Cup dominance in the 1950s, winning the trophy three times in five years, and in the 1960s, Tottenham Hotspur enjoyed a similar spell of success, with three wins in seven seasons. This marked the start of a successful period for London - based clubs, with 11 wins in 22 seasons. Teams from the second tier of English football, at the time called the Second Division, experienced an unprecedented run of cup success between 1973 and 1980. Sunderland won the cup in 1973, Southampton repeated the feat in 1976, and West Ham United won in 1980, the most recent victory by a team from outside the top division.
Until 1999, a draw in the final would result in the match being replayed at a later date; since then the final has always been decided on the day, with a penalty shootout as required. As of 2017 a penalty shoot - out has been required on only two occasions, in the 2005 and 2006 finals. The competition did not take place during the First and Second World Wars, other than in the 1914 -- 15 season, when it was completed, and the 1939 -- 40 season, when it was abandoned during the qualifying rounds.
All teams are English, except where marked (Scottish) or (Welsh).
A. ^ The official attendance for the 1923 final was reported as 126,047, but the actual figure is believed to be anywhere between 150,000 and 300,000.
Teams shown in italics are no longer in existence. Additionally, Queen 's Park ceased to be eligible to enter the FA Cup after a Scottish Football Association ruling in 1887.
|
when did the tyrannosaurus rules as apex predator | Tyrannosaurus - wikipedia
Tyrannosaurus is a genus of coelurosaurian theropod dinosaur. The species Tyrannosaurus rex (rex meaning "king '' in Latin), often colloquially called simply T. rex or T - Rex, is one of the most well - represented of the large theropods. Tyrannosaurus lived throughout what is now western North America, on what was then an island continent known as Laramidia. Tyrannosaurus had a much wider range than other tyrannosaurids. Fossils are found in a variety of rock formations dating to the Maastrichtian age of the upper Cretaceous Period, 68 to 66 million years ago. It was the last known member of the tyrannosaurids, and among the last non-avian dinosaurs to exist before the Cretaceous -- Paleogene extinction event.
Like other tyrannosaurids, Tyrannosaurus was a bipedal carnivore with a massive skull balanced by a long, heavy tail. Relative to its large and powerful hindlimbs, Tyrannosaurus forelimbs were short but unusually powerful for their size and had two clawed digits. The most complete specimen measures up to 12.3 m (40 ft) in length, up to 3.66 meters (12 ft) tall at the hips, and according to most modern estimates 8.4 metric tons (9.3 short tons) to 14 metric tons (15.4 short tons) in weight. Although other theropods rivaled or exceeded Tyrannosaurus rex in size, it is still among the largest known land predators and is estimated to have exerted the largest bite force among all terrestrial animals. By far the largest carnivore in its environment, Tyrannosaurus rex was most likely an apex predator, preying upon hadrosaurs, armoured herbivores like ceratopsians and ankylosaurs, and possibly sauropods. Some experts have suggested the dinosaur was primarily a scavenger. The question of whether Tyrannosaurus was an apex predator or a pure scavenger was among the longest ongoing debates in paleontology.
More than 50 specimens of Tyrannosaurus rex have been identified, some of which are nearly complete skeletons. Soft tissue and proteins have been reported in at least one of these specimens. The abundance of fossil material has allowed significant research into many aspects of its biology, including its life history and biomechanics. The feeding habits, physiology and potential speed of Tyrannosaurus rex are a few subjects of debate. Its taxonomy is also controversial, as some scientists consider Tarbosaurus bataar from Asia to be a second Tyrannosaurus species while others maintain Tarbosaurus is a separate genus. Several other genera of North American tyrannosaurids have also been synonymized with Tyrannosaurus.
As the archetypal theropod, Tyrannosaurus is one of the best - known dinosaurs since the 20th century, and has been featured in film, advertising, and postal stamps, as well as many other types of media.
Tyrannosaurus rex was one of the largest land carnivores of all time; the largest complete specimen, located at the Field Museum of Natural History under the name FMNH PR2081 and nicknamed Sue, measured 12.3 meters (40 ft) long, and was 3.66 meters (12 ft) tall at the hips, and according to the most recent studies estimated to have weighed between 8.4 metric tons (9.3 short tons) to 14 metric tons (15.4 short tons) when alive. Not every adult Tyrannosaurus specimen recovered is as big. Historically average adult mass estimates have varied widely over the years, from as low as 4.5 metric tons (5.0 short tons), to more than 7.2 metric tons (7.9 short tons), with most modern estimates ranging between 5.4 metric tons (6.0 short tons) and 8.0 metric tons (8.8 short tons). Hutchinson et al. (2011) found that the maximum weight of Sue, the largest complete Tyrannosaurus specimen, was between 9.5 and 18.5 metric tons (9.3 -- 18.2 long tons; 10.5 -- 20.4 short tons), though the authors stated that their upper and lower estimates were based on models with wide error bars and that they "consider (them) to be too skinny, too fat, or too disproportionate '' and provided a mean estimate at 14 metric tons (15.4 short tons) for this specimen. Packard et al. (2009) tested dinosaur mass estimation procedures on elephants and concluded that those of dinosaurs are flawed and produce over-estimations; thus, the weight of Tyrannosaurus, as well as other dinosaurs, could have been much less. Other estimations have concluded that the largest known Tyrannosaurus specimens had masses approaching or exceeding 9 tonnes.
Due to the relatively small number of recovered specimens and the large population of individuals present at any given time when Tyrannosaurus was alive, there could easily have been larger specimens than those currently known, including "Sue '', though the discovery of such individuals may never be possible given the incomplete nature of the fossil record. Holtz has also suggested that "it is very reasonable to suspect that there were individuals that were 10, 15, or even 20 percent larger than Sue in any T. rex population. ''
The neck of Tyrannosaurus rex formed a natural S - shaped curve like that of other theropods, but was short and muscular to support the massive head. The forelimbs had only two clawed fingers, along with an additional small metacarpal representing the remnant of a third digit. In contrast the hind limbs were among the longest in proportion to body size of any theropod. The tail was heavy and long, sometimes containing over forty vertebrae, in order to balance the massive head and torso. To compensate for the immense bulk of the animal, many bones throughout the skeleton were hollow, reducing its weight without significant loss of strength.
The largest known Tyrannosaurus rex skull measures up to 1.52 meters (5 ft) in length. Large fenestrae (openings) in the skull reduced weight and provided areas for muscle attachment, as in all carnivorous theropods. But in other respects Tyrannosaurus 's skull was significantly different from those of large non-tyrannosauroid theropods. It was extremely wide at the rear but had a narrow snout, allowing unusually good binocular vision. The skull bones were massive and the nasals and some other bones were fused, preventing movement between them; but many were pneumatized (contained a "honeycomb '' of tiny air spaces) which may have made the bones more flexible as well as lighter. These and other skull - strengthening features are part of the tyrannosaurid trend towards an increasingly powerful bite, which easily surpassed that of all non-tyrannosaurids. The tip of the upper jaw was U-shaped (most non-tyrannosauroid carnivores had V - shaped upper jaws), which increased the amount of tissue and bone a tyrannosaur could rip out with one bite, although it also increased the stresses on the front teeth.
The teeth of Tyrannosaurus rex displayed marked heterodonty (differences in shape). The premaxillary teeth at the front of the upper jaw were closely packed, D - shaped in cross-section, had reinforcing ridges on the rear surface, were incisiform (their tips were chisel - like blades) and curved backwards. The D - shaped cross-section, reinforcing ridges and backwards curve reduced the risk that the teeth would snap when Tyrannosaurus bit and pulled. The remaining teeth were robust, like "lethal bananas '' rather than daggers, more widely spaced and also had reinforcing ridges. Those in the upper jaw were larger than those in all but the rear of the lower jaw. The largest found so far is estimated to have been 30.5 centimeters (12 in) long including the root when the animal was alive, making it the largest tooth of any carnivorous dinosaur yet found.
While there is no direct evidence for Tyrannosaurus rex having had feathers, many scientists now consider it likely that T. rex had feathers on at least parts of its body, due to their presence in related species. Mark Norell of the American Museum of Natural History summarized the balance of evidence by stating that: "we have as much evidence that T. rex was feathered, at least during some stage of its life, as we do that australopithecines like Lucy had hair. ''
The first evidence for feathers in tyrannosauroids came from the small species Dilong paradoxus, found in the Yixian Formation of China, and reported in 2004. As with many other coelurosaurian theropods discovered in the Yixian, the fossil skeleton was preserved with a coat of filamentous structures which are commonly recognized as the precursors of feathers. Because all known skin impressions from larger tyrannosauroids known at the time showed evidence of scales, the researchers who studied Dilong speculated that feathers may correlate negatively with body size -- that juveniles may have been feathered, then shed the feathers and expressed only scales as the animal became larger and no longer needed insulation to stay warm. Subsequent discoveries showed that even some large tyrannosauroids had feathers covering much of their bodies, casting doubt on the hypothesis that they were a size - related feature.
While skin impressions from a Tyrannosaurus rex specimen nicknamed "Wyrex '' (BHI 6230) discovered in Montana in 2002, as well as some other giant tyrannosauroid specimens, show at least small patches of mosaic scales, others, such as Yutyrannus huali (which was up to 9 meters (30 ft) long and weighed about 1,400 kilograms (3,100 lb)), preserve feathers on various sections of the body, strongly suggesting that its whole body was covered in feathers. It is possible that the extent and nature of feather covering in tyrannosauroids may have changed over time in response to body size, a warmer climate, or other factors. In 2017, based on skin impressions found on the tail, ilium and neck of the "Wyrex '' (BHI 6230) specimen and other closely related tyrannosaurids, it was suggested that large - bodied tyrannosaurids were scaly and, if partly feathered, these were limited to the dorsum.
A study in 2016 proposed that large theropods like Tyrannosaurus had teeth covered in lips like extant lizards instead of bare teeth like crocodilians. This was based on the presence of enamel, which according to the study needs to remain hydrated, an issue not faced by aquatic animals like crocodilians or toothless animals like birds.
Based on comparisons of bone texture of Daspletosaurus with extant crocodilians, a detailed study in 2017 by Thomas D. Carr et al. found that tyrannosaurs had large, flat scales on their snouts. At the center of these scales were small keratinised patches. In crocodilians, such patches cover bundles of sensory neurons that can detect mechanical, thermal and chemical stimuli. They proposed that tyrannosaurs probably also had bundles of sensory neurons under their facial scales and may have used them to identify objects, measure the temperature of their nests and gently pick up eggs and hatchlings. Although the study did not explicitly discuss the evidence for or against lips, many major news outlets considered it evidence against tyrannosaurs having lips. Comparisons with crocodilian facial tissue and Thomas D. Carr 's personal interpretation of the findings were cited as support for the conclusion that tyrannosaurs did not have lips.
Henry Fairfield Osborn, president of the American Museum of Natural History, named Tyrannosaurus rex in 1905. The generic name is derived from the Greek words τύραννος (tyrannos, meaning "tyrant '') and σαῦρος (sauros, meaning "lizard ''). Osborn used the Latin word rex, meaning "king '', for the specific name. The full binomial therefore translates to "tyrant lizard the king '' or "King Tyrant Lizard '', emphasizing the animal 's size and perceived dominance over other species of the time.
Teeth from what is now documented as a Tyrannosaurus rex were found in 1874 by Arthur Lakes near Golden, Colorado. In the early 1890s, John Bell Hatcher collected postcranial elements in eastern Wyoming. The fossils were believed to be from a large species of Ornithomimus (O. grandis) but are now considered Tyrannosaurus rex remains. Vertebral fragments found by Edward Drinker Cope in western South Dakota in 1892 and assigned to Manospondylus gigas have also been recognized as belonging to Tyrannosaurus rex.
Barnum Brown, assistant curator of the American Museum of Natural History, found the first partial skeleton of Tyrannosaurus rex in eastern Wyoming in 1900. H.F. Osborn originally named this skeleton Dynamosaurus imperiosus in a paper in 1905. Brown found another partial skeleton in the Hell Creek Formation in Montana in 1902. Osborn used this holotype to describe Tyrannosaurus rex in the same paper in which D. imperiosus was described. In 1906, Osborn recognized the two as synonyms, and acted as first revisor by selecting Tyrannosaurus as the valid name. The original Dynamosaurus material resides in the collections of the Natural History Museum, London.
In total, Brown found five Tyrannosaurus partial skeletons. In 1941, Brown 's 1902 find was sold to the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania. Brown 's fourth and largest find, also from Hell Creek, is on display in the American Museum of Natural History in New York.
The first named fossil specimen which can be attributed to Tyrannosaurus rex consists of two partial vertebrae (one of which has been lost) found by Edward Drinker Cope in 1892. Cope believed that they belonged to an "agathaumid '' (ceratopsid) dinosaur, and named them Manospondylus gigas, meaning "giant porous vertebra '' in reference to the numerous openings for blood vessels he found in the bone. The M. gigas remains were later identified as those of a theropod rather than a ceratopsid, and H.F. Osborn recognized the similarity between M. gigas and Tyrannosaurus rex as early as 1917. Owing to the fragmentary nature of the Manospondylus vertebrae, Osborn did not synonymize the two genera.
In June 2000, the Black Hills Institute located the type locality of M. gigas in South Dakota and unearthed more tyrannosaur bones there. These were judged to represent further remains of the same individual, and to be identical to those of Tyrannosaurus rex. According to the rules of the International Code of Zoological Nomenclature (ICZN), the system that governs the scientific naming of animals, Manospondylus gigas should therefore have priority over Tyrannosaurus rex, because it was named first. The Fourth Edition of the ICZN, which took effect on January 1, 2000, states that "the prevailing usage must be maintained '' when "the senior synonym or homonym has not been used as a valid name after 1899 '' and "the junior synonym or homonym has been used for a particular taxon, as its presumed valid name, in at least 25 works, published by at least 10 authors in the immediately preceding 50 years... '' Tyrannosaurus rex may qualify as the valid name under these conditions and would most likely be considered a nomen protectum ("protected name '') under the ICZN if it is ever formally published on, which it has not yet been. Manospondylus gigas could then be deemed a nomen oblitum ("forgotten name '').
Sue Hendrickson, amateur paleontologist, discovered the most complete (approximately 85 %) and the largest Tyrannosaurus fossil skeleton known in the Hell Creek Formation near Faith, South Dakota, on August 12, 1990. This Tyrannosaurus, nicknamed Sue in her honor, was the object of a legal battle over its ownership. In 1997 this was settled in favor of Maurice Williams, the original land owner. The fossil collection was purchased by the Field Museum of Natural History at auction for $7.6 million, making it the most expensive dinosaur skeleton to date. From 1998 to 1999 Field Museum of Natural History preparators spent over 25,000 man - hours taking the rock off each of the bones. The bones were then shipped off to New Jersey where the mount was made. The finished mount was then taken apart, and along with the bones, shipped back to Chicago for the final assembly. The mounted skeleton opened to the public on May 17, 2000 in the great hall (Stanley Field Hall) at the Field Museum of Natural History. A study of this specimen 's fossilized bones showed that Sue reached full size at age 19 and died at age 28, the longest any tyrannosaur is known to have lived. Early speculation that Sue may have died from a bite to the back of the head was not confirmed. Though subsequent study showed many pathologies in the skeleton, no bite marks were found. Damage to the back of the skull may have been caused by post-mortem trampling. Recent speculation indicates that Sue may have died of starvation after contracting a parasitic infection from eating diseased meat; the resulting infection would have caused inflammation in the throat, ultimately leading Sue to starve because she could no longer swallow food. This hypothesis is substantiated by smooth - edged holes in her skull which are similar to those caused in modern - day birds that contract the same parasite.
Another Tyrannosaurus, nicknamed Stan, in honor of amateur paleontologist Stan Sacrison, was found in the Hell Creek Formation near Buffalo, South Dakota, in the spring of 1987. It was not collected until 1992, as it was mistakenly thought to be a Triceratops skeleton. Stan is 63 % complete and is on display in the Black Hills Institute of Geological Research in Hill City, South Dakota, after an extensive world tour during 1995 and 1996. This tyrannosaur, too, was found to have many bone pathologies, including broken and healed ribs, a broken (and healed) neck and a spectacular hole in the back of its head, about the size of a Tyrannosaurus tooth.
In the summer of 2000, Jack Horner discovered five Tyrannosaurus skeletons near the Fort Peck Reservoir in Montana. One of the specimens was reported to be perhaps the largest Tyrannosaurus ever found.
In 2001, a 50 % complete skeleton of a juvenile Tyrannosaurus was discovered in the Hell Creek Formation in Montana, by a crew from the Burpee Museum of Natural History of Rockford, Illinois. Dubbed Jane, the find was initially considered the first known skeleton of the pygmy tyrannosaurid Nanotyrannus but subsequent research has revealed that it is more likely a juvenile Tyrannosaurus. It is the most complete and best preserved juvenile example known to date. Jane has been examined by Jack Horner, Pete Larson, Robert Bakker, Greg Erickson, and several other renowned paleontologists, because of the uniqueness of her age. Jane is currently on exhibit at the Burpee Museum of Natural History in Rockford, Illinois.
In a press release in 2006, Montana State University revealed that it possessed the largest Tyrannosaurus skull yet discovered. Discovered in the 1960s and only recently reconstructed, the skull measured 59 inches (150 cm) long compared to the 55.4 inches (141 cm) of Sue 's skull, a difference of 6.5 %.
Tyrannosaurus is the type genus of the superfamily Tyrannosauroidea, the family Tyrannosauridae, and the subfamily Tyrannosaurinae; in other words it is the standard by which paleontologists decide whether to include other species in the same group. Other members of the tyrannosaurine subfamily include the North American Daspletosaurus and the Asian Tarbosaurus, both of which have occasionally been synonymized with Tyrannosaurus. Tyrannosaurids were once commonly thought to be descendants of earlier large theropods such as megalosaurs and carnosaurs, although more recently they were reclassified with the generally smaller coelurosaurs.
In 1955, Soviet paleontologist Evgeny Maleev named a new species, Tyrannosaurus bataar, from Mongolia. By 1965, this species had been renamed Tarbosaurus bataar. Despite the renaming, many phylogenetic analyses have found Tarbosaurus bataar to be the sister taxon of Tyrannosaurus rex, and it has often been considered an Asian species of Tyrannosaurus. A recent redescription of the skull of Tarbosaurus bataar has shown that it was much narrower than that of Tyrannosaurus rex and that during a bite, the distribution of stress in the skull would have been very different, closer to that of Alioramus, another Asian tyrannosaur. A related cladistic analysis found that Alioramus, not Tyrannosaurus, was the sister taxon of Tarbosaurus, which, if true, would suggest that Tarbosaurus and Tyrannosaurus should remain separate. The discovery and description of Qianzhousaurus in 2014, would disprove this and reveal that Alioramus belonged to the clade Alioramini. The discovery of the tyrannosaurid Lythronax further indicates that Tarbosaurus and Tyrannosaurus are closely related, forming a clade with fellow Asian tyrannosaurid Zhuchengtyrannus, with Lythronax being their sister taxon. A further study from 2016 by Steve Brusatte, Thomas Carr et al., also indicates Tyrannosaurus may have been an immigrant from Asia, as well as a possible descendant of Tarbosaurus. The study further indicates the possibility that Tyrannosaurus may have driven other tyrannosaurids that were native to North America extinct through competition. Other finds in 2006 indicate giant tyrannosaurs may have been present in North America as early as 75 million years ago. Whether or not this specimen belongs to Tyrannosaurus rex, a new species of Tyrannosaurus, or a new genus entirely is still unknown.
Other tyrannosaurid fossils found in the same formations as Tyrannosaurus rex were originally classified as separate taxa, including Aublysodon and Albertosaurus megagracilis, the latter being named Dinotyrannus megagracilis in 1995. These fossils are now universally considered to belong to juvenile Tyrannosaurus rex. A small but nearly complete skull from Montana, 60 centimeters (2.0 ft) long, may be an exception. This skull was originally classified as a species of Gorgosaurus (G. lancensis) by Charles W. Gilmore in 1946, but was later referred to a new genus, Nanotyrannus. Opinions remain divided on the validity of N. lancensis. Many paleontologists consider the skull to belong to a juvenile Tyrannosaurus rex. There are minor differences between the two species, including the higher number of teeth in N. lancensis, which lead some scientists to recommend keeping the two genera separate until further research or discoveries clarify the situation.
Below is the cladogram of Tyrannosauridae based on the phylogenetic analysis conducted by Loewen et al. in 2013.
(Cladogram taxa: Gorgosaurus libratus, Albertosaurus sarcophagus, the Dinosaur Park tyrannosaurid, Daspletosaurus torosus, the Two Medicine tyrannosaurid, Teratophoneus curriei, Bistahieversor sealeyi, Lythronax argestes, Tyrannosaurus rex, Tarbosaurus bataar and Zhuchengtyrannus magnus; the branching structure of the original diagram is not reproduced here.)
The identification of several specimens as juvenile Tyrannosaurus rex has allowed scientists to document ontogenetic changes in the species, estimate the lifespan, and determine how quickly the animals would have grown. The smallest known individual (LACM 28471, the "Jordan theropod '') is estimated to have weighed only 30 kg (66 lb), while the largest, such as FMNH PR2081 (Sue) most likely weighed about 5,650 kg (12,460 lb). Histologic analysis of Tyrannosaurus rex bones showed LACM 28471 had aged only 2 years when it died, while Sue was 28 years old, an age which may have been close to the maximum for the species.
Histology has also allowed the age of other specimens to be determined. Growth curves can be developed when the ages of different specimens are plotted on a graph along with their mass. A Tyrannosaurus rex growth curve is S - shaped, with juveniles remaining under 1,800 kg (4,000 lb) until approximately 14 years of age, when body size began to increase dramatically. During this rapid growth phase, a young Tyrannosaurus rex would gain an average of 600 kg (1,300 lb) a year for the next four years. At 18 years of age, the curve plateaus again, indicating that growth slowed dramatically. For example, only 600 kg (1,300 lb) separated the 28 - year - old Sue from a 22 - year - old Canadian specimen (RTMP 81.12. 1). A 2004 histological study performed by different workers corroborates these results, finding that rapid growth began to slow at around 16 years of age.
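The S - shaped growth pattern described above can be illustrated with a simple logistic curve. The short Python sketch below is only a rough illustration: the plateau mass, growth constant and inflection age are placeholder values chosen to echo the figures quoted in this section (a plateau near 5,650 kg and rapid growth centred in the mid-teens); they are not the regression parameters published in the growth studies.

import math

def logistic_mass(age_years, m_max=5650.0, k=0.55, t_mid=16.0):
    # Illustrative logistic (S - shaped) growth curve: body mass in kilograms as a function of age in years.
    # m_max, k and t_mid are hypothetical placeholders, not values fitted from actual Tyrannosaurus histology.
    return m_max / (1.0 + math.exp(-k * (age_years - t_mid)))

for age in (10, 14, 16, 18, 22, 28):
    print(f"age {age:2d} yr: ~{logistic_mass(age):5.0f} kg")

With these placeholder values the curve stays below roughly 1,500 kg at age 14, climbs steeply through the mid-teens and levels off by the mid-twenties, reproducing the general shape, though not the exact numbers, of the published growth curves.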
Another study corroborated the latter study 's results but found the growth rate to be much faster, at around 1,800 kilograms (4,000 lb) per year. Although these results were much higher than previous estimations, the authors noted that they significantly lowered the great difference between the actual growth rate and the one which would be expected of an animal of its size. The sudden change in growth rate at the end of the growth spurt may indicate physical maturity, a hypothesis which is supported by the discovery of medullary tissue in the femur of a 16 to 20 - year - old Tyrannosaurus rex from Montana (MOR 1125, also known as B - rex). Medullary tissue is found only in female birds during ovulation, indicating that B - rex was of reproductive age. Further study indicates an age of 18 for this specimen. In 2016, it was finally confirmed by Mary Higby Schweitzer and Lindsay Zanno et al. that the soft tissue within the femur of MOR 1125 was medullary tissue. This also confirmed the identity of the specimen as a female. The discovery of medullary bone tissue within Tyrannosaurus may prove valuable in determining the sex of other dinosaur species in future examinations, as the chemical makeup of medullary tissue is unmistakable. Other tyrannosaurids exhibit extremely similar growth curves, although with lower growth rates corresponding to their lower adult sizes.
Over half of the known Tyrannosaurus rex specimens appear to have died within six years of reaching sexual maturity, a pattern which is also seen in other tyrannosaurs and in some large, long - lived birds and mammals today. These species are characterized by high infant mortality rates, followed by relatively low mortality among juveniles. Mortality increases again following sexual maturity, partly due to the stresses of reproduction. One study suggests that the rarity of juvenile Tyrannosaurus rex fossils is due in part to low juvenile mortality rates; the animals were not dying in large numbers at these ages, and so were not often fossilized. This rarity may also be due to the incompleteness of the fossil record or to the bias of fossil collectors towards larger, more spectacular specimens. In a 2013 lecture, Thomas Holtz Jr. suggested that dinosaurs "lived fast and died young '' because they reproduced quickly whereas mammals have long life spans because they take longer to reproduce. Gregory S. Paul also writes that Tyrannosaurus reproduced quickly and died young, but attributes their short life spans to the dangerous lives they lived.
As the number of known specimens increased, scientists began to analyze the variation between individuals and discovered what appeared to be two distinct body types, or morphs, similar to some other theropod species. As one of these morphs was more solidly built, it was termed the ' robust ' morph while the other was termed ' gracile '. Several morphological differences associated with the two morphs were used to analyze sexual dimorphism in Tyrannosaurus rex, with the ' robust ' morph usually suggested to be female. For example, the pelvis of several ' robust ' specimens seemed to be wider, perhaps to allow the passage of eggs. It was also thought that the ' robust ' morphology correlated with a reduced chevron on the first tail vertebra, also ostensibly to allow eggs to pass out of the reproductive tract, as had been erroneously reported for crocodiles.
In recent years, evidence for sexual dimorphism has been weakened. A 2005 study reported that previous claims of sexual dimorphism in crocodile chevron anatomy were in error, casting doubt on the existence of similar dimorphism between Tyrannosaurus rex sexes. A full - sized chevron was discovered on the first tail vertebra of Sue, an extremely robust individual, indicating that this feature could not be used to differentiate the two morphs anyway. As Tyrannosaurus rex specimens have been found from Saskatchewan to New Mexico, differences between individuals may be indicative of geographic variation rather than sexual dimorphism. The differences could also be age - related, with ' robust ' individuals being older animals.
Only a single Tyrannosaurus rex specimen has been conclusively shown to belong to a specific sex. Examination of B - rex demonstrated the preservation of soft tissue within several bones. Some of this tissue has been identified as a medullary tissue, a specialized tissue grown only in modern birds as a source of calcium for the production of eggshell during ovulation. As only female birds lay eggs, medullary tissue is only found naturally in females, although males are capable of producing it when injected with female reproductive hormones like estrogen. This strongly suggests that B - rex was female, and that she died during ovulation. Recent research has shown that medullary tissue is never found in crocodiles, which are thought to be the closest living relatives of dinosaurs, aside from birds. The shared presence of medullary tissue in birds and theropod dinosaurs is further evidence of the close evolutionary relationship between the two.
Modern representations in museums, art, and film show Tyrannosaurus rex with its body approximately parallel to the ground and tail extended behind the body to balance the head.
Like many bipedal dinosaurs, Tyrannosaurus rex was historically depicted as a ' living tripod ', with the body at 45 degrees or less from the vertical and the tail dragging along the ground, similar to a kangaroo. This concept dates from Joseph Leidy 's 1865 reconstruction of Hadrosaurus, the first to depict a dinosaur in a bipedal posture. In 1915, convinced that the creature stood upright, Henry Fairfield Osborn, former president of the American Museum of Natural History, further reinforced the notion in unveiling the first complete Tyrannosaurus rex skeleton arranged this way. It stood in an upright pose for 77 years, until it was dismantled in 1992.
By 1970, scientists realized this pose was incorrect and could not have been maintained by a living animal, as it would have resulted in the dislocation or weakening of several joints, including the hips and the articulation between the head and the spinal column. The inaccurate AMNH mount inspired similar depictions in many films and paintings (such as Rudolph Zallinger 's famous mural The Age of Reptiles in Yale University 's Peabody Museum of Natural History) until the 1990s, when films such as Jurassic Park introduced a more accurate posture to the general public.
When Tyrannosaurus rex was first discovered, the humerus was the only element of the forelimb known. For the initial mounted skeleton as seen by the public in 1915, Osborn substituted longer, three - fingered forelimbs like those of Allosaurus. A year earlier, Lawrence Lambe described the short, two - fingered forelimbs of the closely related Gorgosaurus. This strongly suggested that Tyrannosaurus rex had similar forelimbs, but this hypothesis was not confirmed until the first complete Tyrannosaurus rex forelimbs were identified in 1989, belonging to MOR 555 (the "Wankel rex ''). The remains of Sue also include complete forelimbs. Tyrannosaurus rex arms are very small relative to overall body size, measuring only 1 meter (3.3 ft) long, and some scholars have labelled them as vestigial. The bones show large areas for muscle attachment, indicating considerable strength. This was recognized as early as 1906 by Osborn, who speculated that the forelimbs may have been used to grasp a mate during copulation. It has also been suggested that the forelimbs were used to assist the animal in rising from a prone position.
Another possibility is that the forelimbs held struggling prey while it was killed by the tyrannosaur 's enormous jaws. This hypothesis may be supported by biomechanical analysis. Tyrannosaurus rex forelimb bones exhibit extremely thick cortical bone, which have been interpreted as evidence that they were developed to withstand heavy loads. The biceps brachii muscle of an adult Tyrannosaurus rex was capable of lifting 199 kilograms (439 lb) by itself; other muscles such as the brachialis would work along with the biceps to make elbow flexion even more powerful. The M. biceps muscle of T. rex was 3.5 times as powerful as the human equivalent. A Tyrannosaurus rex forearm had a limited range of motion, with the shoulder and elbow joints allowing only 40 and 45 degrees of motion, respectively. In contrast, the same two joints in Deinonychus allow up to 88 and 130 degrees of motion, respectively, while a human arm can rotate 360 degrees at the shoulder and move through 165 degrees at the elbow. The heavy build of the arm bones, strength of the muscles, and limited range of motion may indicate a system evolved to hold fast despite the stresses of a struggling prey animal. In the first detailed scientific description of Tyrannosaurus forelimbs, paleontologists Kenneth Carpenter and Matt Smith dismissed notions that the forelimbs were useless or that Tyrannosaurus rex was an obligate scavenger.
According to paleontologist Steven Stanley from the University of Hawaii, the roughly 1 - meter - long arms of Tyrannosaurus rex were used for slashing prey. This would have applied especially to juveniles, since the arms grew more slowly in proportion to the body, so a younger Tyrannosaurus rex would have had proportionally much longer arms than an adult.
In the March 2005 issue of Science, Mary Higby Schweitzer of North Carolina State University and colleagues announced the recovery of soft tissue from the marrow cavity of a fossilized leg bone from a Tyrannosaurus rex. The bone had been intentionally, though reluctantly, broken for shipping and then not preserved in the normal manner, specifically because Schweitzer was hoping to test it for soft tissue. Designated as the Museum of the Rockies specimen 1125, or MOR 1125, the dinosaur was previously excavated from the Hell Creek Formation. Flexible, bifurcating blood vessels and fibrous but elastic bone matrix tissue were recognized. In addition, microstructures resembling blood cells were found inside the matrix and vessels. The structures bear resemblance to ostrich blood cells and vessels. Whether an unknown process, distinct from normal fossilization, preserved the material, or the material is original, the researchers do not know, and they are careful not to make any claims about preservation. If it is found to be original material, any surviving proteins may be used as a means of indirectly guessing some of the DNA content of the dinosaurs involved, because each protein is typically created by a specific gene. The absence of previous finds may be the result of people assuming preserved tissue was impossible, therefore not looking. Since the first, two more tyrannosaurs and a hadrosaur have also been found to have such tissue - like structures. Research on some of the tissues involved has suggested that birds are closer relatives to tyrannosaurs than other modern animals.
In studies reported in Science in April 2007, Asara and colleagues concluded that seven traces of collagen proteins detected in purified Tyrannosaurus rex bone most closely match those reported in chickens, followed by frogs and newts. The discovery of proteins from a creature tens of millions of years old, along with similar traces the team found in a mastodon bone at least 160,000 years old, upends the conventional view of fossils and may shift paleontologists ' focus from bone hunting to biochemistry. Until these finds, most scientists presumed that fossilization replaced all living tissue with inert minerals. Paleontologist Hans Larsson of McGill University in Montreal, who was not part of the studies, called the finds "a milestone '', and suggested that dinosaurs could "enter the field of molecular biology and really slingshot paleontology into the modern world ''.
Subsequent studies in April 2008 confirmed the close connection of Tyrannosaurus rex to modern birds. Postdoctoral biology researcher Chris Organ at Harvard University announced, "With more data, they would probably be able to place T. rex on the evolutionary tree between alligators and chickens and ostriches. '' Co-author John M. Asara added, "We also show that it groups better with birds than modern reptiles, such as alligators and green anole lizards. ''
The presumed soft tissue was called into question by Thomas Kaye of the University of Washington and his co-authors in 2008. They contend that what was really inside the tyrannosaur bone was slimy biofilm created by bacteria that coated the voids once occupied by blood vessels and cells. The researchers found that what previously had been identified as remnants of blood cells, because of the presence of iron, were actually framboids, microscopic mineral spheres bearing iron. They found similar spheres in a variety of other fossils from various periods, including an ammonite. In the ammonite they found the spheres in a place where the iron they contain could not have had any relationship to the presence of blood. Schweitzer has strongly criticized Kaye 's claims and argues that there is no reported evidence that biofilms can produce branching, hollow tubes like those noted in her study. San Antonio, Schweitzer and colleagues published an analysis in 2011 of what parts of the collagen had been recovered, finding that it was the inner parts of the collagen coil that had been preserved, as would have been expected from a long period of protein degradation. Other research challenges the identification of soft tissue as biofilm and confirms finding "branching, vessel - like structures '' from within fossilized bone.
As of 2014, it is not clear if Tyrannosaurus was endothermic (warm - blooded). Tyrannosaurus, like most dinosaurs, was long thought to have an ectothermic ("cold - blooded '') reptilian metabolism. The idea of dinosaur ectothermy was challenged by scientists like Robert T. Bakker and John Ostrom in the early years of the "Dinosaur Renaissance '', beginning in the late 1960s. Tyrannosaurus rex itself was claimed to have been endothermic ("warm - blooded ''), implying a very active lifestyle. Since then, several paleontologists have sought to determine the ability of Tyrannosaurus to regulate its body temperature. Histological evidence of high growth rates in young Tyrannosaurus rex, comparable to those of mammals and birds, may support the hypothesis of a high metabolism. Growth curves indicate that, as in mammals and birds, Tyrannosaurus rex growth was limited mostly to immature animals, rather than the indeterminate growth seen in most other vertebrates.
Oxygen isotope ratios in fossilized bone are sometimes used to determine the temperature at which the bone was deposited, as the ratio between certain isotopes correlates with temperature. In one specimen, the isotope ratios in bones from different parts of the body indicated a temperature difference of no more than 4 to 5 ° C (7 to 9 ° F) between the vertebrae of the torso and the tibia of the lower leg. This small temperature range between the body core and the extremities was claimed by paleontologist Reese Barrick and geochemist William Showers to indicate that Tyrannosaurus rex maintained a constant internal body temperature (homeothermy) and that it enjoyed a metabolism somewhere between ectothermic reptiles and endothermic mammals. Other scientists have pointed out that the ratio of oxygen isotopes in the fossils today does not necessarily represent the same ratio in the distant past, and may have been altered during or after fossilization (diagenesis). Barrick and Showers have defended their conclusions in subsequent papers, finding similar results in another theropod dinosaur from a different continent and tens of millions of years earlier in time (Giganotosaurus). Ornithischian dinosaurs also showed evidence of homeothermy, while varanid lizards from the same formation did not. Even if Tyrannosaurus rex does exhibit evidence of homeothermy, it does not necessarily mean that it was endothermic. Such thermoregulation may also be explained by gigantothermy, as in some living sea turtles.
Two isolated fossilized footprints have been tentatively assigned to Tyrannosaurus rex. The first was discovered at Philmont Scout Ranch, New Mexico, in 1983 by American geologist Charles Pillmore. Originally thought to belong to a hadrosaurid, examination of the footprint revealed a large ' heel ' unknown in ornithopod dinosaur tracks, and traces of what may have been a hallux, the dewclaw - like fourth digit of the tyrannosaur foot. The footprint was published as the ichnogenus Tyrannosauripus pillmorei in 1994, by Martin Lockley and Adrian Hunt. Lockley and Hunt suggested that it was very likely the track was made by a Tyrannosaurus rex, which would make it the first known footprint from this species. The track was made in what was once a vegetated wetland mud flat. It measures 83 centimeters (33 in) long by 71 centimeters (28 in) wide.
A second footprint that may have been made by a Tyrannosaurus was first reported in 2007 by British paleontologist Phil Manning, from the Hell Creek Formation of Montana. This second track measures 72 centimeters (28 in) long, shorter than the track described by Lockley and Hunt. Whether or not the track was made by Tyrannosaurus is unclear, though Tyrannosaurus and Nanotyrannus are the only large theropods known to have existed in the Hell Creek Formation.
A set of footprints in Glenrock, Wyoming, dating to the Maastrichtian stage of the Late Cretaceous and hailing from the Lance Formation was described by Scott Persons, Phil Currie et al. in January 2016, and is believed to belong to either a juvenile Tyrannosaurus rex or the dubious tyrannosaurid Nanotyrannus lancensis. From measurements and based on the positions of the footprints, the animal was believed to be traveling at a walking speed of around 2.8 to 5 miles per hour and was estimated to have a hip height of 1.56 m (5.1 ft) to 2.06 m (6.8 ft). A follow - up paper appeared in 2017, increasing the speed estimations by 50 -- 80 %.
There are two main issues concerning the locomotory abilities of Tyrannosaurus: how well it could turn; and what its maximum straight - line speed was likely to have been. Both are relevant to the debate about whether it was a hunter or a scavenger.
Tyrannosaurus may have been slow to turn, possibly taking one to two seconds to turn only 45 ° -- an amount that humans, being vertically oriented and tailless, can spin in a fraction of a second. The cause of the difficulty is rotational inertia, since much of Tyrannosaurus ' mass was some distance from its center of gravity, like a human carrying a heavy timber horizontally -- although it might have reduced the average distance by arching its back and tail and pulling its head and forelimbs close to its body, rather like the way ice skaters pull their arms closer in order to spin faster.
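The scale of this effect can be illustrated with a crude point - mass calculation; all of the figures below are hypothetical and chosen only to show why mass far from the hips inflates rotational inertia. This is a minimal sketch, not the model behind the published estimates.

```python
# Crude illustration (all numbers hypothetical) of why a long, horizontally balanced body
# turns slowly: mass far from the vertical rotation axis through the hips inflates the
# moment of inertia, I = sum(m * r^2).
import math

segments = [  # (mass in kg, distance from the hips in metres) -- invented values
    (1200, 3.5),  # mid and rear tail
    (1000, 2.0),  # tail base
    (2500, 0.5),  # torso over the hips
    (800,  1.5),  # chest
    (500,  3.0),  # neck and head
]

I = sum(m * r ** 2 for m, r in segments)  # moment of inertia about a vertical axis (kg m^2)
torque = 20_000                           # assumed constant muscular torque (N m)
alpha = torque / I                        # angular acceleration (rad/s^2)
theta = math.radians(45)                  # a 45-degree turn
t = math.sqrt(2 * theta / alpha)          # time from rest, using theta = 0.5 * alpha * t^2

print(f"I ~ {I:,.0f} kg m^2; a 45-degree turn takes ~{t:.1f} s at this torque")
# Halving every distance r (tucking the head, arching the tail) would cut I to a quarter
# and the turn time roughly in half.
```

With these invented figures the turn takes on the order of 1.4 seconds, in line with the one - to - two - second range quoted above.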
Scientists have produced a wide range of maximum speed estimates, mostly around 11 meters per second (40 km / h; 25 mph), but a few as low as 5 -- 11 meters per second (18 -- 40 km / h; 11 -- 25 mph), and a few as high as 20 meters per second (72 km / h; 45 mph). Researchers have to rely on various estimating techniques because, while there are many tracks of very large theropods walking, so far none have been found of very large theropods running -- and this absence may indicate that they did not run. Scientists who think that Tyrannosaurus was able to run point out that hollow bones and other features that would have lightened its body may have kept adult weight to a mere 4.5 metric tons (5.0 short tons) or so, or that other animals like ostriches and horses with long, flexible legs are able to achieve high speeds through slower but longer strides. Some have also argued that Tyrannosaurus had relatively larger leg muscles than any animal alive today, which could have enabled fast running at 40 -- 70 kilometers per hour (25 -- 43 mph).
Jack Horner and Don Lessem argued in 1993 that Tyrannosaurus was slow and probably could not run (no airborne phase in mid-stride), because its ratio of femur (thigh bone) to tibia (shin bone) length was greater than 1, as in most large theropods and like a modern elephant. Holtz (1998) noted that tyrannosaurids and some closely related groups had significantly longer distal hindlimb components (shin plus foot plus toes) relative to the femur length than most other theropods, and that tyrannosaurids and their close relatives had a tightly interlocked metatarsus that more effectively transmitted locomotory forces from the foot to the lower leg than in earlier theropods ("metatarsus '' means the foot bones, which function as part of the leg in digitigrade animals). He therefore concluded that tyrannosaurids and their close relatives were the fastest large theropods. Thomas Holtz Jr. echoed these sentiments in his 2013 lecture, stating that the giant allosaurs had shorter feet for the same body size than Tyrannosaurus, whereas Tyrannosaurus had longer, skinnier and more interlocked feet for the same body size; attributes of faster moving animals.
A study by Eric Snively and Anthony P. Russell published in 2003 also found that the tyrannosaurid arctometatarsals and elastic ligaments worked together in what they called a ' tensile keystone model ' to strengthen the feet of Tyrannosaurus, increase the animal 's stability and add greater resistance to dissociation over that of other theropod families, while still allowing the resiliency that is otherwise reduced in ratites, horses, giraffids and other animals whose metapodia are reduced to a single element. The study also pointed out that elastic ligaments in larger vertebrates could store and return relatively more elastic strain energy, which could have improved locomotor efficiency and decreased the strain energy transferred to the bones, and suggested that this mechanism could have worked efficiently in tyrannosaurids as well. To test this, the study identified the types of ligaments attached to the metatarsals, examined how they functioned together, and compared them to those of other theropods and modern - day analogs. The scientists found that arctometatarsals may have enabled tyrannosaurid feet to absorb forces such as linear deceleration, lateral acceleration and torsion more effectively than those of other theropods. The study also stated that this may imply, though not demonstrate, that tyrannosaurids such as Tyrannosaurus had greater agility than other large theropods without an arctometatarsus.
Christiansen (1998) estimated that the leg bones of Tyrannosaurus were not significantly stronger than those of elephants, which are relatively limited in their top speed and never actually run (there is no airborne phase), and hence proposed that the dinosaur 's maximum speed would have been about 11 meters per second (40 km / h; 25 mph), which is about the speed of a human sprinter. But he also noted that such estimates depend on many dubious assumptions.
Farlow and colleagues (1995) have argued that a Tyrannosaurus weighing 5.4 metric tons (6.0 short tons) to 7.3 metric tons (8.0 short tons) would have been critically or even fatally injured if it had fallen while moving quickly, since its torso would have slammed into the ground at a deceleration of 6 g (six times the acceleration due to gravity, or about 60 m / s²) and its tiny arms could not have reduced the impact. Giraffes have been known to gallop at 50 kilometers per hour (31 mph), despite the risk that they might break a leg or worse, which can be fatal even in a safe environment such as a zoo. Thus it is possible that Tyrannosaurus also moved fast when necessary and had to accept such risks.
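The quoted deceleration can be reproduced with simple kinematics under assumed numbers for the fall (the drop height of the torso 's centre of mass and the distance over which it stops); the sketch below is only a consistency check, not the calculation by Farlow and colleagues.

```python
# Back-of-the-envelope check (assumed numbers) of the ~6 g figure for a fall while running:
# a torso dropping from roughly hip height and stopping over a short crush distance.
g = 9.81              # m/s^2
drop_height = 1.5     # m, assumed height of the torso's centre of mass
stop_distance = 0.25  # m, assumed distance over which the body decelerates on impact

impact_speed = (2 * g * drop_height) ** 0.5            # from v^2 = 2*g*h
deceleration = impact_speed ** 2 / (2 * stop_distance)  # from v^2 = 2*a*d

print(f"impact at {impact_speed:.1f} m/s, deceleration ~{deceleration:.0f} m/s^2 "
      f"(~{deceleration / g:.1f} g)")
# ~59 m/s^2, i.e. about 6 g, in line with the figure quoted above.
```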
In another study, Gregory S. Paul pointed out that the flexed - kneed and digitigrade adult Tyrannosaurus was much better adapted for running than elephants or humans, noting that Tyrannosaurus had a large ilium bone and cnemial crest that would have supported the large muscles needed for running. He also argued that Alexander 's (1989) formula for calculating speed from bone strength was only partly reliable, suggesting that it is overly sensitive to bone length and makes long bones appear artificially weak. He further pointed out that the lowered risk of being wounded in combat may have been worth the risk of Tyrannosaurus falling while running.
Most recent research on Tyrannosaurus locomotion does not support speeds faster than 40 kilometers per hour (25 mph), i.e. moderate - speed running. For example, a 2002 paper in Nature used a mathematical model (validated by applying it to three living animals, alligators, chickens, and humans; later eight more species including emus and ostriches) to gauge the leg muscle mass needed for fast running (over 40 km / h or 25 mph). They found that proposed top speeds in excess of 40 kilometers per hour (25 mph) were infeasible, because they would require very large leg muscles (more than approximately 40 -- 86 % of total body mass). Even moderately fast speeds would have required large leg muscles. This discussion is difficult to resolve, as it is unknown how large the leg muscles actually were in Tyrannosaurus. If they were smaller, only 18 kilometers per hour (11 mph) walking or jogging might have been possible.
A study in 2007 used computer models to estimate running speeds, based on data taken directly from fossils, and claimed that Tyrannosaurus rex had a top running speed of 8 meters per second (29 km / h; 18 mph). An average professional football (soccer) player would be slightly slower, while a human sprinter can reach 12 meters per second (43 km / h; 27 mph). These computer models predict a top speed of 17.8 meters per second (64 km / h; 40 mph) for a 3 - kilogram (6.6 lb) Compsognathus (probably a juvenile individual).
In 2010, Scott Persons, a graduate student from the University of Alberta, proposed that Tyrannosaurus 's speed may have been enhanced by strong tail muscles. He found that theropods such as T. rex had certain muscle arrangements that are different from modern day birds and mammals but with some similarities to modern reptiles. He concluded that the caudofemoralis muscles which link the tail bones and the upper leg bones could have assisted Tyrannosaurus in leg retraction and enhanced its running ability, agility and balance. The caudofemoralis muscle would have been a key muscle in femoral retraction; pulling back the leg at the femur. The study also found that theropod skeletons such as those of Tyrannosaurus had adaptations (such as elevated transverse processes in the tail vertebrae) to enable the growth of larger tail muscles and that Tyrannosaurus 's tail muscle mass may have been underestimated by over 25 percent and perhaps as much as 45 percent. The caudofemoralis muscle was found to comprise 58 percent of the muscle mass in the tail of Tyrannosaurus. Tyrannosaurus also had the largest absolute and relative caudofemoralis muscle mass out of the three extinct organisms in the study. This is because Tyrannosaurus also had additional adaptations to enable large tail muscles; the elongation of its tail 's hemal arches. According to Persons, the increase in tail muscle mass would have moved the center of mass closer to the hindquarters and hips which would have lessened the strain on the leg muscles to support its weight; improving its overall balance and agility. This would also have made the animal less front - heavy, thus reducing rotational inertia. Persons also notes that the tail is also rich in tendons and septa which could have been stores of elastic energy, and thereby improved locomotive efficiency. Persons adds that this means non-avian theropods actually had broader tails than previously depicted, as broad or broader laterally than dorsoventrally near the base.
Heinrich Mallison from Berlin 's Museum of Natural History also presented a theory in 2011, suggesting that Tyrannosaurus and many other dinosaurs may have achieved relatively high speeds through short rapid strides instead of the long strides employed by modern birds and mammals when running, likening their movement to power - walking. This, according to Mallison, would have been achievable irrespective of joint strength and lessened the need for additional muscle mass in the legs, particularly at the ankles. To support his theory, Mallison assessed the limbs of various dinosaurs and found that they were different from those of modern mammals and birds; having their stride length greatly limited by their skeletons, but also having relatively large muscles at the hindquarters. He found a few similarities between the muscles in dinosaurs and race - walkers; having less muscle mass in the ankles but more at the hindquarters. John Hutchinson advised caution regarding this theory, suggesting that they must first look into dinosaur muscles to see how frequently they could have contracted.
A study in July 2017 by a team of researchers led by William Sellers from the University of Manchester found that an adult Tyrannosaurus was incapable of running due to very high skeletal loads. The study used the latest computing technology to test its findings. The researchers used two different structural mechanical systems to create the computer model. The weight they settled on for their calculations was a conservative estimate of 7 tons. The model showed that speeds above 11 mph (18 km / h) would have probably shattered the leg bones of Tyrannosaurus. The finding may mean that running was also not possible for other giant theropod dinosaurs like Giganotosaurus, Mapusaurus and Acrocanthosaurus.
Another study in July 2017 by researchers from the German Centre for Integrative Biodiversity Research (iDiv) found that the top speed of Tyrannosaurus was around 17 mph (27 km / h). Other dinosaurs including Triceratops, Velociraptor and Brachiosaurus were also analyzed in the study, as were many living animals like elephants, cheetahs and rabbits. The speed of Tyrannosaurus was calculated from its weight and the medium upon which it travelled (in the case of the theropod, land), together with two assumptions: first, that animals reach their maximum speeds during comparatively short sprints, and second, that Newton 's laws of motion dictate that mass has to overcome inertia. The study found that large animals like Tyrannosaurus exhaust their energy reserves long before they reach their theoretical top speed, resulting in a parabola - like relationship between size and speed. The equation can calculate the top speed of an animal with almost 90 % accuracy and can be applied to both living and extinct animals.
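The shape of that relationship can be sketched with a simple function of the same general kind (an allometric speed ceiling multiplied by an exhaustion term). The coefficients below are invented for illustration, not the values fitted in the study; they were merely chosen so that a roughly 7 - tonne animal lands near the reported ~ 27 km / h.

```python
# Illustrative sketch of the hump-shaped size/speed relationship described above.
# The functional form (allometric ceiling damped by an exhaustion term) follows the
# general idea of the study; all four coefficients are hypothetical placeholders.
import math

def top_speed_kmh(mass_kg, a=16.0, b=0.25, h=65.0, i=-0.65):
    ceiling = a * mass_kg ** b                       # speed if acceleration time were unlimited
    damping = 1.0 - math.exp(-h * mass_kg ** i)      # fraction reached before energy stores run out
    return ceiling * damping

for mass in (10, 100, 1_000, 7_000):                 # speed peaks at intermediate body sizes
    print(f"{mass:>6} kg -> ~{top_speed_kmh(mass):.0f} km/h")
```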
Those who argue that Tyrannosaurus was incapable of running estimate the top speed of Tyrannosaurus at about 17 kilometers per hour (11 mph). This is still faster than its most likely prey species, hadrosaurids and ceratopsians. In addition, some advocates of the idea that Tyrannosaurus was a predator claim that tyrannosaur running speed is not important, since it may have been slow but still faster than its probable prey. Thomas Holtz also noted that Tyrannosaurus had proportionately longer feet than the animals it hunted: duck - billed dinosaurs and horned dinosaurs. Paul and Christiansen (2000) argued that at least the later ceratopsians had upright forelimbs and the larger species may have been as fast as rhinos. Healed Tyrannosaurus bite wounds on ceratopsian fossils are interpreted as evidence of attacks on living ceratopsians (see below). If the ceratopsians that lived alongside Tyrannosaurus were fast, that casts doubt on the argument that Tyrannosaurus did not have to be fast to catch its prey.
A study conducted by Lawrence Witmer and Ryan Ridgely of Ohio University found that Tyrannosaurus shared the heightened sensory abilities of other coelurosaurs, highlighting relatively rapid and coordinated eye and head movements, an enhanced ability to sense low frequency sounds that would allow tyrannosaurs to track prey movements from long distances, and an enhanced sense of smell. A study published by Kent Stevens of the University of Oregon concluded that Tyrannosaurus had keen vision. By applying modified perimetry to facial reconstructions of several dinosaurs including Tyrannosaurus, the study found that Tyrannosaurus had a binocular range of 55 degrees, surpassing that of modern hawks, and had 13 times the visual acuity of a human, thereby surpassing the visual acuity of an eagle, which is only 3.6 times that of a person. This would have allowed Tyrannosaurus to discern objects as far as 6 km (3.7 mi) away, which is greater than the 1.6 km (1 mi) that a human can see.
Thomas Holtz Jr. noted that the high depth perception of Tyrannosaurus may have been a consequence of the prey it had to hunt: horned dinosaurs such as Triceratops, armored dinosaurs such as Ankylosaurus, and duck - billed dinosaurs, which may have had complex social behaviors. He suggested that this made precision more crucial for Tyrannosaurus, enabling it to "get in, get that blow in and take it down ''. In contrast, Acrocanthosaurus had limited depth perception because it hunted large sauropods, which were relatively rare during the time of Tyrannosaurus.
Tyrannosaurus had very large olfactory bulbs and olfactory nerves relative to their brain size, the organs responsible for a heightened sense of smell. This suggests that the sense of smell was highly developed, and implies that tyrannosaurs could detect carcasses by scent alone across great distances. The sense of smell in tyrannosaurs may have been comparable to modern vultures, which use scent to track carcasses for scavenging. Research on the olfactory bulbs has shown that Tyrannosaurus rex had the most highly developed sense of smell of 21 sampled non-avian dinosaur species.
Somewhat unusually among theropods, T. rex had a very long cochlea. The length of the cochlea is often related to hearing acuity, or at least the importance of hearing in behavior, implying that hearing was a particularly important sense to tyrannosaurs. Specifically, data suggests that Tyrannosaurus rex heard best in the low - frequency range, and that low - frequency sounds were an important part of tyrannosaur behavior.
A study by Grant R. Hurlburt, Ryan C. Ridgely and Lawrence Witmer obtained estimates for Encephalization Quotients (EQs), based on reptiles and birds, as well as estimates for the ratio of cerebrum to brain mass. The study concluded that Tyrannosaurus had the relatively largest brain of all adult non-avian dinosaurs with the exception of certain small maniraptoriforms (Bambiraptor, Troodon and Ornithomimus). The study found that Tyrannosaurus 's relative brain size was still within the range of modern reptiles, being at most 2 standard deviations above the mean of non-avian reptile EQs. The estimates for the ratio of cerebrum mass to brain mass ranged from 47.5 to 49.53 percent. According to the study, this is more than the lowest estimates for extant birds (44.6 percent), but still close to the typical ratios of the smallest sexually mature alligators, which range from 45.9 -- 47.9 percent.
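For readers unfamiliar with the measure, an EQ is simply the measured brain mass divided by the brain mass expected for an animal of the same body mass from a baseline allometric curve. The sketch below uses hypothetical baseline coefficients and example masses, not the figures from the study.

```python
# Minimal sketch of how a reptile-based Encephalization Quotient (EQ) is computed.
# The baseline coefficients a, b and the example brain/body masses are hypothetical
# placeholders, not the values used by Hurlburt, Ridgely and Witmer.
def expected_brain_mass_g(body_mass_g, a=0.025, b=0.57):
    """Allometric baseline: expected brain mass = a * body_mass ** b (coefficients illustrative)."""
    return a * body_mass_g ** b

def encephalization_quotient(brain_mass_g, body_mass_g):
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# Hypothetical example: a 6-tonne adult with a 400 g brain.
print(f"EQ ~ {encephalization_quotient(400, 6_000_000):.1f}")
```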
A study in 2012 by Karl Bates and Peter Falkingham demonstrated that Tyrannosaurus had the most powerful bite of any terrestrial animal that has ever lived. They found that an adult Tyrannosaurus could have exerted 35,000 to 57,000 N (7,868 to 12,814 lbf) of force in the back teeth. Even higher estimates were made by professor Mason B. Meers of the University of Tampa in 2003. In his study, Meers estimated a possible bite force of 183,000 to 235,000 N (41,140 to 52,830 lbf). A study in 2017 by Gregory Erickson and Paul Gignac published in the journal Scientific Reports found that Tyrannosaurus could exert bite forces of 8,526 to 34,522 N (1,917 to 7,761 lbf) and tooth pressures of 718 to 2,974 MPa (104,137 to 431,342 psi). This allowed it to crush bones during repetitive biting and fully exploit the carcasses of large dinosaurs, giving it access to the mineral salts and marrow within bone that smaller carnivores could not access. Research done by Stephan Lautenschlager et al. from the University of Bristol reveals that Tyrannosaurus was also capable of a maximum jaw gape of around 80 degrees, a necessary adaptation for a wide range of jaw angles in order to power the creature 's strong bite.
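The bite - force and tooth - pressure figures are linked by nothing more than pressure equals force divided by contact area. The quick check below back - calculates the tooth contact area the two upper figures imply; the only assumption is that the reported pressure and force refer to the same bite.

```python
# Rough consistency check relating the reported bite-force and tooth-pressure figures:
# pressure is force divided by the contact area over which the tooth tip bears on bone.
bite_force_n = 34_522        # upper bite-force figure quoted above (N)
tooth_pressure_pa = 2_974e6  # corresponding tooth-pressure figure (Pa)

implied_contact_area_mm2 = bite_force_n / tooth_pressure_pa * 1e6
print(f"implied contact area ~{implied_contact_area_mm2:.0f} mm^2 per tooth tip")
# ~12 mm^2 -- the enormous pressures come from concentrating the force on a small,
# roughly peg-like contact patch, which is what allows bone to be crushed.
```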
The debate about whether Tyrannosaurus was a predator or a pure scavenger is as old as the debate about its locomotion. Lambe (1917) described a good skeleton of the close Tyrannosaurus relative Gorgosaurus and concluded that it, and therefore also Tyrannosaurus, was a pure scavenger, because the Gorgosaurus teeth showed hardly any wear. This argument is no longer taken seriously, because theropods replaced their teeth quite rapidly. Ever since the first discovery of Tyrannosaurus most scientists have speculated that it was a predator; like modern large predators it would readily scavenge or steal another predator 's kill if it had the opportunity.
Paleontologist Jack Horner has been a major advocate of the idea that Tyrannosaurus was exclusively a scavenger and did not engage in active hunting at all, though Horner himself has claimed that he never published this idea in the peer - reviewed scientific literature and used it mainly as a tool to teach a popular audience, particularly children, the dangers of making assumptions in science (such as assuming T. rex was a hunter) without using evidence. Nevertheless, Horner presented several arguments in the popular literature to support the pure scavenger hypothesis:
Other evidence suggests hunting behavior in Tyrannosaurus. The eye sockets of tyrannosaurs are positioned so that the eyes would point forward, giving them binocular vision slightly better than that of modern hawks. Horner also pointed out that the tyrannosaur lineage had a history of steadily improving binocular vision. It is not obvious why natural selection would have favored this long - term trend if tyrannosaurs had been pure scavengers, which would not have needed the advanced depth perception that stereoscopic vision provides. In modern animals, binocular vision is found mainly in predators.
A skeleton of the hadrosaurid Edmontosaurus annectens has been described from Montana with healed tyrannosaur - inflicted damage on its tail vertebrae. The fact that the damage seems to have healed suggests that the Edmontosaurus survived a tyrannosaur 's attack on a living target, i.e. the tyrannosaur had attempted active predation. There is also evidence for an aggressive interaction between a Triceratops and a Tyrannosaurus in the form of partially healed tyrannosaur tooth marks on a Triceratops brow horn and squamosal (a bone of the neck frill); the bitten horn is also broken, with new bone growth after the break. It is not known what the exact nature of the interaction was, though: either animal could have been the aggressor. Since the Triceratops wounds healed, it is most likely that the Triceratops survived the encounter and managed to overcome the Tyrannosaurus. Paleontologist Peter Dodson estimates that in a battle against a bull Triceratops, the Triceratops had the upper hand and would successfully defend itself by inflicting fatal wounds to the Tyrannosaurus using its sharp horns.
When examining Sue, paleontologist Pete Larson found a broken and healed fibula and tail vertebrae, scarred facial bones and a tooth from another Tyrannosaurus embedded in a neck vertebra. If correct, these might be strong evidence for aggressive behavior between tyrannosaurs but whether it would have been competition for food and mates or active cannibalism is unclear. Further recent investigation of these purported wounds has shown that most are infections rather than injuries (or simply damage to the fossil after death) and the few injuries are too general to be indicative of intraspecific conflict. Some researchers argue that if Tyrannosaurus were a scavenger, another dinosaur had to be the top predator in the Amerasian Upper Cretaceous. Top prey were the larger marginocephalians and ornithopods. The other tyrannosaurids share so many characteristics that only small dromaeosaurs and troodontids remain as feasible top predators. In this light, scavenger hypothesis adherents have suggested that the size and power of tyrannosaurs allowed them to steal kills from smaller predators, although they may have had a hard time finding enough meat to scavenge, being outnumbered by smaller theropods. Most paleontologists accept that Tyrannosaurus was both an active predator and a scavenger like most large carnivores.
Tyrannosaurus may have had infectious saliva used to kill its prey. This theory was first proposed by William Abler. Abler examined the teeth of tyrannosaurids between each tooth serration; the serrations may have held pieces of carcass with bacteria, giving Tyrannosaurus a deadly, infectious bite much like the Komodo dragon was thought to have. Jack Horner regards Tyrannosaurus tooth serrations as more like cubes in shape than the serrations on a Komodo monitor 's teeth, which are rounded. All forms of saliva contain possibly hazardous bacteria, so the prospect of it being used as a method of predation is disputable.
Tyrannosaurus, and most other theropods, probably primarily processed carcasses with lateral shakes of the head, like crocodilians. The head was not as maneuverable as the skulls of allosauroids, due to flat joints of the neck vertebrae.
A study from Currie, Horner, Erickson and Longrich in 2010 has been put forward as evidence of cannibalism in the genus Tyrannosaurus. They studied some Tyrannosaurus specimens with tooth marks in the bones, attributable to the same genus. The tooth marks were identified in the humerus, foot bones and metatarsals, and this was seen as evidence for opportunistic scavenging, rather than wounds caused by intraspecific combat. In a fight, they proposed it would be difficult to reach down to bite in the feet of a rival, making it more likely that the bite marks were made in a carcass. As the bite marks were made in body parts with relatively scanty amounts of flesh, it is suggested that the Tyrannosaurus was feeding on a carcass in which the more fleshy parts had already been consumed. They were also open to the possibility that other tyrannosaurids practiced cannibalism. Other evidence for cannibalism has been unearthed.
Philip J. Currie of the University of Alberta has suggested that Tyrannosaurus may have been a pack animal. Currie compared Tyrannosaurus rex favorably to the related species Tarbosaurus bataar and Albertosaurus sarcophagus, fossil evidence of which Currie had previously used to suggest that they lived in packs. Currie pointed out that a find in South Dakota preserved three Tyrannosaurus rex skeletons in close proximity to each other. After using CT scanning, Currie stated that Tyrannosaurus would have been capable of such complex behavior, because its brain size is three times greater than what would be expected for an animal of its size. Currie elaborated that Tyrannosaurus had a larger brain - to - body - size proportion than crocodiles and three times more than that of plant - eating dinosaurs such as Triceratops of the same size. Currie believed Tyrannosaurus to be six times smarter than most dinosaurs and other reptiles. Because available prey such as Triceratops and Ankylosaurus were well - armored, and others were fast - moving, it would have been necessary for Tyrannosaurus to hunt in groups. Currie speculated that juveniles and adults would have hunted together, with the faster juveniles chasing down the prey and the more powerful adults making the kill, by analogy to modern - day pack hunters where each member contributes a skill.
Currie 's pack - hunting hypothesis has been harshly criticized by other scientists. Brian Switek, writing for The Guardian in 2011, noted that Currie 's pack hypothesis has not been presented as research in a peer - reviewed scientific journal, but primarily in relation to a television special and tie - in book called Dino Gangs. Switek also noted that Currie 's argument for pack hunting in Tyrannosaurus rex is primarily based on analogy to a different species, Tarbosaurus bataar, and that the supposed evidence for pack hunting in T. bataar itself has not yet been published and subjected to scientific scrutiny. According to Switek and other scientists who have participated in panel discussions about the Dino Gangs television program, the evidence for pack hunting in Tarbosaurus and Albertosaurus is weak, based primarily on the association of several skeletons, for which numerous alternative explanations have been proposed (e.g. drought or floods forcing numerous specimens together to die in one place). In fact, Switek notes that the Albertosaurus bonebed site, on which Currie has based most of the interpretations of supposed pack hunting in related species, preserves geological evidence of just such a flood. Switek said, "bones alone are not enough to reconstruct dinosaur behaviour. The geological context in which those bones are found -- the intricate details of ancient environments and the pace of prehistoric time -- are essential to investigating the lives and deaths of dinosaurs, '' and noted that Currie must first describe the geological evidence from other tyrannosaur bonebed sites before jumping to conclusions about social behavior. Switek described the sensational claims provided in press releases and news stories surrounding the Dino Gangs program as "nauseating hype '' and noted that the production company responsible for the program, Atlantic Productions, has a poor record involving exaggerating claims about new fossil discoveries, most notably the controversial claim it published regarding the supposed early human ancestor Darwinius, which soon turned out to be a relative of lemurs instead.
Lawrence Witmer pointed out that social behavior can not be determined by brain endocasts, and that the brains of solitary leopards are identical to those of cooperatively hunting lions; estimated brain sizes only show that an animal may have hunted in groups. In his opinion, the brains of tyrannosaurs were large enough for what he dubs "communal hunting '', a semi-organized behavior that falls between solitary and cooperative hunting. Witmer claims that communal hunting is a step towards the evolution of cooperative hunting. He found it hard to believe that tyrannosaurs would not have exploited the opportunity to join others in making a kill, and thus decrease risk and increase their chances of success.
On July 23, 2014, evidence in the form of fossilized trackways in Canada showed for the first time that tyrannosaurs may have hunted in groups.
In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. Since stress fractures are caused by repeated trauma rather than singular events, they are more likely to be caused by regular behavior than other types of injuries. Of the 81 Tyrannosaurus foot bones examined in the study, one was found to have a stress fracture, while none of the 10 hand bones were found to have stress fractures. The researchers found tendon avulsions only among Tyrannosaurus and Allosaurus. An avulsion injury left a divot on the humerus of Sue the T. rex, apparently located at the origin of the deltoid or teres major muscles. The presence of avulsion injuries being limited to the forelimb and shoulder in both Tyrannosaurus and Allosaurus suggests that theropods may have had a musculature more complex than and functionally different from those of birds. The researchers concluded that Sue 's tendon avulsion was probably obtained from struggling prey. The presence of stress fractures and tendon avulsions in general provides evidence for a "very active '' predation - based diet rather than obligate scavenging.
A 2009 study showed that holes in the skulls of several specimens that were previously explained by intraspecific attacks might have been caused by Trichomonas - like parasites that commonly infect avians. Further evidence of intraspecific attack was found by Joseph Peterson and his colleagues in the juvenile Tyrannosaurus nicknamed Jane. Peterson and his team found that Jane 's skull showed healed puncture wounds on the upper jaw and snout which they believe came from another juvenile Tyrannosaurus. Subsequent CT scans of Jane 's skull further confirmed the team 's hypothesis, showing that the puncture wounds came from a traumatic injury and that there was subsequent healing. The team also stated that Jane 's injuries were structurally different from the parasite - induced lesions found in Sue and that Jane 's injuries were on her face, whereas the parasite that infected Sue caused lesions to the lower jaw.
Tyrannosaurus lived during what is referred to as the Lancian faunal stage (Maastrichtian age) at the end of the Late Cretaceous. Tyrannosaurus ranged from Canada in the north to at least Texas and New Mexico in the south of Laramidia. During this time Triceratops was the major herbivore in the northern portion of its range, while the titanosaurian sauropod Alamosaurus "dominated '' its southern range. Tyrannosaurus remains have been discovered in different ecosystems, including inland and coastal subtropical, and semi-arid plains.
Several notable Tyrannosaurus remains have been found in the Hell Creek Formation. During the Maastrichtian this area was subtropical, with a warm and humid climate. The flora consisted mostly of angiosperms, but also included trees like dawn redwood (Metasequoia) and Araucaria. Tyrannosaurus shared this ecosystem with Triceratops, related ceratopsians Nedoceratops, Tatankaceratops and Torosaurus, the hadrosaurid Edmontosaurus annectens and possibly a species of Parasaurolophus, the armored dinosaurs Denversaurus, Edmontonia and Ankylosaurus, the dome headed dinosaurs Pachycephalosaurus, Stygimoloch, Sphaerotholus, and Dracorex, the hypsilophodont Thescelosaurus, and the theropods Ornithomimus, Struthiomimus, Orcomimus, Acheroraptor, Dakotaraptor, Richardoestesia, Paronychodon, Pectinodon, and Troodon.
Another formation with tyrannosaur remains is the Lance Formation of Wyoming. This has been interpreted as a bayou environment similar to today 's Gulf Coast. The fauna was very similar to Hell Creek, but with Struthiomimus replacing its relative Ornithomimus. The small ceratopsian Leptoceratops also lived in the area.
In its southern range Tyrannosaurus lived alongside the titanosaur Alamosaurus, the ceratopsians Torosaurus, Bravoceratops and Ojoceratops, hadrosaurs which consisted of a species of Edmontosaurus, Kritosaurus and a possible species of Gryposaurus, the nodosaur Glyptodontopelta, the oviraptorid Ojoraptosaurus, possible species of the theropods Troodon and Richardoestesia, and the pterosaur Quetzalcoatlus. The region is thought to have been dominated by semi-arid inland plains, following the probable retreat of the Western Interior Seaway as global sea levels fell.
Tyrannosaurus may have also inhabited Mexico 's Lomas Coloradas formation in Sonora. Though skeletal evidence is lacking, six shed and broken teeth from the fossil bed have been thoroughly compared with other theropod genera and appear to be identical to those of Tyrannosaurus. If true, the evidence indicates the range of Tyrannosaurus was possibly more extensive than previously believed. It is possible that tyrannosaurs were originally Asian species, migrating to North America before the end of the Cretaceous period.
Since it was first described in 1905, Tyrannosaurus rex has become the most widely recognized dinosaur species in popular culture. It is the only dinosaur that is commonly known to the general public by its full scientific name (binomial name) and the scientific abbreviation T. rex has also come into wide usage. Robert T. Bakker notes this in The Dinosaur Heresies and explains that a name like "Tyrannosaurus rex '' is "just irresistible to the tongue ''.
|
how many major diagnostic categories are there in the ms-drg system | Major Diagnostic Category - wikipedia
The Major Diagnostic Categories (MDC) are formed by dividing all possible principal diagnoses (from ICD - 9 - CM) into 25 mutually exclusive diagnosis areas. MDC codes, like diagnosis - related group (DRG) codes, are primarily a claims and administrative data element unique to the United States medical care reimbursement system. DRG codes also are mapped, or grouped, into MDC codes.
The diagnoses in each MDC correspond to a single organ system or cause and, in general, are associated with a particular medical specialty. MDC 1 to MDC 23 are grouped according to principal diagnoses. Patients are assigned to MDC 24 (Multiple Significant Trauma) when they have at least two significant trauma diagnosis codes (either as principal or secondary diagnoses) from different body site categories. Patients assigned to MDC 25 (HIV Infections) must have a principal diagnosis of an HIV infection, or a principal diagnosis of a significant HIV - related condition together with a secondary diagnosis of an HIV infection.
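The grouping rules above can be sketched in a few lines of code. The sketch below is a hedged illustration only: it is not the actual CMS grouper software, and the diagnosis codes, the trauma body - site table and the principal - diagnosis table are placeholders invented for the example.

```python
# Hedged sketch of the MDC assignment rules described above -- not the real MS-DRG grouper.
# All lookup tables below are illustrative placeholders.
PRINCIPAL_DX_TO_MDC = {"410.01": 5, "486": 4}  # e.g. cardiac -> MDC 5, pneumonia -> MDC 4 (illustrative)
SIGNIFICANT_TRAUMA_BODY_SITE = {"850.0": "head", "823.0": "lower limb"}  # illustrative trauma codes
HIV_INFECTION = {"042"}
HIV_RELATED = {"136.3"}  # e.g. pneumocystosis (illustrative)

def assign_mdc(principal_dx, secondary_dxs):
    dxs = [principal_dx] + list(secondary_dxs)
    # MDC 24: at least two significant trauma diagnoses from different body-site categories.
    trauma_sites = {SIGNIFICANT_TRAUMA_BODY_SITE[d] for d in dxs if d in SIGNIFICANT_TRAUMA_BODY_SITE}
    if len(trauma_sites) >= 2:
        return 24
    # MDC 25: principal HIV infection, or principal significant HIV-related condition
    # together with a secondary HIV infection.
    if principal_dx in HIV_INFECTION or (
            principal_dx in HIV_RELATED and any(d in HIV_INFECTION for d in secondary_dxs)):
        return 25
    # Otherwise, group by the principal diagnosis.
    return PRINCIPAL_DX_TO_MDC.get(principal_dx)

print(assign_mdc("042", []))           # -> 25
print(assign_mdc("850.0", ["823.0"]))  # -> 24
```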
MDC 0, unlike the others, can be reached from a number of diagnosis / procedure situations, all related to transplants. This is due to the expense involved for the transplants so designated and because these transplants can be needed for a number of reasons which do not all come from one diagnosis domain. DRGs which reach MDC 0 are assigned to the MDC for the principal diagnosis instead of to the MDC associated with the designated DRG.
|
who has home field for the world series | World Series - wikipedia
The World Series is the annual championship series of Major League Baseball (MLB) in North America, contested since 1903 between the American League (AL) champion team and the National League (NL) champion team. The winner of the World Series championship is determined through a best - of - seven playoff, and the winning team is awarded the Commissioner 's Trophy. As the series is played in October (and occasionally November), during the fall season in North America, it is sometimes referred to as the Fall Classic.
Prior to 1969, the team with the best regular season win - loss record in each league automatically advanced to the World Series; since then each league has conducted a championship series (ALCS and NLCS) preceding the World Series to determine which teams will advance. As of 2016, the World Series has been contested 112 times, with the AL winning 64 and the NL winning 48.
The 2016 World Series took place between the Cleveland Indians and the Chicago Cubs. Seven games were played, with the Cubs victorious after game seven, played in Cleveland. The final score was 8 -- 7; the game went into extra innings after a tied score of 6 -- 6. This was the third World Series won by the Cubs but only their first since 1908, a period of 108 years. With the Cubs ' record - long title drought finally ended, the Indians ' championship dry spell of 68 years and counting -- the Indians last won the Series in 1948 -- is currently the longest - running Series title absence.
In the American League, the New York Yankees have played in 40 World Series and won 27, the Philadelphia / Kansas City / Oakland Athletics have played in 14 and won 9, and the Boston Red Sox have played in 12 and won 8, including the first World Series. In the National League, the St. Louis Cardinals have appeared in 19 and won 11, the New York / San Francisco Giants have played in 20 and won 8, the Brooklyn / Los Angeles Dodgers have appeared in 18 and won 6, and the Cincinnati Reds have appeared in 9 and won 5.
As of 2016, no team has won consecutive World Series championships since the New York Yankees in 1998, 1999, and 2000 -- the longest such drought in Major League Baseball history.
Until the formation of the American Association in 1882 as a second major league, the National Association of Professional Base Ball Players (1871 -- 1875) and then the National League (founded 1876) represented the top level of organized baseball in the United States. All championships were awarded to the team with the best record at the end of the season, without a postseason series being played. From 1884 to 1890, the National League and the American Association faced each other in a series of games at the end of the season to determine an overall champion. These series were disorganized in comparison to the modern World Series, with the terms arranged through negotiation of the owners of the championship teams beforehand. The number of games played ranged from as few as three in 1884 (Providence defeated New York three games to zero), to a high of fifteen in 1887 (Detroit beat St. Louis ten games to five). Both the 1885 and 1890 Series ended in ties, each team having won three games with one tie game.
The series was promoted and referred to as "The Championship of the United States '', "World 's Championship Series '', or "World 's Series '' for short. In his book Krakatoa: The Day the World Exploded: August 27, 1883, Simon Winchester mentions in passing that the World Series was named for the New York World newspaper, but this view is disputed.
The 19th - century competitions are, however, not officially recognized as part of World Series history by Major League Baseball, as it considers 19th - century baseball to be a prologue to the modern baseball era. Until about 1960, some sources treated the 19th - century Series on an equal basis with the post-19th - century series. After about 1930, however, many authorities list the start of the World Series in 1903 and discuss the earlier contests separately. (For example, the 1929 World Almanac and Book of Facts lists "Baseball World 's Championships 1884 -- 1928 '' in a single table, but the 1943 edition lists "Baseball World Championships 1903 -- 1942 ''.)
Following the collapse of the American Association after the 1891 season, the National League was again the only major league. The league championship was awarded in 1892 by a playoff between half - season champions. This scheme was abandoned after one season. Beginning in 1893 -- and continuing until divisional play was introduced in 1969 -- the pennant was awarded to the first - place club in the standings at the end of the season. For four seasons, 1894 -- 1897, the league champions played the runners - up in the post season championship series called the Temple Cup. A second attempt at this format was the Chronicle - Telegraph Cup series, which was played only once, in 1900.
In 1901, the American League was formed as a second major league. No championship series were played in 1901 or 1902 as the National and American Leagues fought each other for business supremacy (in 1902, the top teams instead opted to compete in a football championship).
After two years of bitter competition and player raiding (in 1902, the AL and NL champions even went so far as to challenge each other to a tournament in football after the end of the baseball season), the National and American Leagues made peace and, as part of the accord, several pairs of teams squared off for interleague exhibition games after the 1903 season. These series were arranged by the participating clubs, as the 1880s World 's Series matches had been. One of them matched the two pennant winners, Pittsburg Pirates of the NL and Boston Americans (later known as the Red Sox) of the AL; that one is known as the 1903 World Series. It had been arranged well in advance by the two owners, as both teams were league leaders by large margins. Boston upset Pittsburg by five games to three, winning with pitching depth behind Cy Young and Bill Dinneen and with the support of the band of Royal Rooters. The Series brought much civic pride to Boston and proved the new American League could beat the Nationals.
The 1904 Series, if it had been held, would have been between the AL 's Boston Americans (Boston Red Sox) and the NL 's New York Giants (now the San Francisco Giants). At that point there was no governing body for the World Series nor any requirement that a Series be played. Thus the Giants ' owner, John T. Brush, refused to allow his team to participate in such an event, citing the "inferiority '' of the upstart American League. John McGraw, the Giants ' manager, even went so far as to say that his Giants were already "world champions '' since they were the champions of the "only real major league ''. At the time of the announcement, their new cross-town rivals, the New York Highlanders (now the New York Yankees), were leading the AL, and the prospect of facing the Highlanders did not please Giants management. Boston won on the last day of the season, and the leagues had previously agreed to hold a World 's Championship Series in 1904, but it was not binding, and Brush stuck to his original decision. In addition to political reasons, Brush also factually cited the lack of rules under which money would be split, where games would be played, and how they would be operated and staffed.
During the winter of 1904 -- 1905, however, feeling the sting of press criticism, Brush had a change of heart and proposed what came to be known as the "Brush Rules '', under which the series were played subsequently. One rule was that player shares would come from a portion of the gate receipts for the first four games only. This was to discourage teams from "fixing '' early games in order to prolong the series and make more money. Receipts for later games would be split among the two clubs and the National Commission, the governing body for the sport, which was able to cover much of its annual operating expense from World Series revenue. Most importantly, the now - official and compulsory World 's Series matches were operated strictly by the National Commission itself, not by the participating clubs.
With the new rules in place and the National Commission in control, McGraw 's Giants made it to the 1905 Series, and beat the Philadelphia A 's four games to one. The Series was subsequently held annually, until 1994, when it was canceled due to a players ' strike.
The list of postseason rules evolved over time. In 1925, Brooklyn owner Charles Ebbets persuaded others to adopt as a permanent rule the 2 -- 3 -- 2 pattern used in 1924. Prior to 1924, the pattern had been to alternate by game or to make another arrangement convenient to both clubs. The 2 -- 3 -- 2 pattern has been used ever since save for the 1943 and 1945 World Series, which followed a 3 -- 4 pattern due to World War II travel restrictions; in 1944, the normal pattern was followed because both teams were based in the same home stadium.
Gambling and game - fixing had been a problem in professional baseball from the beginning; star pitcher Jim Devlin was banned for life in 1877, when the National League was just two years old. Baseball 's gambling problems came to a head in 1919, when eight players of the Chicago White Sox were alleged to have conspired to throw the 1919 World Series.
The Sox had won the Series in 1917 and were heavy favorites to beat the Cincinnati Reds in 1919, but first baseman Chick Gandil had other plans. Gandil, in collaboration with gambler Joseph "Sport '' Sullivan, approached his teammates and got six of them to agree to throw the Series: starting pitchers Eddie Cicotte and Lefty Williams, shortstop Swede Risberg, left fielder Shoeless Joe Jackson, center fielder Happy Felsch, and utility infielder Fred McMullin. Third baseman Buck Weaver knew of the fix but declined to participate, hitting .324 for the series from 11 hits and committing no errors in the field. The Sox, who were promised $100,000 for cooperating, proceeded to lose the Series in eight games, pitching poorly, hitting poorly and making many errors. Though he took the money, Jackson insisted to his death that he played to the best of his ability in the series (he was the best hitter in the series, including having hit the series ' only home run, but had markedly worse numbers in the games the White Sox lost).
During the Series, writer and humorist Ring Lardner had facetiously called the event the "World 's Serious ''. The Series turned out to indeed have serious consequences for the sport. After rumors circulated for nearly a year, the players were suspended in September 1920.
The "Black Sox '' were acquitted in a criminal conspiracy trial. However, baseball in the meantime had established the office of Commissioner in an effort to protect the game 's integrity, and the first commissioner, Kenesaw Mountain Landis, banned all of the players involved, including Weaver, for life. The White Sox would not win a World Series again until 2005.
The events of the 1919 Series, segueing into the "live ball '' era, marked a point in time of change of the fortunes of several teams. The two most prolific World Series winners to date, the New York Yankees and the St. Louis Cardinals, did not win their first championship until the 1920s; and three of the teams that were highly successful prior to 1920 (the Boston Red Sox, Chicago White Sox and the Chicago Cubs) went the rest of the 20th century without another World Series win. The Red Sox and White Sox finally won again in 2004 and 2005, respectively. The Cubs had to wait over a century (until the 2016 season) for their next trophy. They did not appear in the Fall Classic from 1945 until 2016, the longest drought of any MLB club.
The New York Yankees purchased Babe Ruth from the Boston Red Sox after the 1919 season, appeared in their first World Series two years later in 1921, and became frequent participants thereafter. Over a period of 45 years from 1920 to 1964, the Yankees played in 29 World Series championships, winning 20. The team 's dynasty reached its apex between 1947 and 1964, when the Yankees reached the World Series 15 times in eighteen years, helped by an agreement with the Kansas City Athletics (after that team moved from Philadelphia during 1954 -- 1955 offseason) whereby the teams made several deals advantageous to the Yankees (until ended by new Athletics ' owner Charles O. Finley). During that span, the Yankees played in all World Series except 1948, 1954, and 1959, winning ten. From 1949 to 1953, the Yankees won the World Series five years in a row; from 1936 -- 1939 the Yankees won four World Series Championships in a row. There are only two other occasions when a team has won at least three consecutive World Series: 1972 to 1974 by the Oakland Athletics, and 1998 to 2000 by the New York Yankees.
In an 18 - year span from 1947 to 1964, except for 1948 and 1959, the World Series was played in New York City, featuring at least one of the three teams located in New York at the time. The Dodgers and Giants moved to California after the 1957 season, leaving the Yankees as the lone team in the city until the Mets were enfranchised in 1962. During this period, other than 1948, 1954, and 1959, the Yankees represented the American League in the World Series.
In the years 1947, 1949, 1951 -- 1953, and 1955 -- 1956, both teams in the World Series were from New York, with the Yankees playing against either the Dodgers or Giants.
In 1958, the Brooklyn Dodgers and New York Giants took their long - time rivalry to the west coast, moving to Los Angeles and San Francisco, respectively, bringing Major League Baseball west of St. Louis and Kansas City.
The Dodgers were the first of the two clubs to contest a World Series on the west coast, defeating the Chicago White Sox in 1959. The 1962 Giants made the first California World Series appearance of that franchise, losing to the Yankees. The Dodgers made three World Series appearances in the 1960s: a 1963 win over the Yankees, a 1965 win over the Minnesota Twins and a 1966 loss to the Baltimore Orioles.
In 1968, the Kansas City Athletics relocated to Oakland, and the following year, 1969, the National League granted a franchise to San Diego as the San Diego Padres. The A 's became a powerful dynasty, winning three consecutive World Series from 1972 to 1974. In 1974, the A 's played the Dodgers in the first all - California World Series. The Padres have two World Series appearances (a 1984 loss to the Detroit Tigers, and a 1998 loss to the New York Yankees).
The Dodgers won two more World Series in the 1980s (1981, 1988). The A 's again went to three straight World Series, from 1988 -- 1990, winning once. 1988 and 1989 were all - California series as the A 's lost to the Dodgers and beat the Giants, respectively. The Giants have been in four World Series in the new millennium, losing in 2002 to the Anaheim Angels (the most - recent all - California series), and winning in 2010 (Rangers), 2012 (Tigers), and 2014 (Royals).
Prior to 1969, the National League and the American League each crowned its champion (the "pennant winner '') based on the best win - loss record at the end of the regular season.
A structured playoff series began in 1969, when both the National and American Leagues were reorganized into two divisions each, East and West. The two division winners within each league played each other in a best - of - five League Championship Series to determine who would advance to the World Series. In 1985, the format changed to best - of - seven.
The National League Championship Series (NLCS) and American League Championship Series (ALCS), since the expansion to best - of - seven, are always played in a 2 -- 3 -- 2 format: Games 1, 2, 6 and 7 are played in the stadium of the team that has home - field advantage, and Games 3, 4 and 5 are played in the stadium of the team that does not.
MLB night games started being held in 1935 by the Cincinnati Reds, but the World Series remained a strictly daytime event for years thereafter. In the final game of the 1949 World Series, a Series game was finished under lights for the first time. The first scheduled night World Series game was Game 4 of the 1971 World Series at Three Rivers Stadium. Afterward, World Series games were frequently scheduled at night, when television audiences were larger. Game 6 of the 1987 World Series was the last World Series game played in the daytime, indoors at the Metrodome in Minnesota. (The last World Series played outdoors during the day was the final game of the 1984 series in Detroit 's Tiger Stadium.)
During the seven - year period from 1972 to 1978, only three teams won the World Series: the Oakland Athletics from 1972 to 1974, Cincinnati Reds in 1975 and 1976, and New York Yankees in 1977 and 1978. This is the only time in World Series history in which three teams have won consecutive series in succession. This period was book - ended by World Championships for the Pittsburgh Pirates, in 1971 and 1979.
However, other teams also reached the World Series repeatedly in this era: the Baltimore Orioles made three consecutive appearances, in 1969 (losing to the "amazing '' eight - year - old franchise New York Mets), 1970 (beating the Reds in their first World Series appearance of the decade), and 1971 (losing to the Pittsburgh Pirates), and lost to the Pirates again in their 1979 appearance, while the Los Angeles Dodgers made back - to - back World Series appearances in 1977 and 1978 (both losses to the New York Yankees), having also lost to the cross-state rival Oakland Athletics in 1974.
Game 6 of the 1975 World Series is regarded by most as one of the greatest World Series games ever played. It found the Boston Red Sox winning in the 12th inning in Fenway Park, defeating the Cincinnati Reds to force a seventh and deciding game. The game is best remembered for its exciting lead changes, nail - biting turns of events, and a game - winning walk - off home run by Carlton Fisk, resulting in a 7 -- 6 Red Sox victory.
The National and American Leagues operated under essentially identical rules until 1973, when the American League adopted the designated hitter (DH) rule, allowing its teams to use another hitter to bat in place of the (usually) weak - hitting pitcher. The National League did not adopt the DH rule. This presented a problem for the World Series, whose two contestants would now be playing their regular - season games under different rules. From 1973 to 1975, the World Series did not include a DH. Starting in 1976, the World Series allowed for the use of a DH in even - numbered years only. (The Cincinnati Reds swept the 1976 Series in four games, using the same nine - man lineup in each contest. Dan Driessen was the Reds ' DH during the series, thereby becoming the National League 's first designated hitter.) Finally, in 1986, baseball adopted the current rule in which the DH is used for World Series games played in the AL champion 's park but not the NL champion 's. Thus, the DH rule 's use or non-use can affect the performance of the home team.
The 1984 Detroit Tigers gained distinction as just the third team in major league history (after the 1927 New York Yankees and 1955 Brooklyn Dodgers) to lead a season wire - to - wire, from opening day through their World Series victory. In the process, Tigers skipper Sparky Anderson became the first manager to win a World Series title in both leagues, having previously won in 1975 and 1976 with the Cincinnati Reds.
The 1988 World Series is remembered for the iconic home run by the Los Angeles Dodgers ' Kirk Gibson with two outs in the bottom of the ninth inning. The Dodgers were huge underdogs against the 104 - win Oakland Athletics, who had swept the Boston Red Sox in the ALCS. Baseball 's top relief pitcher, Dennis Eckersley, closed out all four games in the ALCS, and he appeared ready to do the same in Game 1 against a Dodgers team trailing 4 - 3 in the ninth. After getting the first two outs, Eckersley walked Mike Davis of the Dodgers, who were playing without Gibson, their best position player and the NL MVP. Gibson had injured himself in the NLCS and was expected to miss the entire World Series. Yet, despite not being able to walk without a noticeable limp, Gibson surprised all in attendance at Dodger Stadium (and all watching on TV) by pinch - hitting. After two quick strikes and then working the count full, Gibson hit a home run to right, inspiring iconic pronouncements by two legendary broadcasters calling the game, Vin Scully (on TV) and Jack Buck (on radio). On NBC, as Gibson limped around the bases, Scully famously exclaimed, "the impossible has happened! '' and on radio, Buck equally famously exclaimed, "I do n't believe what I just saw! '' Gibson 's home run set the tone for the series, as the Dodgers went on to beat the A 's 4 games to 1. The severity of Gibson 's injury prevented him from playing in any of the remaining games.
When the 1989 World Series began, it was notable chiefly for being the first ever World Series matchup between the two San Francisco Bay Area teams, the San Francisco Giants and Oakland Athletics. Oakland won the first two games at home, and the two teams crossed the bridge to San Francisco to play Game 3 on Tuesday, October 17. ABC 's broadcast of Game 3 began at 5 pm local time, approximately 30 minutes before the first pitch was scheduled. At 5: 04, while broadcasters Al Michaels and Tim McCarver were narrating highlights and the teams were warming up, the Loma Prieta earthquake occurred (having a surface - wave magnitude of 7.1 with an epicenter ten miles (16 km) northeast of Santa Cruz, California). The earthquake caused substantial property and economic damage in the Bay Area and killed 63 people. Television viewers saw the video signal deteriorate and heard Michaels say "I 'll tell you what, we 're having an earth -- '' before the feed from Candlestick Park was lost. Fans filing into the stadium saw Candlestick sway visibly during the quake. Television coverage later resumed, using backup generators, with Michaels becoming a news reporter on the unfolding disaster. Approximately 30 minutes after the earthquake, Commissioner Fay Vincent ordered the game to be postponed. Fans, workers, and the teams evacuated a blacked out (although still sunlit) Candlestick. Game 3 was finally played on October 27, and Oakland won that day and the next to complete a four - game sweep.
World Series games were contested outside of the United States for the first time in 1992, with the Toronto Blue Jays defeating the Atlanta Braves in six games. The World Series returned to Canada in 1993, with the Blue Jays victorious again, this time against the Philadelphia Phillies in six games. No other Series has featured a team from outside of the United States. Toronto is the only expansion team to win successive World Series titles. The 1993 World Series was also notable for being only the second championship concluded by a home run and the first concluded by a come - from - behind homer, after Joe Carter 's three - run shot in the bottom of the ninth inning sealed an 8 -- 6 Toronto win in Game 6. The first Series to end with a homer was the 1960 World Series, when Bill Mazeroski hit a ninth - inning solo shot in Game 7 to win the championship for the Pittsburgh Pirates.
In 1994, each league was restructured into three divisions, with the three division winners and the newly introduced wild card winner advancing to a best - of - five playoff round (the "division series ''), the National League Division Series (NLDS) and American League Division Series (ALDS). The team with the best league record is matched against the wild card team, unless they are in the same division, in which case, the team with the second - best record plays against the wild card winner. The remaining two division winners are pitted against each other. The winners of the series in the first round advance to the best - of - seven NLCS and ALCS. Due to a players ' strike, however, the NLDS and ALDS were not played until 1995. Beginning in 1998, home field advantage was given to the team with the better regular season record, with the exception that the Wild Card team can not get home - field advantage.
After the boycott of 1904, the World Series was played every year until 1994 despite World War I, the global influenza pandemic of 1918 -- 1919, the Great Depression of the 1930s, America 's involvement in World War II, and even an earthquake in the host cities of the 1989 World Series. A breakdown in collective bargaining led to a strike in August 1994 and the eventual cancellation of the rest of the season, including the playoffs.
As the labor talks began, baseball franchise owners demanded a salary cap in order to limit payrolls, the elimination of salary arbitration, and the right to retain free agent players by matching a competitor 's best offer. The Major League Baseball Players Association (MLBPA) refused to agree to limit payrolls, noting that the responsibility for high payrolls lay with those owners who were voluntarily offering contracts. One difficulty in reaching a settlement was the absence of a commissioner. When Fay Vincent was forced to resign in 1992, owners did not replace him, electing instead to make Milwaukee Brewers owner Bud Selig acting commissioner. Thus the commissioner, responsible for ensuring the integrity and protecting the welfare of the game, was an interested party rather than a neutral arbiter, and baseball headed into the 1994 work stoppage without an independent commissioner for the first time since the office was founded in 1920.
The previous collective bargaining agreement expired on December 31, 1993, and baseball began the 1994 season without a new agreement. Owners and players negotiated as the season progressed, but owners refused to give up the idea of a salary cap and players refused to accept one. On August 12, 1994, the players went on strike. After a month passed with no progress in the labor talks, Selig canceled the rest of the 1994 season and the postseason on September 14. The World Series was not played for the first time in 90 years. The Montreal Expos, now the Washington Nationals, were the best team in baseball at the time of the stoppage, with a record of 74 -- 40 (since their founding in 1969, the Expos / Nationals have never played in a World Series.)
The labor dispute lasted into the spring of 1995, with owners beginning spring training with replacement players. However, the MLBPA returned to work on April 2, 1995 after a federal judge, future U.S. Supreme Court justice Sonia Sotomayor, ruled that the owners had engaged in unfair labor practices. The season started on April 25 and the 1995 World Series was played as scheduled, with Atlanta beating Cleveland four games to two.
The 2001 World Series was the first World Series to end in November, due to the week - long delay in the regular season after the September 11 attacks. Game 4 had begun on Oct. 31 but went into extra innings and ended early on the morning of Nov. 1, the first time the Series had been played in November. Yankee shortstop Derek Jeter won the game with a 12th inning walk - off home run and was dubbed "Mr. November '' by elements of the media -- echoing the media 's designation of Reggie Jackson as "Mr. October '' for his slugging achievements during the 1977 World Series.
With the 2006 World Series victory by the St. Louis Cardinals, Tony La Russa became the second manager to win a World Series in both the American and National Leagues.
Prior to 2003, home - field advantage in the World Series alternated from year to year between the NL and AL. After the 2002 Major League Baseball All - Star Game ended in a tie, MLB decided to award home - field advantage in the World Series to the winner of the All - Star Game. Originally implemented as a two - year trial from 2003 to 2004, the practice was extended.
The American League won every All - Star Game from the inception of this change until 2010, and thus enjoyed home - field advantage from 2003 through 2009 (it also had home - field advantage in 2002 under the alternating schedule). From 2003 to 2010, the AL and NL each won the World Series four times, but none of those series went the full seven games. Since then, the 2011, 2014, and 2016 World Series have gone the full seven games.
This rule is subject to debate, with various writers feeling that home - field advantage should be decided based on the regular season records of the participants, not on an exhibition game played several months earlier. Some writers especially questioned the integrity of this rule after the 2014 All - Star Game, when St. Louis Cardinals pitcher Adam Wainwright suggested that he intentionally gave Derek Jeter some easy pitches to hit in the New York Yankees ' shortstop 's final All - Star appearance before he retired at the end of that season.
As Bob Ryan of The Boston Globe wrote in July 2015 about the rule:
"So now we have a game that 's not real baseball determining which league hosts Games 1, 2, 6, and 7 in the World Series. It 's not a game if pitchers throw one inning. It 's not a game if managers try to get everyone on a bloated roster into the game. It 's not a game if every franchise, no matter how wretched, has to put a player on the team... If the game is going to count, tell the managers to channel their inner Connie Mack and go for it. ''
However, in recent seasons home - field advantage has not decided the World Series: since 2014, the home team has not won the deciding game of a World Series.
The San Francisco Giants won the World Series in 2010, 2012, and 2014 while failing to qualify to play in the postseason in the intervening seasons.
The Texas Rangers were twice only one strike away from winning their first World Series title in 2011, but the St. Louis Cardinals scored late twice in Game 6 to force a Game 7.
The Kansas City Royals reached the World Series in 2014, which was their first appearance in the postseason since winning the series in 1985. At the time, it was the longest postseason drought in baseball. They lost in 7 games to the Giants. The following season, the Royals finished with the American League 's best record, and won a second consecutive American League pennant. They defeated the New York Mets in the World Series 4 - 1, capturing their first title in 30 years.
In 2016, the Chicago Cubs ended their 108 - year long drought without a World Series title by defeating the Cleveland Indians, rallying from a 3 -- 1 Series deficit in the process. That extended Cleveland 's World Series title drought to 68 years and counting -- the Indians last won the Series in 1948 -- now the longest title drought in the majors.
Beginning in 2017, home field advantage in the World Series will be awarded to the league champion team with the better regular season win - loss record. If both league champions have the same record, the second tie - breaker would be head - to - head record, and if that does not resolve it, the third tie - breaker would be best divisional record.
American League (AL) teams have won 64 of the 112 World Series played (57 %). The New York Yankees have won 27 titles, accounting for 24 % of all series played and 42 % of the wins by American League teams. The St. Louis Cardinals have won 11 World Series, accounting for 10 % of all series played and 23 % of the 48 National League victories. At least one New York team has been in 54 World Series (48 %) of Series played. When the first modern World Series was played in 1903, there were eight teams in each league. These 16 franchises, all of which are still in existence, have each won at least two World Series titles.
The number of teams was unchanged until 1961, with fourteen "expansion teams '' joining MLB since then. Twelve have played in a World Series (the Mariners and Expos / Nationals being the two exceptions). The expansion teams have won ten of the 22 Series (45 %) in which they have played, which is 9 % of all 112 series played since 1903. In 2015, the first World Series featuring only expansion teams was played between the Kansas City Royals and New York Mets.
When two teams share the same state or metropolitan area, fans often develop strong loyalties to one and antipathies towards the other, sometimes building on already - existing rivalries between cities or neighborhoods. Before the introduction of interleague play in 1997, the only opportunity for two teams playing in the same area but in different leagues to face each other in official competition would have been in a World Series.
The first city to host an entire World Series was Chicago in 1906, when the Chicago White Sox beat the Chicago Cubs in six games.
Fourteen "Subway Series '' have been played entirely within New York City, all including the American League 's New York Yankees. Thirteen of them matched the Yankees with either the New York Giants or the Brooklyn Dodgers of the National League. The initial instances occurred in 1921 and 1922, when the Giants beat the Yankees in consecutive World Series that were not technically "subway series '' since the teams shared the Polo Grounds as their home ballpark. The Yankees finally beat the Giants the following year, their first in their brand - new Yankee Stadium, and won the two teams ' three subsequent Fall Classic match - ups in 1936, 1937 and 1951. The Yankees faced Brooklyn seven times in October, winning their first five meetings in 1941, 1947, 1949, 1952 and 1953, before losing to the Dodgers in 1955, Brooklyn 's sole World Championship. The last Subway Series involving the original New York ballclubs came in 1956, when the Yankees again beat the Dodgers. The trio was separated in 1958 when the Dodgers and Giants moved to California (although the Yankees subsequently met and beat the now - San Francisco Giants in 1962, and played the now - Los Angeles Dodgers four times, losing to them in a four - game sweep in 1963, beating them back - to - back in 1977 and 1978 and losing to them in 1981). An all - New York Series did not recur until 2000, when the Yankees defeated the New York Mets in five games.
The last World Series played entirely in one ballpark was the 1944 "Streetcar Series '' between the St. Louis Cardinals and the St. Louis Browns. The Cardinals won in six games, all held in their shared home, Sportsman 's Park.
The 1989 World Series, sometimes called the "Bay Bridge Series '' or the "BART Series '' (after the connecting transit line), featured the Oakland Athletics and the San Francisco Giants, teams that play just across San Francisco Bay from each other. The series is most remembered for the major earthquake that struck the San Francisco Bay Area just before game 3 was scheduled to begin. The quake caused significant damage to both communities and severed the Bay Bridge that connects them, forcing the postponement of the series. Play resumed ten days later, and the A 's swept the Giants in four games (the earthquake disruption of the Series almost completely overshadowed the fact that the 1989 Series represented a resumption after many decades of the October rivalry between the Giants and the A 's dating back to the early years of the 20th Century, when the then - New York Giants had defeated the then - Philadelphia Athletics in 1905, and had lost to them in 1911 and again in 1913).
The historic rivalry between Northern and Southern California added to the interest in the Oakland Athletics - Los Angeles Dodgers series in 1974 and 1988 and in the San Francisco Giants ' series against the then - Anaheim Angels in 2002.
Other than the St. Louis World Series of 1944, the only postseason tournament held entirely within Missouri was the I - 70 Series in 1985 (named for the Interstate Highway connecting the two cities) between the St. Louis Cardinals and the Kansas City Royals, who won at home in the seventh game.
Going into the 2017 season, there had never been an in - state World Series between the teams in Ohio (Cleveland Indians and Cincinnati Reds), Florida (Tampa Bay Rays and Florida Marlins), Texas (Texas Rangers and Houston Astros -- who now both play in the American League since the Astros changed leagues in 2013, making any future joint World Series appearance an impossibility unless one of the teams switches leagues), or Pennsylvania (the Philadelphia Phillies and the Pittsburgh Pirates have been traditional National League rivals going back to the late 19th Century). Neither the Phillies nor the Pirates ever faced the Athletics in October during the latter team 's tenure in Philadelphia, through 1954. The Boston Red Sox never similarly faced the Braves while the latter team played in Boston through 1952. There also was never an all - Canada World Series between the Toronto Blue Jays and the former Montreal Expos, who never won a National League pennant while they played in that Canadian city from 1969 through 2004. The Expos became the Washington Nationals in 2005 -- raising the possibility of a potential future "I - 95 World Series '' between the National League team and the AL 's Baltimore Orioles, who play just 50 miles to the north of Washington. Finally, the Los Angeles and / or Anaheim Angels have never faced off in October against either the Dodgers or the San Diego Padres for bragging rights in Southern California, although all three of those teams have appeared in the World Series at various times.
At the time the first modern World Series began in 1903, each league had eight clubs, all of which survive today (although sometimes in a different city or with a new nickname), comprising the "original sixteen ''.
When the World Series was first broadcast on television in 1947, it was only televised to a few surrounding areas via coaxially inter-connected stations: New York City; Philadelphia; Schenectady / Albany, New York; and Washington, D.C. and its surrounding suburbs. In 1948, games in Boston were only seen in the Northeast, while games in Cleveland were only seen in the Midwest and Pittsburgh; the games were open to all channels with a network affiliation. In all, the 1948 World Series was televised to fans in Midwestern cities including Cleveland, Chicago, Detroit, Milwaukee, St. Louis, and Toledo. By 1949, World Series games could be seen east of the Mississippi River, and by 1950 they could be seen in most, but not all, of the country. 1951 marked the first time that the World Series was televised coast to coast, and 1955 marked the first time that it was televised in color.
^ *: Not currently broadcasting Major League Baseball.
^ * *: Per the current broadcast agreement, the World Series will be televised by Fox through 2021.
^ * * *: Gillette, which sponsored World Series telecasts exclusively from roughly 1947 to 1965 (prior to 1966, the Series announcers were chosen by the Gillette Company along with the Commissioner of Baseball and NBC), paid for airtime on DuMont 's owned - and - operated Pittsburgh affiliate, WDTV (now KDKA - TV) to air the World Series. In the meantime, Gillette also bought airtime on ABC, CBS, and NBC. More to the point, in some cities, the World Series was broadcast on three different stations at once.
^ * * * *: NBC was originally scheduled to televise the entire 1995 World Series; however, due to the cancellation of the 1994 Series (which had been slated for ABC, who last televised a World Series in 1989), coverage ended up being split between the two networks. Game 5 is, to date, the last Major League Baseball game to be telecast by ABC (had there been a Game 7, ABC would 've televised it). This was the only World Series to be produced under the "Baseball Network '' umbrella (a revenue sharing joint venture between Major League Baseball, ABC, and NBC). In July 1995, both networks announced that they would be pulling out of what was supposed to be a six - year - long venture. NBC would next cover the 1997 (NBC 's first entirely since 1988) and 1999 World Series over the course of a five - year - long contract, in which Fox would cover the World Series in even numbered years (1996, 1998, and 2000).
Despite its name, the World Series remains solely the championship of the major - league baseball teams in the United States and Canada, although MLB, its players, and North American media sometimes informally refer to World Series winners as "world champions of baseball ''.
The United States, Canada, and Mexico (Liga Méxicana de Béisbol, established 1925) were the only professional baseball countries until a few decades into the 20th century. The first Japanese professional baseball efforts began in 1920. The current Japanese leagues date from the late 1940s (after World War II). Various Latin American leagues also formed around that time.
By the 1990s, baseball was played at a highly skilled level in many countries. Reaching North America 's high - salary major leagues is the goal of many of the best players around the world, which gives a strong international flavor to the Series. Many talented players from Latin America, the Caribbean, the Pacific Rim, and elsewhere now play in the majors. One notable exception is Cuban citizens, because of the political tensions between the US and Cuba since 1959 (yet a number of Cuba 's finest ballplayers have still managed to defect to the United States over the past half - century to play in the American professional leagues). Japanese professional players also have a difficult time coming to the North American leagues. They become free agents only after nine years playing service in the NPB, although their Japanese teams may at any time "post '' them for bids from MLB teams, which commonly happens at the player 's request.
Several tournaments feature teams composed only of players from one country, similar to national teams in other sports. The World Baseball Classic, sponsored by Major League Baseball and sanctioned by the sport 's world governing body, the World Baseball Softball Confederation (WBSC), uses a format similar to the FIFA World Cup to promote competition between nations every four years. The WBSC has since added the Premier12, a tournament also involving national teams; the first event was held in 2015, and is planned to be held every four years (in the middle of the World Baseball Classic cycle). The World Baseball Classic is held in March and the Premier12 is held in November, allowing both events to feature top - level players from all nations. The predecessor to the WBSC as the sport 's international governing body, the International Baseball Federation, also sponsored a Baseball World Cup to crown a world champion. However, because the World Cup was held during the Northern Hemisphere summer, during the playing season of almost all top - level leagues, its teams did not feature the best talent from each nation. As a result, baseball fans paid little or no attention to the World Cup and generally disregarded its results. The Caribbean Series features competition among the league champions from Mexico, Puerto Rico, the Dominican Republic, and Venezuela but unlike the FIFA Club World Cup, there is no club competition that features champions from all professional leagues across the world.
Rooftop view of a 1903 World Series game in Boston
Game action in the 1906 Series in Chicago (the only all - Chicago World Series to date)
Bill Wambsganss completes his unassisted triple play in 1920
Washington 's Bucky Harris scores his home run in the fourth inning of Game 7 (October 10, 1924)
The Chicago Cubs celebrate winning the 2016 World Series, which ended the club 's 108 - year championship drought.
|
when does interest start on student loans canada | Student loans in Canada - wikipedia
Student loans in Canada help post-secondary students pay for their education in Canada. The federal government funds the Canada Student Loan Program (CSLP) and the provinces may fund their own programs or run in parallel with the CSLP. In addition, Canadian banks offer commercial loans targeted for students in professional programs.
Canadian citizens, permanent residents of Canada living in any province for over a year, and protected persons are normally eligible for loans provided by the federal government, through the CSLP, in addition to loans provided by their province of residence.
Loans issued to full - time students are interest free while a student is in full - time studies. Students receiving a Canada Student Loan (CSL) for the first time on or after August 1, 1995, are eligible for up to 340 weeks (~ 6.5 years) of interest - free assistance. Students in doctoral programs are eligible for an additional 60 weeks, up to 400 weeks (~ 7.5 years). Students with permanent disabilities and students who received their first CSL prior to August 1, 1995 are eligible for up to 520 weeks of assistance (10 years).
As the length of North American graduate degree programs often exceeds this 400 - week maximum, students considering graduate study are advised to think carefully before taking out student loans. For example, an honours BA from a Canadian university takes four years, assuming satisfactory progress. MA programs in Canada vary in length from 1 -- 3 years, with two years being the average minimum. A PhD takes, on average, 5 years to complete, although many students take significantly longer than this. Assuming a graduate student completes an honours BA (4 years), an MA (2 years), and a PhD (5 years), one can expect to be in university for at least 11 years. This is significantly longer than the 400 - week maximum allotted by the national student loan program, and graduate students can easily find themselves in a position where they no longer qualify for student loans. Whether in receipt of student loans or not, students in full - time study are not required to repay their student loans; however, interest begins to accumulate immediately upon reaching the lifetime limit: quoting directly from the NSLSC, "Once a lifetime limit has been reached, interest starts to accumulate on your loan. ''.
Student financial assistance is available for students in part - time studies. Beginning January 1, 2012, the Government of Canada eliminated interest on student loans while borrowers are in - study. Student loan borrowers begin repaying their student loans six months after they graduate or leave school, although interest begins accumulating right away. Grants may supplement loans to aid students who face particular barriers to accessing post-secondary education, such as students with permanent disabilities or students from low - income families.
Students must apply for the Canadian and provincial loans through their provincial government. The rules determining a student 's province of residence vary, but normally it is defined as the province in which the student has most recently lived for at least 12 consecutive months, not including any time spent as a full - time student at a post-secondary institution. In most cases, the province of residence is the province one lived in before becoming a post-secondary student.
Canada Student Loans of up to $210 per week of full - time study or 60 % of the student 's assessed need (the lesser of these) can be issued per loan year (August 1 -- July 31). Loans issued through provincial programs will normally provide students with enough funding to cover the balance of their assessed need. Part - time loans can be made, but a student can not be more than $10,000 in debt on part - time loans at any one time. All Canadian students were also eligible for the Canada Millennium Scholarship Foundation Bursary (CMS Grant) until the program ended in 2008. There are also other grants provided by students ' province of residence.
Prior to 1964, the national student loan program was known as the Dominion - Provincial Student Loan Program. This program was a matching - grant partnership between the federal and provincial governments. It was started in 1939 and ended with the start of the CSLP in 1964.
The Department of Human Resources and Social Development Canada describes the program 's history as follows:
The CSLP was created in 1964. Since its inception, the Program has supplemented the financial resources available to eligible students from other sources to assist in their pursuit of post-secondary education. Between 1964 and 1995, loans were provided by financial institutions to post-secondary students who were approved to receive financial assistance. The financial institutions also administered the loan repayment process. In return, the Government of Canada guaranteed each Canada Student Loan that was issued, by reimbursing the financial institution the full amount of loans that went into default.
In 1995, several important changes were made to Canada Student Loans. First, the Canada Student Financial Assistance Act was proclaimed, replacing the existing Canada Student Loans Act (which still remains in force to this day) reflecting the changing needs of the parties involved in the loan process, including the conferred responsibility of the collection of defaulted loans to the banks themselves. The Government of Canada developed a formalized "risk - shared '' agreement with several financial institutions, whereby the institution would assume responsibility for the possible risk of defaulted loans in return for a fixed payment from the Government which correlated with the amount of loans that were expected to be, or were, in default in each calendar year. During this period, the weekly federal loan amount was increased to a maximum of $165.
On July 31, 2000, the risk - shared arrangement between the Government of Canada and participating financial institutions came to an end. The Government of Canada now directly finances all new loans issued on or after August 1, 2000. The administration of Canada Student Loans has become the responsibility of the National Student Loans Service Centre (NSLSC). There are two divisions of the NSLSC, one to manage loans for students attending public institutions and the other to administer loans for students attending private institutions. Defaulted Canada Student Loans disbursed under this new regime are now collected by the Canada Revenue Agency which, by Order in Council dated August 1, 2005, became responsible for the collection of all debts due under programs administered by Human Resources and Social Development Canada.
Due to the close nature of the CSLP and the provincial student loan programs, the changes in 1995 and 2000 were largely mirrored by the provincial programs. As a result of these changes, students who attended school before and after these transition years may find that they have up to 6 different loans to manage (pre-1995 federal & provincial; 1995 - 2000 federal & provincial; and post-2000 federal & provincial). The extent to which this is possible depends largely on a student 's province of residence.
Most chartered banks in Canada have specific programs for students in professional programs (e.g., medicine) that can provide more funds than usual in the form of a line of credit, sometimes with lower interest rates as well. Students may also be eligible for government loans that are interest free while in school on top of this line of credit, as private loans do not count against government loans / grants.
The March 2011 federal budget announced a Canada Student Loan forgiveness programme for medical and nursing students to complement other health human resources strategies to expand the provision of primary health services. The programme is meant to encourage and support new family physicians, nurse practitioners and nurses to practise in underserved rural or remote communities of the country, including communities that provide health services to First Nations and Inuit populations.
The Canada Student Loan (sometimes referred to as the National Student Loan) is administered by National Student Loan Service Centre under contract to Human Resources and Social (Skills) Development Canada (HRSDC). Students have the choice of opting for a fixed interest rate of prime interest rate + 5 %, or a floating interest rate of prime interest rate + 2.5 %. Newfoundland and Labrador and Prince Edward Island were the only provinces where there was no interest on the provincial loan, but as of March 28, 2014, the government of Nova Scotia also eliminated interest for all graduates who entered repayment after Nov. 1, 2007.
Based on the HRSDC student loan calculator, and assuming an average prime interest rate of 4.5 % (as of December 2011, the rate was 5.5 %), a standard 10 - year (114 - month) repayment period, and a loan of $30,000:
- if the Floating Interest option is selected, monthly payments will be $361.02 (principal and interest), resulting in total payments of $41,156.77 ($30,000 principal + $11,156.77 interest) over the life of the repayment.
- if the Fixed Interest option is selected, monthly payments will be $400.50 (principal and interest), resulting in payments of $45,657.54 ($30,000 principal + $16,657.54 interest).
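These figures follow from the standard loan - amortization formula. The sketch below is only a rough check (it assumes monthly compounding, the 4.5 % average prime rate used in the example, and the 114 - month term; it is not the HRSDC calculator itself):

```python
def monthly_payment(principal, annual_rate, months):
    """Fixed monthly payment that retires the loan over the given number of months."""
    r = annual_rate / 12.0  # monthly interest rate
    return principal * r / (1.0 - (1.0 + r) ** -months)

prime = 0.045  # assumed average prime rate from the example above
for label, rate in (("floating (prime + 2.5%)", prime + 0.025),
                    ("fixed (prime + 5%)", prime + 0.05)):
    payment = monthly_payment(30_000, rate, 114)
    print(f"{label}: {payment:.2f}/month, {payment * 114:.2f} in total")
# Prints roughly 361.03/month (about 41,157 in total) and 400.51/month (about 45,658 in total),
# matching the $361.02 and $400.50 figures quoted above to within rounding.
```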
CSLP offers a number of programs to assist students who find themselves facing financial difficulty during repayment. Among these programs are:
|
where does while you were sleeping take place | While you were sleeping (film) - wikipedia
While You Were Sleeping is a 1995 American romantic comedy film directed by Jon Turteltaub and written by Daniel G. Sullivan and Fredric Lebow. It stars Sandra Bullock as Lucy, a Chicago Transit Authority token collector, and Bill Pullman as Jack, the brother of a man whose life she saves, along with Peter Gallagher as Peter, the man who is saved, Peter Boyle and Glynis Johns as members of Peter 's family, and Jack Warden as longtime family friend and neighbor.
Lucy Eleanor Moderatz (Sandra Bullock) is a lonely fare token collector for the Chicago Transit Authority, stationed at the Randolph / Wabash station. She has a secret crush on a handsome commuter named Peter Callaghan (Peter Gallagher), although they are complete strangers. On Christmas Day, she rescues him from the oncoming Chicago "L '' train after a group of muggers push him onto the tracks. He falls into a coma, and she accompanies him to the hospital, where a nurse overhears her musing aloud, "I was going to marry him. '' Misinterpreting her, the nurse tells his family that she is his fiancée.
At first she is too caught up in the panic to explain the truth. She winds up keeping the secret for a number of reasons: she is embarrassed, Peter 's grandmother Elsie (Glynis Johns) has a heart condition, and Lucy quickly comes to love being a part of Peter 's big, loving family. One night, thinking she is alone while visiting Peter, she confesses about her predicament. Peter 's godfather, Saul (Jack Warden), overhears the truth and later confronts her, but tells her he will keep her secret, because the accident has brought the family closer.
With no family and few friends, Lucy becomes so captivated with the quirky Callaghans and their unconditional love for her that she can not bring herself to hurt them by revealing that Peter does not even know her. She spends a belated Christmas with them and then meets Peter 's younger brother Jack (Bill Pullman), who is supposed to take over his father 's furniture business. He is suspicious of her at first, but he falls in love with her as they spend time together. They develop a close friendship and soon she falls in love with him as well.
After New Year 's Eve, Peter wakes up. He does not know Lucy, so it is assumed that he must have amnesia. She and Peter spend time together, and Saul persuades Peter to propose to her "again ''; she agrees even though she is in love with Jack. When Jack visits her the day before the wedding, she gives him a chance to change her mind, asking him if he can give her a reason not to marry Peter. He replies that he can not, leaving her disappointed.
On the day of the wedding, just as the priest begins the ceremony, Lucy finally confesses everything and tells the family she loves Jack rather than Peter. At this point, Peter 's real fiancée Ashley Bartlett Bacon (Ally Walker), who happens to be married herself, arrives and also demands the wedding be stopped. As the family argues, Lucy slips out unnoticed, unsure of her future.
Some time later, while Lucy is at work, Jack places an engagement ring in the token tray of her booth. She lets him into the booth, and with the entire Callaghan family watching, he proposes to her. In the last scenes of the film, they kiss at the end of their wedding, then leave on a CTA train for their honeymoon. She narrates that he fulfilled her dream of going to Florence, Italy, and explains that, when Peter asked when she fell in love with Jack, she replied, "it was while you were sleeping. ''
Julia Roberts was offered the role of Lucy Moderatz but turned it down.
The film was a tremendous success, grossing a total of $182,057,016 worldwide against an estimated $17,000,000 budget. It made $9,288,915 on its opening weekend of April 21 -- 23, 1995, and was the thirteenth - highest grosser of 1995 in the United States. The film has also been recognized on American Film Institute lists.
|
why are economists interested in measure of elasticity | Elasticity (economics) - wikipedia
In economics, elasticity is the measurement of how an economic variable responds to a change in another. It answers questions such as how much the quantity demanded of a good will change in response to a change in its price.
An elastic variable (with elasticity value greater than 1) is one which responds more than proportionally to changes in other variables. In contrast, an inelastic variable (with elasticity value less than 1) is one which changes less than proportionally in response to changes in other variables. A variable can have different values of its elasticity at different starting points: for example, the quantity of a good supplied by producers might be elastic at low prices but inelastic at higher prices, so that a rise from an initially low price might bring on a more - than - proportionate increase in quantity supplied while a rise from an initially high price might bring on a less - than - proportionate rise in quantity supplied.
Elasticity can be quantified as the ratio of the percentage change in one variable to the percentage change in another variable, when the latter variable has a causal influence on the former. A more precise definition is given in terms of differential calculus. It is a tool for measuring the responsiveness of one variable to changes in another, causative variable. Elasticity has the advantage of being a unitless ratio, independent of the type of quantities being varied. Frequently used elasticities include price elasticity of demand, price elasticity of supply, income elasticity of demand, elasticity of substitution between factors of production and elasticity of intertemporal substitution.
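As a minimal numerical illustration of this ratio definition (using the midpoint, or "arc '', convention for the percentage changes; the numbers are invented for the example):

```python
def arc_elasticity(q0, q1, p0, p1):
    """Percent change in quantity divided by percent change in price,
    each measured against the midpoint of the two observations."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2.0)
    pct_p = (p1 - p0) / ((p0 + p1) / 2.0)
    return pct_q / pct_p

# Price rises from 10 to 11 while quantity demanded falls from 100 to 92:
e = arc_elasticity(100.0, 92.0, 10.0, 11.0)
print(f"{e:.3f}")  # -0.875 -> |e| < 1, so demand is inelastic over this range
```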
Elasticity is one of the most important concepts in neoclassical economic theory. It is useful in understanding the incidence of indirect taxation, marginal concepts as they relate to the theory of the firm, and distribution of wealth and different types of goods as they relate to the theory of consumer choice. Elasticity is also crucially important in any discussion of welfare distribution, in particular consumer surplus, producer surplus, or government surplus.
In empirical work an elasticity is the estimated coefficient in a linear regression equation where both the dependent variable and the independent variable are in natural logs. Elasticity is a popular tool among empiricists because it is independent of units and thus simplifies data analysis.
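A small synthetic example of this empirical convention (the data below are generated from a constant - elasticity demand curve purely for illustration, so the regression recovers the elasticity by construction):

```python
import numpy as np

prices = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
quantities = 500.0 * prices ** -1.2  # hypothetical demand curve with elasticity -1.2

# Regress log(quantity) on log(price); the slope is the estimated elasticity.
slope, intercept = np.polyfit(np.log(prices), np.log(quantities), 1)
print(round(slope, 2))  # -1.2
```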
A major study of the price elasticity of supply and the price elasticity of demand for US products was undertaken by Joshua Levy and Trevor Pollock in the late 1960s.
The price elasticity of supply measures how the amount of a good that a supplier wishes to supply changes in response to a change in price. In a manner analogous to the price elasticity of demand, it captures the extent of horizontal movement along the supply curve relative to the extent of vertical movement. If the price elasticity of supply is zero the supply of a good supplied is "totally inelastic '' and the quantity supplied is fixed.
Elasticity of scale or output elasticity measures the percentage change in output induced by a collective percentage change in the usages of all inputs. A production function or process is said to exhibit constant returns to scale if a percentage change in inputs results in an equal percentage change in output (an elasticity equal to 1). It exhibits increasing returns to scale if a percentage change in inputs results in a greater percentage change in output (an elasticity greater than 1). The definition of decreasing returns to scale is analogous.
Price elasticity of demand is a measure used to show the responsiveness, or elasticity, of the quantity demanded of a good or service to a change in its price. More precisely, it gives the percentage change in quantity demanded in response to a one percent change in price (ceteris paribus, i.e. holding constant all the other determinants of demand, such as income).
Cross-price elasticity of demand is a measure of the responsiveness of the demand for one product to changes in the price of a different product. It is the ratio of percentage change in the former to the percentage change in the latter. If it is positive, the goods are called substitutes because a rise in the price of the other good causes consumers to substitute away from buying as much of the other good as before and into buying more of this good. If it is negative, the goods are called complements.
The concept of elasticity has an extraordinarily wide range of applications in economics. In particular, an understanding of elasticity is fundamental in understanding the response of supply and demand in a market.
Common uses of elasticity include analysing the incidence of taxation, the responsiveness of supply and demand to price changes, and the distribution of consumer and producer surplus.
In some cases the discrete (non-infinitesimal) arc elasticity is used instead. In other cases, such as modified duration in bond trading, a percentage change in output is divided by a unit (not percentage) change in input, yielding a semi-elasticity instead.
|
how do points on your driver license work | Point system (driving) - wikipedia
A penalty point or demerit point system is one in which a driver 's licensing authority, police force, or other organization issues cumulative demerits, or points to drivers on conviction for road traffic offenses. Points may either be added or subtracted, depending on the particular system in use. A major offense may lead to more than the maximum allowed points being issued. Points are typically applied after driving offenses are committed, and cancelled a defined time, typically a few years, afterwards, or after other conditions are met; if the total exceeds a specified limit the offender may be disqualified from driving for a time, or the driving license may be revoked. Fines and other penalties may be applied additionally, either for an offense or after a certain number of points have been accumulated.
The primary purpose of such point systems is to identify, deter, and penalize repeat offenders of traffic laws, while streamlining the legal process. Germany introduced a demerit point system in 1974, and one was introduced in New York at about the same time.
In jurisdictions which use a point system, the police or licensing authorities (as specified by law) maintain, for each driver, a driving score -- typically an integer number specified in points. Traffic offenses, such as speeding or disobeying traffic signals, are each assigned a certain number of points, and when a driver is determined to be guilty of a particular offense (by whatever means appropriate in the region 's legal system), the corresponding number of points are added to the driver 's total. When the driver 's total exceeds a certain threshold, the driver may face additional penalties, be required to attend safety classes or driver training, be subject to re-examination, or lose his / her driving privileges.
The threshold (s) to determine additional penalties may vary based on the driver 's experience level, prior driving record, age, educational level attained, and other factors. In particular, it is common to set a lower threshold for young, inexperienced motorists.
In some jurisdictions, points can also be added if the driver is found to be significantly at fault in a traffic accident. Points can be removed from a driver 's score by the simple passage of time, by a period of time with no violations or accidents, or by the driver 's completion of additional drivers ' training or traffic safety training.
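Taken together, the accumulation and expiry rules amount to straightforward bookkeeping. The sketch below uses a hypothetical 12 - point limit over a rolling three - year window; it is not the rule of any particular jurisdiction:

```python
from datetime import date, timedelta

POINT_LIMIT = 12  # hypothetical suspension threshold
WINDOW = timedelta(days=3 * 365)  # hypothetical rolling three-year look-back

def active_points(offences, today):
    """Total points from offences that still fall inside the look-back window."""
    return sum(points for when, points in offences if today - when <= WINDOW)

offences = [(date(2021, 5, 1), 3), (date(2022, 8, 17), 4), (date(2023, 2, 9), 6)]
total = active_points(offences, date(2023, 6, 1))
print(total, "-> liable to suspension" if total >= POINT_LIMIT else "-> under the limit")
# 13 -> liable to suspension
```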
Major traffic offenses, such as hit and run or drunk driving may or may not be handled within the point system. Such offenses often carry a mandatory suspension of driving privileges, and may incur penalties such as imprisonment.
Traffic laws are the responsibility of the State and Territory Governments. Demerit points are used in all states and territories, and road authorities share information about interstate offenses.
In all states, drivers holding a full, unrestricted license will be disqualified from driving after accumulating 12 demerit points or more within a three - year period, except in New South Wales, where drivers are allowed 13 points in a three - year period. Those who can prove they are professional drivers are allowed an additional point. The minimum suspension period is three months, plus one further month for every extra four demerit points beyond the license 's limit, with a cap in most states of five months (for 8 points or more over the suspension trigger; e.g. 20 points or more on a full license). As an alternative to initially accepting the suspension, a driver can apply for a "good behavior '' period of 12 months. In most states, drivers under a good behavior period who accumulate one or two further points have their license suspended for double the original period (Victoria does not allow any further offenses).
Most states also provide for immediate suspension of a license, instead of or in addition to demerit points, in certain extreme circumstances. These generally include offenses for driving under the influence of alcohol or other drugs, or for greatly excessive speed.
Provisional licence holders are allowed different numbers of demerit points over the lifetime of their licence, depending on their licence class, before being suspended from driving for three months. Holders of a P1 licence, which lasts 12 -- 18 months (but can be renewed), are suspended after accumulating 4 points, while P2 licence holders are suspended after 7 points in a 24 - to 30 - month period (but can be renewed). Speeding offences for provisional licence holders are set to a minimum of four points, meaning that P1 holders will be suspended after one speeding offence of any speed.
During holiday periods, double demerit points apply for speeding, seatbelt and helmet - related offences. Offences in school zones attract more demerit points than in other areas. Automatic suspensions apply for all drink - and drug - driving offences, as well as speeding by more than 30 km / h.
Victoria introduced a demerit points suspension scheme in 1970. Learner and probationary drivers are sent a combined option - suspension notice for accumulating 5 points or more over any 12 - month period. An option notice allows for either a 12 - month bond or a three - month minimum suspension. If a driver breaches the bond by incurring one demerit point in the 12 - month period, their licence is suspended for a minimum of six months. A limit of 12 points in any three - year period with the same option applies for full licence holders. The list of traffic offences and their respective points is in schedule 3 of the Road Safety (Drivers) Regulations 2009.
In Victoria, drunk - driving offences only result in immediate licence cancellation for unrestricted drivers with a blood alcohol concentration of 0.05 or higher. Readings lower than this have the option of a 10 - point penalty being imposed of being taken immediately to court; this option still results in a minimum four - month suspension for novice drivers. Automatic suspensions apply for higher level charges, and re-licensing may require an order to install an interlocking device onto the vehicle. Automatic suspension periods of at least 1 month also apply for speeding by greater than 25 km / h over the speed limit, or any speed greater than 130 km / h.
In South Australia, if a traffic offence is committed against the Road Traffic Act 1961 or the Australian Road Rules 1999, demerit points may be incurred against a driver 's licence. The number of points incurred depends on the offence and how likely it is to cause a crash. If 12 or more demerit points are accumulated in any three - year period, a driver will be disqualified from holding or obtaining a driver 's licence or permit. Each three - year period is calculated based on the dates the offences were committed.
Demerit points are incurred whether the offence is committed in South Australia or interstate.
A demerit points scheme was introduced into the Northern Territory on 1 September 2007. Offences that accrue points include speeding, failing to obey a red traffic light or level crossing signal, failing to wear a seatbelt, drink driving, using a mobile phone, failure to display L or P plates, street racing, burnouts and causing damage.
Learner and provisional drivers are subject to suspension for accumulating 5 points or more over a 12 - month period. The three - year limit of 12 points still applies.
In Queensland, provisional or learner drivers are entitled to accumulate 4 demerit points, and open licence holders 12 demerit points, without it affecting their licence. A driver who exceeds their demerit point threshold may elect to lose their licence for a period of 3 months or elect a good driving behaviour period, which allows them to incur no more than one further demerit point without losing their licence. If, whilst on the good driving behaviour period, a driver incurs more than one demerit point, then they will lose their licence for a minimum of 6 months unless a Magistrates Court grants a special hardship licence.
Bulgaria has implemented a penalty point system with a total of 34 points, introduced in 1999.
Denmark has a penalty point system that penalizes drivers with a klip ("cut / stamp '') for certain traffic violations. The term klip refers to a klippekort ("punch card ticket ''). If a driver with a non-probationary license accumulates three penalty points, then police conditionally suspend the driver 's license. To get a new license, suspended drivers must pass both written and practical drivers examinations. Drivers who have been suspended and first - time drivers must avoid collecting two penalty points for a three - year probationary period; if the driver has not accumulated any penalty points, then the driver is allowed an extra penalty point so they can have three maximum. Penalty points are deleted from the police database three years after they were assessed. Police can also unconditionally ban people from driving.
In England and Wales, penalty points are given by courts for some of the traffic offences listed in Schedule 2 of the Road Traffic Offenders Act 1988. Where points are given, the minimum is 2 points for some lesser offences and the maximum 11 points for the most serious offences; some incidents can result in points being given for multiple offences or for multiple occurrences of the same offence (typically for having more than one defective tyre); the majority of applicable offences attract 3 or more penalty points. The giving of penalty points is obligatory for most applicable offences, but the number of points, and the giving of points for some of several offences, can be discretionary. Points remain on the driver 's record, and an endorsement is made upon the driver 's licence, for four years from conviction (eleven years for drink - and drug - related convictions). Twelve points on the licence within three years make the driver liable to disqualification; however this is not automatic, but must be decided by a law court.
Since the introduction of the Road Traffic (New Drivers) Act 1995, if a person in the two years after passing their first practical test accumulates six penalty points, their licence is revoked by the DVLA, and the driver has to reapply and pay for the provisional licence, drive as a learner, and pay for and take the theory and practical tests before receiving a full licence again. In the case of egregious offences, the court may order the driver to pass an extended driving test before the licence is returned, even beyond the two - year probation period.
Since 11 October 2004 there has been mutual recognition of driver disqualification arising from the penalty points given in England and Wales (and / or Scotland) with Northern Ireland; before that date disqualification in England and Wales would only have extended to Scotland by virtue of the driver registration system covering only Great Britain.
The driver registration system is separate from that of Great Britain with different laws covering penalty points and the offences to which they apply. In other respects the application of the system is similar to that in England and Wales. Offences to which penalty points apply are indicated in Schedule 1 of the Road Traffic Offenders (Northern Ireland) Order 1996.
Road traffic laws are mostly shared with, or similar to those of, England and Wales, although Scotland is a separate jurisdiction. The driver registration system currently covers all of Great Britain and the Road Traffic Offenders Act 1988 currently governs the penalty points system in Scotland. The main differences in the penalty points provisions of the 1988 Act are the theft and homicide offences attracting penalty points indicated in Schedule 2 Part II ("Other Offences '') which are not common between Scots Law and English Law.
The Federal Motor Transport Authority (Kraftfahrt - Bundesamt), located in Flensburg, operates an 8 - point system for traffic offences. This system was introduced in May 2014, replacing the previous 18 - point system that dated back to 1974. Points expire after 2.5 to 10 years, depending on the type and severity of each offence. Under certain circumstances points can be reduced by attending formal training events. Obtaining eight or more points results in revocation of the driving licence; once revoked, the licence will only be reinstated after a psychological assessment following the ban. Drivers can obtain information about their own points at any time free of charge.
In the Republic of Ireland, twelve points accrued results in six months ' disqualification. 38 regulatory offences notified by post incur 1 - 2 point penalties on payment of a fine. 10 more serious offences require a mandatory court appearance and incur 3 - 5 point penalties. The most serious offences are outside the penalty point system and incur automatic driving bans, and in some cases imprisonment.
In Italy the driver has 20 points by default, and receives a bonus of 2 points for every 2 years of correct behavior, with a maximum of 30 points.
Each traffic violation incurs a specific point penalty (for example, ignoring a traffic light involves a penalty of 6 points). If the driver loses all points, the driving license is revoked.
In the case of a second alcohol - related offence within 2 years, the driving license will be revoked.
A suspension is effective from when the driver is personally served with the suspension notice and they must surrender their driving license to the person giving them the notice.
Since March 30, 2002, the Netherlands has had a point system for starting drivers, covering the first 5 years after a driver passes the driving test (or 7 years if the test was passed before reaching the age of 18). A driver reaching 2 points in 5 years will lose the driving licence and has to pass a driving test again in order to regain it. On October 1, 2014 this limit was lowered from 3 to 2 points. Drivers receive a point for certain serious violations.
Some of these violations could also directly result in loss of the licence; however, when a driver has 2 points, the licence is automatically revoked and a driving test has to be passed again, whereas normally the violation would only result in the licence being suspended for several months. The effectiveness of the system has been questioned in the Dutch media, which reported that points were being given but not always correctly registered.
The system is called "prikkbelastning '' with prikk (er) meaning point (s). Points are assessed to a driver 's license for traffic violations which do not by themselves result in immediate revocation of the license.
After July 1, 2011, the normal penalty for most traffic violations, such as failing to yield or failing to stop at red lights, is three dots in addition to the fine. Speeding violations of between 10 and 15 km / h over the limit (where the speed limit is 60 km / h or less), or between 15 and 20 km / h over (where the speed limit is 70 km / h or more), result in two dots; no dots are assessed for lesser speeding violations. Young drivers between 18 and 20 are penalized with twice the number of dots.
A driver reaching 8 dots in three years loses his or her driving license for 6 months. Each dot is deleted when three years have passed since the violation took place. When the driving privileges are restored after the six - month ban, the dots which caused the suspension are deleted.
When a driver accumulates 15 or more points within a two - year period, their licence is automatically suspended for one month.
Ontario uses a 15 - point system where points are "added '' to a driver 's record following a conviction, though Ontario 's point system is unrelated to safe driving behaviour (a lone driver using a high - occupancy vehicle lane in Ontario will earn three demerit points).
Ontario drivers guilty of driving offences in other Canadian provinces, as well as the States of New York and Michigan, will see demerit points added to their driving record just as if the offence happened in Ontario.
The point system is applied in different ways, or not at all, in different states. If a red light running traffic violation is captured by red light camera, no points are assessed. Aspects of a motorist 's driving record (including points) may be reported to insurance companies, who may use them in determining what rate to charge the motorist, and whether to renew or cancel an insurance policy.
Arizona uses a point system where your license will be suspended if 8 points are accumulated in one year. Offenses that lead to this are the following:
There are other offenses that can count toward this (e.g. HOV lane misuse is a 3 - point offense)
Drivers who accumulate tickets for moving violations may be considered negligent operators and can lose their right to drive. Major offenses, such as hit and run, reckless driving, and driving under the influence, earn 2 points and remain on record for 13 years. Less serious offenses earn 1 point, which remains on record for 39 months (3 years, 3 months).
A driver is considered negligent if they accumulate:
Suspension or Revocation by Department of Motor Vehicles (DMV)
Negligent drivers can be put on probation for one year (including a six - month suspension) or lose their privilege to drive. At the end of the suspension or revocation period, drivers need to re-apply for a license to drive.
DMV will revoke a license after conviction for hit - and - run or reckless driving.
Suspension by Judge
A judge may suspend license following conviction for:
When a driver is cited for a traffic violation, the judge may offer the driver the opportunity to attend a Traffic Violator School; this may include an online traffic school if the court allows it. Drivers may participate once in any 18 - month period to have a citation dismissed from their driving record this way. Upon dismissal of the citation, all record of the citation is removed and no points are accumulated.
Regardless of the number of points accumulated, many serious offenses involving a vehicle are punishable by heavy fines or imprisonment.
Colorado uses an accumulating point system according to the Colorado Division of Motor Vehicles Point Schedule. Suspension of driving privileges can result from as few as 6 points in 12 months by a driver under 18 years old. Points remain on the driver 's motor vehicle record for 7 years. Some motor vehicle offenses carry 12 points per incident, which could result in immediate suspension of the drivers license. Multiple traffic violation convictions can also result in a suspension of the drivers license if a sufficient number of points are accumulated during a 12 - or 24 - month period.
Florida uses a point system similar to that of Colorado. The Florida Department of Highway Safety and Motor Vehicles is the department responsible for the issuance of driver 's licenses in the state and also tracks points issued to drivers who are licensed within the state. The following point values are assigned for these infractions.
Speeding
Speeding Fines are doubled when the infraction occurs within an active school zone or a construction zone.
Moving Violations
Any person who collects a certain number of points within a given time frame will have their license automatically revoked by the state for the length of time listed.
Any driver under the age of 18 who accumulates six or more points within a 12 - month period is automatically restricted for one year to driving for business purposes only. If additional points are accumulated the restriction is extended for 90 days for every additional point received.
If a driver license was suspended in the state of Florida for points or as a habitual (but not DUI) traffic offender, or by court order, the holder must complete an advanced driver improvement course before driving privileges are reinstated.
Points issued against a driver 's license in Florida remain on the license for at least 10 years.
The State of Florida issues its citizens points against their driver 's license for infractions occurring anywhere in the United States.
In Massachusetts, the point system is known as the Safe Driver Insurance Plan (SDIP). It encourages safe driving with lower premiums for drivers who do not cause accidents or commit traffic violations, and by ensuring that high - risk drivers pay a greater share of insurance costs. The points are accumulated over a six - year period, and reduced for sustained periods of safe driving.
The Motor Vehicle Commission of New Jersey has a point system. If the motorist receives 6 points or more within a period of three years, he / she will be forced to pay a surcharge annually for three years, which does include court fees and other penalties. If 12 points or more are accumulated on the motorist 's license, then his / her license will be suspended. Other offenses that lead to automatic suspension of the motorist 's license are the following:
The points range from 2 - 8 points, depending on the severity of the offense. Red light camera violations are not worth any points. Motorists can deduct points from their driving records: 3 points may be deducted one year after the motorist 's last moving violation, provided there have been no violations for at least one year; the motorist must also complete an approved driver improvement program. 2 points may be deducted if the motorist completes a defensive driving course; however, this reduction may be received only once every five years.
New York Statutes has a point system; after 11 points or 3 speeding tickets in 18 months, a driver 's privileges are subject to suspension, with the possibility of requesting a review hearing. Points are counted from the date of the incident (usually the date of the ticket) rather than the date of conviction. For out - of - state offenses, New York State Department of Motor Vehicles does not record point violations, with the exception of violations from Quebec and Ontario.
North Carolina operates two parallel point systems: one for DMV license suspension purposes and one for insurance purposes.
The DMV point system assigns 2 to 4 points upon conviction or an admission of guilt for most moving violations; non-moving violations carry no points. A driver 's license is suspended for 60 days on the first suspension if twelve points are assessed against the license within a three - year period. Serious offenses, such as DWI and excessive speeding (more than 15 mph over the limit at a travelled speed of greater than 55 mph), result in an immediate suspension on conviction. Points are not assessed for up to two granted Prayers for Judgment Continued (PJC) within a five - year period, though some serious offenses (such as DUI, passing a stopped school bus, and speeding in excess of 25 mph over the posted speed limit) are ineligible for a PJC.
The insurance point system assigns points differently, assigning points to incidences of at - fault accidents and moving violations. Rather than using the points for a license suspension, the points lead to insurance surcharges of approximately 25 - 35 % per point assessed. Notably, points are assessed for insurance purposes even if the license is suspended. Only points within the three years preceding the policy purchase date are considered, and a single PJC per household within the three - year period does not result in points assigned.
Incidents from out - of - state are treated as though they occurred in North Carolina for point assessment purposes.
The State of Ohio institutes the following points system. Any Ohio driver convicted of a traffic violation is assessed a specific number of penalty points according to the type of violation. Should that driver be convicted of a second or subsequent offense within two years after the first violation, the point assessment for the new violation is added to the previous total. The number of penalty points given to a violator is assessed by the court system. Following is a schedule of point assessments for specific violations:

Six - Point Violations
1. Homicide by vehicle
2. Operating a vehicle while under the influence of alcohol and / or any drug of abuse
3. Failure to stop and disclose identity at the scene of a collision
4. Willingly fleeing or eluding a law enforcement officer
5. Racing
6. Operating a vehicle without the consent of the owner
7. Using a vehicle in the commission of a felony, or committing any crime punishable as a felony under Ohio motor vehicle laws

Four - Point Violations
1. Willful or wanton disregard of the safety of persons or property

Two - Point Violations
1. All moving violations and some speed offenses
2. Operating a motor vehicle in violation of a restriction imposed by the Registrar of the Bureau of Motor Vehicles
A speeding violation may result in four points, two points or no points depending on the speed limit in effect and the number of miles per hour (mph) by which the speed limit was exceeded: Exceeding any speed limit by 30 mph or more results in four points. If the speed limit is 55 mph or more, exceeding the limit by more than 10 but less than 30 mph results in two points. If the speed limit is less than 55 mph, exceeding the limit by more than five but less than 30 mph results in two points. Exceeding any speed limit in an amount less than stated above results in no points.
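The speeding schedule above is essentially a small conditional rule; the following Python sketch expresses it directly. It is illustrative only: the function name is invented, and boundary cases (exactly 5, 10 or 30 mph over the limit) follow a plain reading of the prose rather than the statute itself.

```python
def ohio_speeding_points(speed_limit_mph, actual_speed_mph):
    """Points for a speeding conviction under the schedule described above.

    30 mph or more over any limit: 4 points. Otherwise, 2 points if more
    than 10 mph over a limit of 55 mph or more, or more than 5 mph over a
    limit below 55 mph; anything less: no points.
    """
    over = actual_speed_mph - speed_limit_mph
    if over >= 30:
        return 4
    threshold = 10 if speed_limit_mph >= 55 else 5
    return 2 if over > threshold else 0

print(ohio_speeding_points(65, 80))  # 2 (15 mph over on a 65 mph road)
print(ohio_speeding_points(35, 70))  # 4 (35 mph over)
```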
A driver who has accumulated six points in a two - year period will receive a letter from the Registrar of Motor Vehicles warning that the law provides the following penalties for drivers accumulating 12 or more points in a two - year period:
1. Driving privileges will be suspended for six months.
2. Proof of financial responsibility must be filed with the Bureau of Motor Vehicles and maintained for three to five years.
3. After the suspension is served, a remedial driving course approved by the director of the Ohio Department of Public Safety must be taken. The course must include a minimum of 25 percent of the number of classroom hours devoted to instruction on driver attitude.
4. A reinstatement fee of $40 must be paid.
5. The complete driver exam must be taken.
A person who has accumulated at least two but no more than 11 points for traffic violations may earn a two - point credit toward his or her driving record by completing an approved remedial driving course. Senate Bill 123, effective January 1, 2004, permits individuals to enroll in remedial driving classes in order to receive a two - point credit, up to five times in a lifetime, once every three years.
The Department of Traffic of Pennsylvania has a point system that follows:
If the motorist accumulates six or more points on his / her license, the motorist is in danger of losing his / her license. If the motorist is under 18 years of age and has 6 points or more on his / her license, or receives a ticket for speeding 26 miles over the posted speed limit, then the license will be suspended. The first suspension period lasts 90 days; all subsequent suspensions last 120 days.
The points range from 2 - 5 points, depending on the severity of the offense. The Pennsylvania DOT has the right to immediately suspend the motorist 's license if the following occurs:
The suspension / point system is as follows in the state of Pennsylvania:
In South Carolina, if a motorist has six or more points on his / her driving record, a warning letter will be sent to the motorist 's home address. If the motorist accumulates 12 or more points, then the license will be suspended. Motorists may reduce their points by taking a Defensive Driving Course. This course can not be taken online and must be taken in the state of South Carolina. In addition, the course must be taken after the motorist has been assessed points on his / her license; point reductions may be made only once within a three - year period. If the motorist 's license is in danger of being suspended, the course must be taken prior to the start of the suspension. The points range from 2 - 6 points, depending on the severity of the offense. If a motorist receives a ticket for a DUI, then the license is automatically suspended.
In Texas most moving violations are worth two points, but three points are assessed in the case that an accident was caused. A license can not be suspended as a result of point accumulation, however; instead, after six points have been accumulated, the driver must pay a "Driver Responsibility Surcharge '' of $100 plus $25 per additional point each year that the license has six or more points recorded. Other convictions carry penalties that remain on the license for three years after conviction, such as a DWI conviction ($1000 -- $2000), driving without a license ($100), or driving without insurance ($250). The license is suspended if the surcharges are not paid. Points clear from the license after three years, but the actual convictions clear from the record after five years, except for DWI convictions, which never expire.
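The surcharge arithmetic described above can be sketched in a few lines of Python. The function name is hypothetical and only the figures given in the paragraph ($100 once six points are reached, plus $25 per additional point) are modeled; other surcharges and any later legislative changes to the program are ignored.

```python
def texas_point_surcharge(points_on_record):
    """Annual Driver Responsibility Surcharge for accumulated points, as
    described above: $0 below six points, otherwise $100 plus $25 for each
    point beyond six. Illustrative sketch only."""
    if points_on_record < 6:
        return 0
    return 100 + 25 * (points_on_record - 6)

print(texas_point_surcharge(6))  # 100
print(texas_point_surcharge(9))  # 175
```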
In Brazil, all traffic violations incur a certain number of demerit points, depending on their severity, according to the 1997 Brazilian Traffic Code. If a driver accumulates more than 20 points (5 points for provisional drivers), the driving license is suspended and the driver has to take a traffic education course in order to regain the right (privilege) to drive. However, some infractions incur immediate license suspension regardless of the current point tally, such as drunk driving, engaging in street racing and others. It is also notable that many offenses that only apply to pedestrians also incur demerit points.
Demerit points expire a year after the date of the violation.
The following jurisdictions also apply point systems:
|
breakfast at tiffany's she said i think i | Breakfast at Tiffany 's (song) - Wikipedia
"Breakfast at Tiffany 's '' is a 1995 song recorded by American alternative rock band Deep Blue Something. Originally appearing on the album 11th Song, it was later re-recorded and released on their album Home. It was the band 's only hit, peaking at number five on the Billboard Hot 100. Outside the United States, the song topped the charts in the United Kingdom, and peaked within the top ten of the charts in Australia, Belgium (Flanders), Canada, Germany, the Republic of Ireland and Sweden.
Todd Pipes said in a Q magazine interview about the promotion of "Breakfast at Tiffany 's '': "As the song had ' breakfast ' in the title, radio stations thought it would be genius to have us on at breakfast time. We 'd be up till 3 am and they 'd wonder why we were pissed off playing at 6 am. '' Follow - up singles failed to match the success of "Breakfast at Tiffany 's '', hence the band 's classification as a one - hit wonder.
"Breakfast at Tiffany 's '' is sung from the point of view of a man whose girlfriend is on the verge of breaking up with him because the two have nothing in common. Desperate to find something, the man brings up the Audrey Hepburn film Breakfast at Tiffany 's, and his girlfriend recalls that they "both kinda liked it. '' He argues that this should serve as enough motivation for them to work out their problems based on the notion that love will always find a way to make things work.
The film Roman Holiday inspired the lyrics of the song, but songwriter Todd Pipes thought that one of Hepburn 's other films would make a better song title.
Brian Wahlert called "Breakfast at Tiffany 's '' "a cute, catchy song that should fit in well on adult contemporary, Top - 40 and alternative radio '', with a memorable melody that makes it "a perfect single, along with the mildly repetitive, conversational lyrics of the chorus and the bright, acoustic guitar ''. However, Tom Sinclair of Entertainment Weekly was unimpressed, writing that "possibly the year 's most innocuous single, ' Breakfast at Tiffany 's ' is distressingly prosaic pop from a wimpy - sounding Texas quartet ''; he added that it lacked any "musical piquancy ''. The Houston Press listed the song as the second worst by an artist from Texas, after Vanilla Ice 's "Ice Ice Baby ''.
VH1 and Blender ranked the song # 6 on their list of the "50 Most Awesomely Bad Songs Ever ''.
The music video features the band members arriving to a breakfast table and being served by butlers, beside the curb in front of Tiffany & Co. in Midtown Manhattan. At the end of the video a young woman dressed like Holly Golightly (Audrey Hepburn 's character from the film) walks past on the sidewalk, and takes off her sunglasses.
|
who sings you get the best of my love | Best of My Love (Eagles song) - wikipedia
"Best of My Love '' is a song written by Don Henley, Glenn Frey, and J.D. Souther. It was originally recorded by the Eagles (with Henley singing lead vocals), and included on their 1974 album On the Border. The song was released as the third single from the album, and it became the band 's first Billboard Hot 100 number 1 single in March 1975. The song also topped the easy listening (adult contemporary) chart for one week a month earlier. Billboard ranked it as the number 12 song for 1975.
In 2009, J.D. Souther said of the writing of "Best of My Love '': "Glenn found the tune; the tune I think came from a Fred Neil record... We were working on that album (On the Border) and came to London. The three of us were writing it and were on deadline to get it finished. I do n't know where we got the inspiration. '' Glenn Frey recalled: "I was playing acoustic guitar one afternoon in Laurel Canyon, and I was trying to figure out a tuning that Joni Mitchell had shown me a couple of days earlier. I got lost and ended up with the guitar tuning for what would later turn out to be ' The Best of My Love. ' '' According to Henley, much of the lyrics were written in a booth in Dan Tana 's Restaurant close to the Troubadour. The maître d ' of Dan Tana 's, Guido, was thanked in the liner notes of the album.
"Best of My Love '' was recorded at Olympic Studios in London. The Eagles had begun working on On the Border with producer Glyn Johns who had helmed their Eagles debut album and the follow - up Desperado album. Despite the success of their debut album the Eagles (Frey specifically) were unhappy over Johns ' preference for country rock and toning down their own rock aspirations, and their dissatisfaction with Johns was reinforced by the similarly honed Desperado album which was a comparative failure and Johns ' no - drug policy during the recording. After six weeks in London -- which yielded "Best of My Love '' and one other usable track, "You Never Cry Like a Lover '' -- the Eagles discontinued working with Johns, then spending eight weeks touring in Europe and the USA and completing the recording of On the Border at the Record Plant in their hometown of Los Angeles with Bill Szymczyk producing. "Best of My Love '' was remixed by Szymczyk.
Frey was reluctant to release "Best of My Love '' as a single and held off its release for some time. The release of the Eagles ' "Best of My Love '' as a single has been attributed to the track 's airplay at WKMI - AM in Kalamazoo MI, where radio dj Jim Higgs - also station music & program director - began playing the track off its parent album On the Border soon after that album 's release in the spring of 1974, favoring "Best of My Love '' over the official single releases "Already Gone '' and "James Dean ''. Advised by Higgs of the strong positive response of WKMI 's listeners to "Best of My Love '', Asylum Records gave the track a limited single release of 1000 copies available only in the Kalamazoo area, with reaction to this test - release securing the full release of "Best of My Love '' as a single on November 5, 1974.
However, when the single was finally released, Asylum Records had truncated the song so that it would be more radio - friendly, but had done so without the band 's knowledge or approval. It caused considerable anger in the band, and Henley demanded that the single be pulled from stores. The song however would become the most successful of their singles released so far, giving the band their first number 1 single. When the song was judged to have sold a million copies, the Eagles ' manager, Irving Azoff, sent to Asylum Records a gold record with a piece cut out, mounted on a plaque with a caption that said "The Golden Hacksaw Award ''.
Prior to the release of the Eagles version as a single, John Lees released his version of "Best of My Love '' as a single in 1974. The song was also recorded by Brooks & Dunn in the Eagles ' tribute album Common Thread: The Songs of the Eagles. It was also included in their compilation album Playlist: The Very Best of Brooks & Dunn. South African trumpeter Hugh Masekela covered the song in his 1976 album Melody Maker.
|
who are the largest banks in the us | List of largest banks in the United States - wikipedia
The following table lists the 100 largest banks in the United States by assets as of March 31, 2018.
|
can you take the bar exam in california without going to law school | Admission to the bar in the United states - wikipedia
Admission to the bar in the United States is the granting of permission by a particular court system to a lawyer to practice law in that system. Each U.S. state and similar jurisdiction (e.g., territories under federal control) has its own court system and sets its own rules for bar admission (or privilege to practice law), which can lead to different admission standards among states. In most cases, a person who is "admitted '' to the bar is thereby a "member '' of the particular bar.
In the canonical case, lawyers seeking admission must earn a Juris Doctor degree from a law school approved by the jurisdiction, and then pass a bar exam administered by it. Typically, there is also a character and fitness evaluation, which includes a background check. However, there are exceptions to each of these requirements. A lawyer who is admitted in one state is not automatically allowed to practice in any other. Some states have reciprocal agreements that allow attorneys from other states to practice without sitting for another full bar exam; such agreements differ significantly among the states.
The use of the term "bar '' to mean "the whole body of lawyers, the legal profession '' comes ultimately from English custom. In the early 16th century, a railing divided the hall in the Inns of Court, with students occupying the body of the hall and readers or Benchers on the other side. Students who officially became lawyers were "called to the bar '', crossing the symbolic physical barrier and thus "admitted to the bar ''. Later, this was popularly assumed to mean the wooden railing marking off the area around the judge 's seat in a courtroom, where prisoners stood for arraignment and where a barrister stood to plead. In modern courtrooms, a railing may still be in place to enclose the space which is occupied by legal counsel as well as the criminal defendants and civil litigants who have business pending before the court.
The first bar exam in what is now the United States was instituted by Delaware Colony in 1763, as an oral examination before a judge. The other American colonies soon followed suit. By the late 19th century, the examinations were administered by committees of attorneys, and they eventually changed from an oral examination to a written one.
Today, each state has its own rules which are the ultimate authority concerning admission to its bar. Generally, admission to a bar requires that the candidate do the following:
Alabama, California, Connecticut, Georgia, Massachusetts, West Virginia and Tennessee allow individuals to take the bar exam upon graduation from law schools approved by state bodies but not accredited by the American Bar Association. The state of New York makes special provision for persons educated to degree - level in common law from overseas, with most LLB degree holders being qualified to take the bar exam and, upon passing, be admitted to the bar. But in certain states (e.g., Arizona), one may not be allowed to actually take the bar exam unless one 's law school is accredited by the ABA, and this requirement has withstood constitutional attack: thus, graduates of a law school without ABA accreditation may not sit for the Arizona bar, although they may take the bar in other states.
In California, certain law schools are registered with the Committee of Bar Examiners of The State Bar of California. Such schools, though not accredited by either the ABA or the Committee on Bar Examiners, are authorized to grant the Juris Doctor (J.D.) law degree. Students at these schools must take and pass the First - Year Law Students ' Examination (commonly referred to as the "Baby Bar '') administered by the CBE. Upon successful passing of the "Baby Bar, '' those students may continue with their law studies to obtain their J.D. degree. Students at law schools accredited by either the ABA or CBE are exempt from having to take and pass the Baby Bar.
In California, Tennessee, Vermont, Virginia, Washington, and Wyoming an applicant who has not attended law school may take the bar exam after study under a judge or practicing attorney for an extended period of time. This method is known as "reading law '' or "reading the law ''.
New York requires that applicants who are reading the law have at least one year of law school study (Rule 520.4 for the Admission of Attorneys).
Maine allows students with two years of law school to serve an apprenticeship in lieu of completing their third year.
Most attorneys seek and obtain admission only to the bar of one state, and then rely upon pro hac vice admissions for the occasional out - of - state matter. However, many new attorneys do seek admission in multiple states, either by taking multiple bar exams or applying for reciprocity. This is common for those living and working in metro areas which sprawl into multiple states, such as Washington, D.C. and New York City. Attorneys based in predominantly rural states or rural areas near state borders frequently seek admission in multiple states in order to enlarge their client base.
Note that in states that allow reciprocity, admission on motion may have conditions that do not apply to those admitted by examination. For example, attorneys admitted on motion in Virginia are required to show evidence of the intent to practice full - time in Virginia and are prohibited from maintaining an office in any other jurisdiction. Also, their licenses automatically expire when they no longer maintain an office in Virginia.
Admission to a state 's bar is not necessarily the same as membership in that state 's bar association. There are two kinds of state bar associations:
Thirty - two states and the District of Columbia require membership in the state 's bar association to practice law there. These states are said to have a mandatory, unified, or integrated bar.
For example, the State Bar of Texas is an agency of the judiciary and is under the administrative control of the Texas Supreme Court, and is composed of those persons licensed to practice law in Texas; each such person is required by law to join the State Bar by registering with the clerk of the Texas Supreme Court.
The State Bar of California is another example of an integrated bar as is The Florida Bar.
A voluntary bar association is a private organization of lawyers. Each may have social, educational, and lobbying functions, but does not regulate the practice of law or admit lawyers to practice or discipline lawyers. An example of this is the New York State Bar Association.
There is a statewide voluntary bar association in each of the eighteen states that have no mandatory or integrated bar association. There are also many voluntary bar associations organized by geographic area (e.g., Chicago Bar Association), interest group or practice area (e.g., Federal Communications Bar Association), or ethnic or identity community (e.g., Hispanic National Bar Association).
The American Bar Association (ABA) is a nationwide voluntary bar association with the largest membership in the United States. The National Bar Association was formed in 1925 to focus on the interests of African - American lawyers after they were denied membership by the ABA.
Admission to a state bar does not automatically entitle an individual to practice in federal courts, such as the United States district courts or United States court of appeals. In general, an attorney is admitted to the bar of these federal courts upon payment of a fee and taking an oath of admission. An attorney must apply to each district separately. For instance, a Texas attorney who practices in federal courts throughout the state would have to be admitted separately to the Northern District of Texas, the Eastern District, the Southern District, and the Western District. To handle a federal appeal, the attorney would also be required to be admitted separately to the Fifth Circuit Court of Appeals for general appeals and to the Federal Circuit for appeals that fall within that court 's jurisdiction. As the bankruptcy courts are divisions of the district courts, admission to a particular district court usually includes automatic admission to the corresponding bankruptcy court. The bankruptcy courts require that attorneys attend training sessions on electronic filing before they may file motions.
Some federal district courts have extra admission requirements. For instance, the Southern District of Texas requires attorneys seeking admission to attend a class on that District 's practice and procedures. For some time, the Southern District of Florida administered an entrance exam, but that requirement was eliminated by Court order in February 2012. The District of Rhode Island requires candidates to attend classes and to pass an examination.
An attorney wishing to practice before the Supreme Court of the United States must apply to do so, must be admitted to the bar of the highest court of a state for three years, must be sponsored by two attorneys already admitted to the Supreme Court bar, must pay a fee and must take either a spoken or written oath.
Various specialized courts with subject - matter jurisdiction, including the United States Tax Court, have separate admission requirements. The Tax Court is unusual in that a non-attorney may be admitted to practice. However, the non-attorney must take and pass an examination administered by the Court to be admitted, while attorneys are not required to take the exam. Most members of the Tax Court bar are attorneys.
Admission to the Court of Appeals for the Federal Circuit is open to any attorney admitted to practice and in good standing with the U.S. Supreme Court, any of the other federal courts of appeal, any federal district court, the highest court of any state, the Court of International Trade, the Court of Federal Claims, the Court of Appeals for Veterans Claims, or the District of Columbia Court of Appeals. An oath and fee are required.
Some federal courts also have voluntary bar associations associated with them. For example, the Bar Association of the Fifth Federal Circuit, the Bar Association of the Third Federal Circuit, or the Association of the Bar of the United States Court of Appeals for the Eighth Circuit all serve attorneys admitted to practice before specific federal courts of appeals.
56 districts (around 60 % of all district courts) require an attorney to be admitted to practice in the state where the district court sits. The other 39 districts (around 40 % of all district courts) extend admission to certain lawyers admitted in other states, although conditions vary from court to court. Only 13 districts extend admission to attorneys admitted to any U.S. state bar. This requirement is not necessarily consistent within a state. For example, in Ohio, the Southern District generally requires membership in the Ohio state bar for full admission, while full admission to the Northern District is open to all attorneys in good standing with any U.S. jurisdiction. The District of Vermont requires membership in the Vermont State Bar or membership in the Bar of a federal district court in the First and Second Circuits. The District of Connecticut, within the Second Circuit, will admit any member of the Connecticut bar or of the bar of any United States District Court.
Persons wishing to "prosecute '' patent applications (i.e., represent clients in the process of obtaining a patent) must first pass the USPTO registration examination, frequently referred to as the "patent bar. '' Detailed information about applying for the registration examination is available in the USPTO 's General Requirements Bulletin. Although only registered patent attorneys or patent agents can prosecute patent applications in the USPTO, passing the patent bar is not necessary to advise clients on patent infringement, to litigate patent issues in court, or to prosecute trademark applications.
A J.D. degree is not required to sit for the patent bar. Lawyers who pass the patent bar exam may refer to themselves as a patent attorney (rules of legal ethics prohibit lawyers from using the title "patent attorney '' unless they are admitted to practice before the USPTO). While patent lawyers have a relevant four - year degree and many have graduate technical degrees, patent litigation attorneys do not have to be patent attorneys, although some are. On the other hand, non-lawyers who pass the patent bar are referred to as "patent agents. '' Patent agents may not hold themselves out as licensed attorneys.
Applicants must have U.S. citizenship, permanent residency (a Green Card), or a valid work visa for a patent - related job. An applicant on a work visa, upon passing the exam, is only given "limited recognition '' to perform work for the employer listed on the work visa. Only U.S. citizens can maintain their registration in the patent bar while they are working outside the United States. Additionally, the USPTO requires that applicants to the patent bar have earned a bachelor 's degree. Applicants are categorized as having earned an accredited "bachelor 's degree in a recognized technical subject '' (category A), having earned a "bachelor 's degree in another subject '' with sufficient credits to qualify for the exam (category B), or having "practical engineering or scientific experience '' (category C).
Applicants in "category A '' must have an engineering or "hard science '' degree in a field listed in the General Requirements Bulletin. Note that the degree field as shown on the diploma must be exactly as it appears on the list; for example, "aerospace engineering '' does not qualify under category A, while "aeronautical engineering '' does. A computer science degree is accepted under "category A '' as long as it is received from an Accreditation Board for Engineering and Technology (ABET) - accredited or CSAB - accredited program.
Applicants in "category B '' must have earned a bachelor 's degree, and must have sufficient credits in science and engineering courses to meet the USPTO 's requirements; the number of credits depends on the specific discipline. The coursework must include a minimum of eight credit - hours of acceptable classes in either chemistry or physics. Each course being relied upon by the applicant for credit is evaluated by the USPTO 's Office of Enrollment and Discipline for suitability; see the General Requirements Bulletin for the details. Engineering and Computer Science majors whose degree programs do not meet "category A '' requirements (typically due to the named field of the degree or, especially in computer science, lack of program accreditation) can apply under "category B. ''
Applicants in "category C '' may present evidence of passing the Fundamentals of Engineering exam as proof of technical education. They must also have a bachelor 's degree. Although the admission requirements allow applicants to substitute proof of technical experience for technical education, this is rarely done in practice.
Service as a member of a military service 's Judge Advocate General 's Corps requires graduation from an ABA - accredited law school, a license to practice law in any state or territory of the United States, and training at the specialized law school of one of the three military services (The Judge Advocate General 's Legal Center and School for the Army, the Naval Justice School for the Navy, Marine Corps, and Coast Guard, and the Air Force Judge Advocate General School for the Air Force).
In a court - martial, the accused is always provided JAG Corps defense counsel at no expense to the accused, but is also entitled to retain private civilian counsel at his or her own expense. Civilian counsel must either be a member of both a federal bar and a state bar, or must be otherwise authorized to practice law by a recognized licensing authority and certified by the military judge as having sufficient familiarity with criminal law as applicable in courts - martial.
The American legal system is unusual in that, with few exceptions, it has no formal apprenticeship or clinical training requirements between the period of academic legal training and the bar exam, or even after the bar exam. Two exceptions are Delaware and Vermont, which require that candidates for admission serve a full - time clerkship of at least five months (Delaware) or three months (Vermont) in the office of a lawyer previously admitted in that state before being eligible to take the oath of admission. New Jersey has a similar requirement, with the addition of training and instruction.
On October 12, 2005, the Washington State Supreme Court adopted amendments to Admission to Practice Rule 5 and 18, mandating that, prior to admission, Bar applicants must complete a minimum of four hours of approved pre-admission education.
Some law schools have tried to rectify this lack of experience by requiring supervised "Public Service Requirements '' of all graduates. States that encourage law students to undergo clinical training or perform public service in the form of pro bono representation may allow students to appear and practice in limited court settings under the supervision of an admitted attorney. For example, in New York 's Third Appellate Department, "Any officer or agency of the state... or any legal aid organization... may make application to the presiding justice of this court for an order authorizing the employment or utilization of law students who have completed at least two semesters of law school and eligible law school graduates as law interns to render and perform legal services... which the officer, agency or organization making the application is authorized to perform. '' Similarly, New York 's state Department of Labor allows law students to practice in unemployment benefits hearings before the agency.
In addition to the educational and bar examination requirements, most states also require an applicant to demonstrate good moral character. Character Committees look to an applicant 's history to determine whether the person will be fit to practice law in the future. This history may include prior criminal arrests or convictions, academic honor code violations, prior bankruptcies or evidence of financial irresponsibility, addictions or psychiatric disorders, sexual misconduct, prior civil lawsuits or driving history. In recent years, such investigations have increasingly focused on the extent of an applicant 's financial debt, as increased student loans have prompted concern for whether a new lawyer will honor legal or financial obligations. For example, in early 2009, a person who had passed the New York bar and had over $400,000 in unpaid student loans was denied admission by the New York Supreme Court, Appellate Division due to excessive indebtedness, despite being recommended for admission by the state 's character and fitness committee. He moved to void the denial, but the court upheld its original decision in November 2009, by which time his debt had accumulated to nearly $500,000. More recently, the Court of Appeals of Maryland rejected the application of a candidate who displayed a pattern of financial irresponsibility, applied for a car loan with false information, and failed to disclose a recent bankruptcy.
When applying to take a state 's bar examination, applicants are required to complete extensive questionnaires seeking the disclosure of significant personal, financial and professional information. For example, in Virginia, each applicant must complete a 24 - page questionnaire and may appear before a committee for an interview if the committee initially rejects their application. The same is true in the State of Maryland, and in many other jurisdictions, where the state 's supreme court has the ultimate authority to determine whether an applicant will be admitted to the bar. In completing the bar application, and at all stages of this process, honesty is paramount. An applicant who fails to disclose material facts, no matter how embarrassing or problematic, will greatly jeopardize the applicant 's chance of practicing law.
Once all prerequisites have been satisfied, an attorney must actually be admitted. The mechanics of this vary widely. For example, in California, the admittee simply takes an oath before any state judge or notary public, who then co-signs the admission form. Upon receiving the signed form, the State Bar of California adds the new admittee to a list of applicants recommended for admission to the bar which is automatically ratified by the Supreme Court of California at its next regular weekly conference; then everyone on the list is added to the official roll of attorneys. The State Bar also holds large - scale formal admission ceremonies in conjunction with the U.S. Court of Appeals for the Ninth Circuit and the federal district courts, usually in the same convention centers where new admittees took the bar examination, but these are optional.
In other jurisdictions, such as the District of Columbia, new admittees must attend a special session of court in person to take the oath of admission in open court; they can not take the oath before any available judge or notary public.
A successful applicant is permitted to practice law after being sworn in as an officer of the Court; in most states, that means they may begin filing pleadings and appearing as counsel of record in any trial or appellate court in the state. Upon admission, a new lawyer is issued a certificate of admission, usually from the state 's highest court, and a membership card attesting to admission.
Two states are exceptions to the general rule of admission by the state 's highest court:
In most states, lawyers are also issued a unique bar identification number. In states like California where unauthorized practice of law is a major problem, the state bar number must appear on all documents submitted by a lawyer.
|
secret service code names for presidents and first ladies | Secret Service code name - wikipedia
The United States Secret Service uses code names for U.S. presidents, first ladies, and other prominent persons and locations. The use of such names was originally for security purposes and dates to a time when sensitive electronic communications were not routinely encrypted; today, the names simply serve for purposes of brevity, clarity, and tradition. The Secret Service does not choose these names, however. The White House Communications Agency assigns them. WHCA was originally created as the White House Signal Detachment under Franklin Roosevelt.
The WHCA, an agency of the White House Military Office, is headquartered at Joint Base Anacostia - Bolling and consists of six staff elements and seven organizational units. WHCA also has supporting detachments in Washington, D.C. and various locations throughout the United States of America.
According to established protocol, good codewords are unambiguous words that can be easily pronounced and readily understood by those who transmit and receive voice messages by radio or telephone regardless of their native language. Traditionally, all family members ' code names start with the same letter.
The codenames change over time for security purposes, but are often publicly known. For security, codenames are generally picked from a list of such ' good ' words, but avoiding the use of common words which could likely be intended to mean their normal definitions.
U.S. Secret Service codenames are often given to high - profile political candidates (such as Presidential and Vice Presidential candidates), and their respective families and spouses who are assigned U.S. Secret Service protection. These codenames often differ from those held if they are elected or those from prior periods if they held positions needing codenames.
U.S. Secret Service codenames are not only given to people, they are often given to places, locations and even objects, such as aircraft like Air Force One, and vehicles such as the Presidential State Car.
In popular culture, the practice of assigning codenames is often used to provide additional verisimilitude in fictional works about the executive branch, or high - ranking governmental figures.
|
when did they move the nba 3 point line | Three - point field goal - wikipedia
A three - point field goal (also a trey / 3 - pointer) is a field goal in a basketball game made from beyond the three - point line, a designated arc surrounding the basket. A successful attempt is worth three points, in contrast to the two points awarded for field goals made within the three - point line and the one point for each made free throw.
The distance from the basket to the three - point line varies by competition level: in the National Basketball Association (NBA) the arc is 23 feet 9 inches (7.24 m) from the basket; in FIBA and the WNBA (the latter uses FIBA 's three - point line standard) the arc is 6.75 metres or 22 feet 1.75 inches from the basket; and in the National Collegiate Athletic Association (NCAA) the arc is 20 feet 9 inches (6.32 m) from the basket. In the NBA and FIBA / WNBA, the three - point line becomes parallel to each sideline at the points where the arc is 3 feet (0.91 m) from each sideline; as a result the distance from the basket gradually decreases to a minimum of 22 feet (6.71 m). In the NCAA the arc is continuous for 180 ° around the basket. There are more variations.
In 3x3, a FIBA - sanctioned variant of the half - court 3 - on - 3 game, the "three - point '' line exists, but shots from behind the line are only worth 2 points. All other shots are worth 1 point.
The three - point line was first tested at the collegiate level in a 1945 NCAA game between Columbia and Fordham but it was not kept as a rule. At the direction of Abe Saperstein, the American Basketball League became the first basketball league to institute the rule in 1961. Its three - point line was a radius of 25 feet (7.62 m) from the baskets, except along the sides. The Eastern Professional Basketball League followed in its 1963 -- 64 season.
The three - point shot later became popularized by the American Basketball Association after its introduction in the 1967 -- 68 season. ABA commissioner George Mikan stated the three - pointer "would give the smaller player a chance to score and open up the defense to make the game more enjoyable for the fans. '' During the 1970s, the ABA used the three - point shot, along with the slam dunk, as a marketing tool to compete with the National Basketball Association (NBA).
In the 1979 -- 80 season, after having tested it in the previous pre-season, the NBA adopted the three - point line despite the view of many that it was a gimmick. Chris Ford of the Boston Celtics is widely credited with making the first three - point shot in NBA history on October 12, 1979. Kevin Grevey of the Washington Bullets also made one on the same day.
The sport 's international governing body, FIBA, introduced the three - point line in 1984, at a distance of 6.25 m (20 ft 6 in).
The NCAA 's Southern Conference became the first collegiate conference to use the three - point rule, adopting a 22 - foot (6.71 m) line for the 1980 -- 81 season. Ronnie Carr of Western Carolina University was the first to score a three - point field goal in college basketball history on November 29, 1980. Over the following five years, NCAA conferences differed in their use of the rule and distance required for a three - pointer. The line was as close as 17 ft 9 in (5.41 m) in the Atlantic Coast Conference, and as far away as 22 feet in the Big Sky Conference. Used in conference play, it was adopted by the NCAA for the 1986 -- 87 men 's season at 19 ft 9 in (6.02 m), and was first used in the NCAA Tournament in 1987. In the same 1986 -- 87 season, the NCAA adopted the three - pointer in women 's basketball on an experimental basis, using the same 19 ft 9 in distance, and made its use mandatory beginning in 1987 -- 88. In 2007, the NCAA lengthened the men 's three - point distance to 20 ft 9 in (6.32 m), with the rule coming into effect at the beginning of the 2008 -- 09 season. The NCAA women 's three - point distance was moved to match the men 's distance in 2011 -- 12. American high schools, along with elementary and middle schools, adopted a 19 ft 9 in (6.02 m) line nationally in 1987, a year after the NCAA. The NCAA used the FIBA three - point line (see below) in the 2018 National Invitation Tournament.
During the 1994 -- 95, 1995 -- 96, and 1996 -- 97 seasons, the NBA attempted to address decreased scoring by shortening the distance of the line from 23 ft 9 in (7.24 m) (22 ft (6.71 m) at the corners) to a uniform 22 ft (6.71 m) around the basket. From the 1997 -- 98 season on, the NBA reverted the line to its original distance of 23 ft 9 in (22 ft at the corners, with a 3 inch differential). Ray Allen is currently the NBA all - time leader in career made three - pointers with 2,973.
In 2008, FIBA announced that the distance would be increased by 50 cm (19.69 in) to 6.75 m (22 ft 1.75 in), with the change being phased in beginning in October 2010. In December 2012, the WNBA announced that it would be using FIBA 's distance, too, as of the 2013 season. The NBA has discussed adding a four - point line, according to president Rod Thorn.
In the NBA, three - point field goals have become increasingly frequent over the years, with effectiveness increasing slightly. The 1979 - 80 season had an average of 2.2 three - point goals per game and 6.6 attempts (28 % effectiveness). The 1989 - 90 season had an average of 4.8 three - point goals per game and 13.7 attempts (35 % effectiveness). The 2009 - 10 season had an average of 6.4 three - point goals per game and 18.1 attempts (36 % effectiveness). The 2016 - 17 season had an average of 9.7 three - point goals per game and 27.0 attempts (36 % effectiveness).
A three - point line consists of an arc at a set radius measured from the point on the floor directly below the center of the basket, and two parallel lines equidistant from each sideline extending from the nearest end line to the point at which they intersect the arc. In the NBA and FIBA standard, the arc spans the width of the court until it is a specified minimum distance from each sideline. The three - point line then becomes parallel to the sidelines from those points to the baseline. The unusual formation of the three - point line at these levels allows players some space from which to attempt a three - point shot at the corners of the court; the arc would be less than 2 feet (0.61 m) from each sideline at the corners if it was a continuous arc. In the NCAA and American high school standards, the arc spans 180 ° around the basket, then becomes parallel to the sidelines from the plane of the basket center to the baseline (5 feet 3 inches or 1.60 metres). The distance of the three - point line to the center of the hoop varies by level:
A player 's feet must be completely behind the three - point line at the time of the shot or jump in order to make a three - point attempt; if the player 's feet are on or in front of the line, it is a two - point attempt. A player is allowed to jump from outside the line and land inside the line to make a three - point attempt, as long as the ball is released in mid-air.
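A simplified way to see how the NBA line 's arc and corner segments fit together is the following Python sketch, which tests whether a shot location lies behind the line. The coordinate frame, the function name and the treatment of the baseline are assumptions made for illustration, not an official court specification.

```python
import math

# Simplified model of the NBA line using the figures above: an arc of radius
# 23.75 ft centred on the point below the basket, becoming straight segments
# 22 ft from that point (3 ft inside each sideline) near the corners.
ARC_RADIUS_FT = 23.75     # 23 ft 9 in
CORNER_OFFSET_FT = 22.0   # straight segments near each corner

def is_behind_nba_line(x_ft, y_ft):
    """x = lateral distance from the basket's centre line, y = distance
    toward half court; both in feet, shooter assumed in the frontcourt."""
    if abs(x_ft) >= CORNER_OFFSET_FT:               # corner: straight segment
        return True
    return math.hypot(x_ft, y_ft) > ARC_RADIUS_FT   # elsewhere: beyond the arc

print(is_behind_nba_line(23.0, 2.0))   # True  - corner three
print(is_behind_nba_line(0.0, 23.0))   # False - inside the arc at the top
```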
An official raises his / her arm with three fingers extended to signal the shot attempt. If the attempt is successful, he / she raises his / her other arm with all fingers fully extended in manner similar to a football official signifying successful field goal to indicate the three - point goal. The official must recognize it for it to count as three points. Instant replay has sometimes been used, depending on league rules. The NBA, WNBA, FIBA and the NCAA specifically allow replay for this purpose. In NBA, FIBA, and WNBA games, video replay does not have to occur immediately following a shot; play can continue and the officials can adjust the scoring later in the game, after reviewing the video. However, in late game situations, play may be paused pending a review.
If a shooter is fouled while attempting a three - pointer and subsequently misses the shot, the shooter is awarded three free - throw attempts. If a player completes a three - pointer while being fouled, the player is awarded one free - throw for a possible 4 - point play. Conceivably, if a player completed a three - pointer while being fouled, and that foul was ruled as either a Flagrant 1 or a Flagrant 2 foul, the player would be awarded two free throws for a possible 5 - point play.
Major League Lacrosse features a two - point line which forms a 15 - yard (14 m) arc around the front of the goal. Shots taken from behind this line count for two points, as opposed to the standard one point.
In gridiron football, a standard field goal is worth three points; various professional and semi-pro leagues have experimented with four - point field goals. NFL Europe and the Stars Football League adopted a rule similar to basketball 's three - point line in which an additional point was awarded for longer field goals; in both leagues any field goal of 50 yards (46 m) or more was worth four points. The Arena Football League awards four points for any successful drop kicked field goal (like the three - point shot, the drop kick is more challenging than a standard place kick, as the bounce of the ball makes a kick less predictable, and arena football also uses narrower goal posts for all kicks than the outdoor game does).
During the existence of the World Hockey Association in the 1970s, there were proposals for two - point hockey goals for shots taken beyond an established distance (one proposal was a 44 - foot (13.4 m) arc, which would have intersected the faceoff circles), but this proposal gained little support and faded after the WHA merged with the NHL. It was widely believed that long - distance shots in hockey had little direct relation to skill (usually resulting more from goalies ' vision being screened or obscured), plus with the lower scoring intrinsic to the sport a two - point goal was seen as disruptive of the structure of the game.
The Super Goal is a similar concept in Australian rules football, in which a 50 - meter (55 yd) arc determines the value of a goal; within the arc, it is the usual 6 points, but 9 points are scored for a "super goal '' scored from outside the arc. To date the super goal is only used in pre-season games and not in the season proper.
The National Professional Soccer League II, which awarded two points for all goals except those on the power play, also used a three - point line, drawn 45 feet (14 m) from the goal. It has since been adopted by some other indoor soccer leagues.
|
who won the missouri valley conference this year | 2017 -- 18 Missouri Valley conference Men 's Basketball season - wikipedia
The 2017 -- 18 Missouri Valley Conference men 's basketball season began with practices in October 2017, followed by the start of the 2017 -- 18 NCAA Division I men 's basketball season in November. Conference play began in late December 2017 and concluded in March with the Missouri Valley Conference Tournament at Scottrade Center in St. Louis, Missouri.
With a win against Evansville on February 18, 2018, Loyola clinched at least a share of its first - ever Missouri Valley Conference regular season championship. With a win over Southern Illinois on February 21, the Ramblers clinched the outright MVC championship.
Loyola defeated Illinois State in the championship game to win the MVC Tournament and received the conference 's automatic bid to the NCAA Tournament.
This season marked the first season with Valparaiso as a member of the conference. The Crusaders were invited to join the conference after Wichita State left the conference to join the American Athletic Conference.
Loyola received the conference 's only bid to the NCAA Tournament and advanced to the Final Four before losing to Michigan.
Drake was the only other conference school that received a bid to a postseason tournament, receiving a bid to the CollegeInsider.com Tournament where they went 1 -- 1.
Drake 's fourth - year head coach Ray Giacoletti resigned on December 6, 2016 after the first eight games of the season. Assistant coach Jeff Rutter was named interim head coach. Following the season, the school chose not to keep Jeff Rutter as head coach and hired Niko Medved, former head coach at Furman, as the Bulldogs ' new head coach.
This table summarizes the head - to - head results between teams in conference play. Each team played 18 conference games, playing each team twice.
Throughout the regular season, the Missouri Valley Conference named a player and newcomer of the week.
Ryan Taylor, Evansville
Teams were seeded by conference record, with ties broken by record between the tied teams followed by overall adjusted RPI, if necessary. The top six seeds received first - round byes.
* denotes overtime period
The winner of the MVC Tournament, Loyola, received the conference 's automatic bid to the 2018 NCAA Division I Men 's Basketball Tournament.
|
who is supreme leader in new star wars | Supreme Leader Snoke - wikipedia
Supreme Leader Snoke is a fictional character in the Star Wars franchise. He is a CGI character voiced and performed by Andy Serkis. Introduced in the 2015 film Star Wars: The Force Awakens, Snoke is the Supreme Leader of the First Order, a military junta resurrected from the fallen Galactic Empire, which seeks to reclaim control of the galaxy. Powerful with the Force, he seduces Ben Solo to the dark side by telling him that he can be the next Darth Vader, and Solo serves him as the commander Kylo Ren. In Star Wars: The Last Jedi (2017), Ren assassinates Snoke, replacing him as Supreme Leader.
Snoke 's appearance changed throughout principal photography and post-production of The Force Awakens. Serkis said, "It 's the first time I 've been on set not yet knowing what the character 's gon na look like. I mean, talk about secrecy! '' According to the actor, the character 's appearance, voice, and movements evolved as he and the film 's writer / director J.J. Abrams challenged the visual effects team.
The Force Awakens senior sculptor Ivan Manzella wrote in The Art of Star Wars: The Force Awakens that "Snoke almost became a female at one point. J.J. picked out a maquette he liked and then we took it to a full - size version, sculpted in plasteline. J.J. and (creature creative supervisor Neal Scanlan) did n't want him to be old and decrepit, like the Emperor. ''
While Serkis secretly joined the project in February 2014, his casting in The Force Awakens was first announced on April 29, 2014. When asked about his role in July 2014, he joked, "I 'm not Yoda. '' In May 2015, a StarWars.com interview with photographer Annie Leibovitz about her The Force Awakens shoot for Vanity Fair revealed that Serkis would be playing a CGI character named Supreme Leader Snoke, and featured an image of the actor in motion capture gear. Serkis had previously played several CGI characters, including Gollum in The Lord of the Rings film series (2001 -- 2003), the titular ape in 2005 's King Kong, and Caesar in the Planet of the Apes reboot series.
In November 2015, Serkis said of the process of creating Snoke:
When we first started working on it, (Abrams) had some rough notions of how Snoke was gon na look, but it really had n't been fully - formed and it almost came out of discussion and performance... We shot on set of course, and I was in the scenes I have with other actors, but the beauty of this process is you can go back and reiterate, keep informing and honing beats and moments. So J.J., after we shot last year, we 've had a series of sessions where I 'd be in London at The Imaginarium, my studio, while he 's been directing from L.A., and we 've literally been creating further additions and iterations to the character. That 's been fascinating. And in the meantime I 've been able to see the look and design of the character grow and change as the performances change. So it 's been really exciting in that respect.
According to Serkis 's costar Lupita Nyong'o, who played the CGI character Maz Kanata in The Force Awakens, the actor coached her on performance - capture work, telling Nyong'o that "a motion - capture character you develop the same way as any other. You have to understand who the character is and what makes them who they are. '' Serkis said of filming:
It was quite an unusual situation. I worked specifically with Domhnall Gleeson and with Adam Driver. My first day was basically standing on a 25 - foot podium doing Lord Snoke without the faintest idea what he looked like... or in fact who he was! I was very high up, totally on my own, away from everybody else, but acting with them... we used sort of a "Kongolizer '' method of having sound come out of speakers to give a sense of scale and distance for the character. So it was very challenging and scary, in fact probably one of my most scary film experiences I 've ever had.
Snoke, whom Abrams called "a powerful figure on the Dark Side of the Force '', is the leader of the evil First Order. He is master to the film 's primary villain, Kylo Ren. Serkis described Snoke as "quite an enigmatic character, and strangely vulnerable at the same time as being quite powerful. Obviously he has a huge agenda. He has suffered a lot of damage. '' Serkis called Snoke "a new character in this universe '', adding "I think it 'd be fair to say that he is aware of the past to a great degree. ''
Explaining why CGI was the only way to create Snoke 's unique appearance, Serkis said before the film 's premiere, "The scale of him, for instance, is one reason. He is large. He appears tall. And also just the facial design -- you could n't have gotten there with prosthetics... he has a very distinctive, idiosyncratic bone and facial structure. '' Chief of creature and droid effects Neal Scanlan said, "This character is much better executed as a CGI character. That 's just a practical reality when he 's 7 - foot - something tall; he 's very, very thin. '' Snoke 's "scarred, cavernous face '' was not revealed before the release of the film, in which he appears as a "massive, ominous hologram ''. The character 's deep voice was first heard in the teaser trailer released on November 28, 2014.
Robbie Collin of The Telegraph described the disfigured and skeletal Snoke as a "sepulchral horror '', Richard Roeper of the Chicago Sun - Times called him "hissing and grotesque '', and Andrew O'Hehir of Salon dubbed the character "a spectral demonic figure ''. Variety 's Justin Chang wrote that Snoke resembled "a plus - sized, more articulate Gollum '', and Chris Nashawaty of Entertainment Weekly described him as "essentially Emperor Palpatine crossed with one of the aliens from Close Encounters ''. Stephanie Zacharek of Time called the character "a giant, scary, noseless dude who sits placidly in an oversized chair like a dark - lord version of the Lincoln Monument. ''
There are multiple fan theories regarding the origins and identity of Snoke, including that he may be Darth Plagueis, a Sith Lord anecdotally mentioned in Revenge of the Sith and the master of Palpatine, possessing the power to prevent death; "the Operator '' Gallius Rax, a mysterious First Order manipulator from Chuck Wendig 's Aftermath novel trilogy; or Ezra Bridger, a main character from the animated series Star Wars Rebels. Pablo Hidalgo of the Lucasfilm Story Group dismissed the Darth Plagueis theory in May 2016, and Rax 's death in the 2017 novel Aftermath: Empire 's End disproved that theory as well.
In his first appearance in the film, Snoke is introduced as Supreme Leader of the First Order, and master to Kylo Ren. Seduced to the dark side by Snoke, the masked Kylo is really Ben Solo, the son of Han Solo and Leia Organa. Snoke senses an "awakening '' in the Force, and warns Ren that the limits of his power will be tested when he faces his father in pursuit of the wayward droid BB - 8. Leia reveals to her estranged husband Han that it was Snoke 's influence which corrupted their son. After Ren is defeated in a lightsaber duel with Rey, Snoke orders General Hux to collect Ren and ready him to complete his training.
Snoke appears in the 2015 novelization of The Force Awakens by Alan Dean Foster. In the novel, Leia tells Han in more detail how Snoke, aware that their son would be "strong with the Force '' and possess "equal potential for good or evil '', had long watched Ben and manipulated events to draw him to the dark side. An unplayable Lego minifigure version of Snoke appears in cutscenes in the 2016 video game Lego Star Wars: The Force Awakens.
Following the events of The Force Awakens, Snoke reprimands Hux for his failings as a military leader, and Ren for his failure to find and kill Luke Skywalker. Ren brings Rey to Snoke, who tortures her for information on Luke 's location. He reveals to Ren and Rey that he created the psychic bond between them as part of a plan to destroy Luke. Snoke then orders Ren to kill Rey, and gloats over his total control over his apprentice, only for Ren to use the Force to activate Luke 's lightsaber, which Rey had been using, from a distance and slice Snoke in half with it, killing him. After he and Rey defeat Snoke 's Praetorian Guards, Ren declares himself the new Supreme Leader of the First Order, pinning Snoke 's death on Rey to cover his actions.
Todd McCarthy of The Hollywood Reporter wrote, "Supreme Leader Snoke is a larger - than - life, vaguely Harry Potter - ish hologram voiced with deep gravity by Andy Serkis; the full weight of this character 's malignancy and dramatic power will presumably be better assessed in subsequent episodes. '' Richard Roeper of the Chicago Sun - Times called Serkis "the undisputed champion of the performance - capture roles ''. Though praising the "unobtrusive sophistication '' of the visual effects used to portray the character, Variety 's Justin Chang said that Serkis is "fine but not galvanizing '' in the role. Lindsay Bahr of the Associated Press deemed Snoke one of the "less memorable '' characters in The Force Awakens.
Serkis was nominated for an MTV Movie Award for Best Virtual Performance for the role.
|
complete list of happiest countries in the world | World Happiness Report - wikipedia
The World Happiness Report is an annual publication of the United Nations Sustainable Development Solutions Network which contains rankings of national happiness and analysis of the data from various perspectives. The World Happiness Report is edited by John F. Helliwell, Richard Layard and Jeffrey Sachs. The 2017 edition added three associate editors; Jan - Emmanuel De Neve, Haifang Huang, and Shun Wang. Authors of chapters include Richard Easterlin, Edward F. Diener, Martine Durand, Nicole Fortin, Jon Hall, Valerie Møller, and many others.
In July 2011, the UN General Assembly adopted resolution 65 / 309 Happiness: Towards a Holistic Definition of Development inviting member countries to measure the happiness of their people and to use the data to help guide public policy. On April 2, 2012, this was followed by the first UN High Level Meeting called Wellbeing and Happiness: Defining a New Economic Paradigm, which was chaired by UN Secretary General Ban Ki - moon and Prime Minister Jigme Thinley of Bhutan, a nation that adopted gross national happiness instead of gross domestic product as their main development indicator.
The first World Happiness Report was released on April 1, 2012 as a foundational text for the UN High Level Meeting: Well - being and Happiness: Defining a New Economic Paradigm, drawing international attention. The report outlined the state of world happiness, causes of happiness and misery, and policy implications highlighted by case studies. In 2013, the second World Happiness Report was issued, and since then has been issued on an annual basis with the exception of 2014. The report primarily uses data from the Gallup World Poll. Each annual report is available to the public to download on the World Happiness Report website.
In the reports, experts in fields including economics, psychology, survey analysis, and national statistics, describe how measurements of well - being can be used effectively to assess the progress of nations, and other topics. Each report is organized by chapters that delve deeper into issues relating to happiness, including mental illness, the objective benefits of happiness, the importance of ethics, policy implications, and links with the Organisation for Economic Co-operation and Development 's (OECD) approach to measuring subjective well - being and other international and national efforts.
As of March 2018, Finland was ranked the happiest country in the world.
World Happiness Reports were issued in 2012, 2013, 2015, 2016 (an update), 2017 and 2018. In addition to ranking countries ' happiness and well - being levels, each report has contributing authors and most focus on a subject. The data used to rank countries in each report is drawn from the Gallup World Poll, as well as other sources such as the World Values Survey, in some of the reports. The Gallup World Poll questionnaire measures 14 areas within its core questions: (1) business & economic, (2) citizen engagement, (3) communications & technology, (4) diversity (social issues), (5) education & families, (6) emotions (well - being), (7) environment & energy, (8) food & shelter, (9) government and politics, (10) law & order (safety), (11) health, (12) religion and ethics, (13) transportation, and (14) work.
The 2018 edition was released on 14 March and focused on the relation between happiness and migration. As per the 2018 Happiness Report, Finland is the happiest country in the world, with Norway, Denmark, Iceland, and Switzerland holding the next top positions. The World Happiness Report 2018 ranks 156 countries by their happiness levels, and 117 countries by the happiness of their immigrants. The main focus of this year 's report, in addition to its usual ranking of the levels and changes in happiness around the world, is on migration within and between countries. The overall rankings of country happiness are based on the pooled results from Gallup World Poll surveys from 2015 -- 2017, and show both change and stability. Four different countries have held the top spot in the last four reports: Denmark, Switzerland, Norway and now Finland. All the top countries tend to have high values for all six of the key variables that have been found to support well - being: income, healthy life expectancy, social support, freedom, trust and generosity. Among the top countries, differences are small enough that year - to - year changes in the rankings are to be expected.
The analysis of happiness changes from 2008 -- 2010 to 2015 -- 2017 shows Togo as the biggest gainer, moving up 17 places in the overall rankings from the 2015 report. The biggest loser is Venezuela, down 2.2 points. Five of the report 's seven chapters deal primarily with migration, as summarized in Chapter 1. For both domestic and international migrants, the report studies the happiness of those migrants and their host communities, and also of those in the countryside or in the country of origin. The results are generally positive. Perhaps the most striking finding of the whole report is that a ranking of countries according to the happiness of their immigrant populations is almost exactly the same as for the rest of the population. The immigrant happiness rankings are based on the full span of Gallup data from 2005 to 2017, sufficient to have 117 countries with more than 100 immigrant respondents. The ten happiest countries in the overall rankings also make up ten of the top eleven spots in the ranking of immigrant happiness. Finland is at the top of both rankings in this report, with the happiest immigrants, and the happiest population in general. While convergence to local happiness levels is quite rapid, it is not complete, as there is a ' footprint ' effect based on the happiness in each source country. This effect ranges from 10 % to 25 %. This footprint effect explains why immigrant happiness is less than that of the locals in the happiest countries, while being greater in the least happy countries.
The 2016 World Happiness Report - Rome Edition was issued in two parts as an update. Part one has four chapters: (1) Setting the Stage, (2) The Distribution of World Happiness, (3) Promoting Secular Ethics, and (4) Happiness and Sustainable Development: Concepts and Evidence. Part two has five chapters: (1) Inside the Life Satisfaction Blackbox, (2) Human Flourishing, the Common Good, and Catholic Social Teaching, (3) The Challenges of Public Happiness: An Historical - Methodological Reconstruction, (4) The Geography of Parenthood and Well - Being: Do Children Make Us Happy, Where and Why?, and (5) Multidimensional Well - Being in Contemporary Europe: An Analysis of the Use of a Self - Organizing Map Applied to SHARE Data.
Chapter 1, Setting the Stage is written by John F. Helliwell, Richard Layard, and Jeffrey Sachs. This chapter briefly surveys the happiness movement ("Increasingly, happiness is considered to be the proper measure of social progress and the goal of public policy. '') and gives an overview of the 2016 reports and a synopsis of both parts of the 2016 Update Rome Edition.
Chapter 2, The Distribution of World Happiness is written by John F. Helliwell, Haifang Huang, and Shun Wang. This chapter reports happiness levels of countries and proposes the use of inequalities of happiness among individuals as a better measure for inequality than income inequality, arguing that all people in a population fare better in terms of happiness when there is less inequality in happiness in their region. It includes data from the World Health Organization and World Development Indicators, as well as the Gallup World Poll. It debunks the notion that people rapidly adapt to changes in life circumstances and quickly return to an initial life satisfaction baseline, finding instead that changes in life circumstances such as government policies, major life events (unemployment, major disability) and immigration change people 's baseline life satisfaction levels. This chapter also addresses the measure for affect (feelings), finding that positive affect (happiness, laughter, enjoyment) has a much larger and more significant impact on life satisfaction than negative affect (worry, sadness, anger). The chapter also examines differences in happiness levels explained by the factors of (1) social support, (2) income, (3) healthy life, (4) trust in government and business, (5) perceived freedom to make life decisions and (6) generosity.
Chapter 3, Promoting Secular Ethics is written by Richard Layard. This chapter argues for a revival of an ethical life and world, harkening back to times when religious organizations were a dominant force. It calls on secular non-profit organizations to promote "ethical living in a way that provides inspiration, uplift, joy and mutual respect '', and gives examples of implementation by a non-profit founded by Richard Layard, the chapter author, Action for Happiness, which offers online information from positive psychology and Buddhist teachings.
Chapter 4, Happiness and Sustainable Development: Concepts and Evidence is written by Jeffrey Sachs. This chapter identifies ways that sustainable development indicators (economic, social and environmental factors) can be used to explain variations in happiness. It concludes with a report about an appeal to include subjective well - being indicators into the UN Sustainable Development Goals (SDGs).
Part Two of the 2016 Special Rome Edition was edited by Jeffrey Sachs, Leonardo Becchetti and Anthony Annett.
Chapter 1, Inside the Life Satisfaction Blackbox is written by Leonardo Becchetti, Luisa Corrado, and Paolo Sama. This chapter proposes using quality of life measurements (a broader range of variables than life evaluation) in lieu of, or in addition to, overall life evaluations in future World Happiness Reports.
Chapter 2, Human Flourishing, the Common Good, and Catholic Social Teaching is written by Anthony Annett. This chapter contains explanations for three theories: (1) It is human nature to broadly define happiness and understand the connection between happiness and the common good, (2) that the current understanding of individuality is stripped of ties to the common good, and (3) that there is a need to restore the common good as central value for society. The chapter also proposes Catholic school teachings as a model for restoring the common good as a dominant value.
Chapter 3, The Challenges of Public Happiness: An Historical - Methodological Reconstruction is written by Luigino Bruni and Stefano Zemagni. This chapter contemplates Aristotelian concepts of happiness and virtue as they pertain to and support the findings in the World Happiness Reports regarding the impact of social support, trust in government, and equality of happiness.
Chapter 4, The Geography of Parenthood and Well - Being: Do Children Make Us Happy, Where and Why? is written by Luca Stanca. This chapter examines other research findings that children do not add happiness to parents. Using data from the World Values Survey, it finds that, with the exception of widowed parents, having children has a negative effect on life satisfaction for parents in 2 / 3 of the 105 countries studied, with parents in richer countries suffering more. Once parents are old, life satisfaction increases. The chapter concludes that "existing evidence is not conclusive '', and with a statement that the causes of the low life satisfaction levels may be that in richer countries having children is valued less, while in poorer countries people suffer financial and time costs when they have children.
Chapter 5, Multidimensional Well - Being in Contemporary Europe: Analysis of the Use of a Self - Organizing Map Applied to SHARE Data is written by Mario Lucchini, Luca Crivelli and Sara della Bella. This chapter contains a study of well - being data from older European adults. It finds that the study 's results were consistent with the World Happiness Report 2016 update: positive affect (feelings) has a stronger impact on a person 's satisfaction with life than negative affect (feelings).
The 2015 World Happiness Report has eight chapters: (1) Setting the Stage, (2) The Geography of World Happiness, (3) How Does Subjective Well - being Vary Around the World by Gender and Age?, (4) How to Make Policy When Happiness is the Goal, (5) Neuroscience of Happiness, (6) Healthy Young Minds Transforming the Mental Health of Children, (7) Human Values, Civil Economy, and Subjective Well - being, and (8) Investing in Social Capital.
Chapter 1, Setting the Stage is written by John F. Helliwell, Richard Layard and Jeffrey Sachs. This chapter celebrates the success of the happiness movement ("Happiness is increasingly considered a proper means of social progress and public policy. ''), citing the OECD Guidelines on Measuring Subjective Well - being, a referendum in the EU requiring member nations to measure happiness, the success of the World Happiness Reports (with readership at about 1.5 million), and the adoption of happiness by the government of the United Arab Emirates, among other areas. It sets an aspiration for the inclusion of subjective well - being into the 2015 Sustainable Development Goals (not fulfilled), and outlines the 2015 report. It also addresses the use of the term happiness, identifying the cons (narrowness of the term, breadth of the term, flakiness), and defending the use of the term by citing the 2011 UN General Assembly Resolution 65 / 309 Happiness: Towards a Holistic Approach to Development, the April 2012 UN High Level Meeting: Well - being and Happiness: Defining a New Economic Paradigm, Bhutan 's Gross National Happiness philosophy, the term 's "convening and attention attracting power '', and the asset of a "double usage of happiness '' as both an emotional report and a life evaluation.
Chapter 2, The Geography of Happiness is written by John F. Helliwell, Haifang Huang and Shun Wang. This chapter reports the happiness of nations measured by life evaluations. It includes color coded maps and an analysis of six factors that account for the differences: (1) social support in terms of someone to count on in times of need, (2) GDP per capita (income), (3) life expectancy (in terms of healthy years), (4) sense of corruption in government and business (trust), (5) perceived freedom to make life decisions, and (6) generosity. The first three factors were found to have the biggest impact on a population 's happiness. Crises (natural disasters and economic crises), the quality of governance, and social support were found to be the key drivers for changes in national happiness levels, with the happiness of nations undergoing a crisis in which people have a strong sense of social support falling less than in nations where people do not have a strong sense of social support.
Chapter 3, How Does Subjective Well - being Vary Around the Globe by Gender and Age? is written by Nicole Fortin, John F. Helliwell and Shun Wang. This chapter uses data for 12 experiences: happiness (the emotion), smiling or laughing, enjoyment, feeling safe at night, feeling well rested, and feeling interested, as well as anger, worry, sadness, depression, stress and pain, to examine differences by gender and age. Findings reported include that there is not a lot of difference in life evaluations between men and women across nations or within ages in a nation (women have slightly higher life evaluations than men: 0.09 on a ten - point scale). It reports that overall happiness follows a U shape with age on the x axis and happiness on the y, with the low point being middle age (45 - 50) for most nations (in some nations happiness does not go up much in later life, so the shape is more of a downhill slide), and that the U shape holds for feeling well rested in all regions. It finds that men generally feel safer at night than women but, when comparing countries, people in Latin America have the lowest sense of safety at night, while people in East Asia and Western Europe have the highest sense of safety at night. It also finds that as women age their sense of happiness declines and stress increases but worry decreases; that as all people age their laughter, enjoyment and finding something of interest also decline; that anger is felt everywhere almost equally by men and women; that stress peaks in middle age; and that women experience depression more often than men. It finds that where older people are happier, there is a sense of social support, freedom to make life choices and generosity (and income does not factor in as heavily as these three factors).
Chapter 4, How to Make Policy When Happiness is the Goal is written by Richard Layard and Gus O'Donnell. This chapter advocates for a "new form of cost - benefit analysis '' for government expenditures in which a "critical level of extra happiness '' yielded by a project is established. It contemplates the prioritization of increasing happiness of the happy vs. reducing misery of the miserable, as well as the issues of discount rate (weight) for the happiness of future generations. It includes a technical annex with equations for calculating the maximization for happiness in public expenditure, tax policy, regulations, the distribution of happiness and a discount rate.
Chapter 5, Neuroscience of Happiness is written by Richard J. Davidson and Brianna S. Schuyler. This chapter reports on research in brain science and happiness, identifying four aspects that account for happiness: (1) sustained positive emotion, (2) recovery from negative emotion (resilience), (3) empathy, altruism and pro-social behavior, and (4) mindfulness (mind - wandering / affective sickness). It concludes that the brain 's elasticity indicates that one can change one 's levels of happiness and life satisfaction (separate but overlapping positive consequences) by experiencing and practicing mindfulness, kindness, and generosity; and it calls for more research on these topics.
Chapter 6, Healthy Young Minds: Transforming the Mental Health of Children is written by Richard Layard and Ann Hagell. This chapter identifies emotional development as of primary importance (compared to academic and behavioral factors) in a child 's development and in determining whether a child will be a happy and well - functioning adult. It then focuses on the issue of mental illness in children, citing the statistic that while worldwide 10 % of the world 's children (approximately 200 million) suffer from diagnosable mental health problems, even in the richest nations only one quarter of these children are in treatment. It identifies the action steps for treating children with mental health problems: local community - led child well - being programs, training health care professionals to identify mental health problems in children, parity of esteem for mental and physical problems and treatment, access to evidence - based mental health treatment for families and children, promotion of well - being in schools with well - being codes that inform the organizational behavior of schools, training teachers to identify mental health problems in children, teaching of life skills, measuring of children 's well - being by schools, development of free apps available internationally to treat mental illness in teens, and inclusion of mental health with the goal of physical health in the Sustainable Development Goals. The chapter lists the benefits of treating children 's mental health: improved educational performance, reduction in youth crimes, improved earnings and employment in adulthood, and better parenting of the next generation.
Chapter 7, Human Values, Civil Economy and Subjective Well - being is written by Leonardo Becchetti, Luigino Bruni and Stefano Zamagni. This chapter begins with a critique of the field of economics ("Economics today looks like physics before the discovery of electrons ''), identifying reductionism in which humans are conceived of as 100 % self - interested individuals (economic reductionism), profit maximization is prioritized over all other interests (corporate reductionism), and societal values are narrowly identified with GDP and ignore environmental, cultural, spiritual and relational aspects (value reductionism). The chapter then focuses on a theoretical approach termed the "Civil Economy paradigm '', and research about it demonstrating that going beyond reductionism leads to greater socialization for people and communities, and a rise in the priority of the values of reciprocity, friendship, trustworthiness, and benevolence. It makes the argument that positive social relationships (trust, benevolence, shared social identities) yield happiness and positive economic outcomes. It ends with recommendations for a move from the dominant model of elite - competitive democracy to a participatory / deliberative model of democracy with bottom - up political and economic participation and incentives for non-selfish actions (altruistic people) and corporations with wider goals than pure profit (ethical and environmentally responsible corporations).
Chapter 8, Investing in Social Capital is written by Jeffrey Sachs. This chapter focuses on "pro-sociality '' ("individuals making decisions for the common good that may conflict with short - run egoistic incentives ''). It identifies pro-social behaviors: honesty, benevolence, cooperation and trustworthiness. It recommends investment in social capital through education, moral instruction, professional codes of conduct, public censure and condemnation of violators of public trust, and public policies to narrow income inequalities for countries where there is generalized distrust of government and business, pervasive corruption and lawless behavior (such as tax evasion).
The 2013 World Happiness Report has eight chapters: (1) Introduction, (2) World Happiness: Trends, Explanations and Distribution, (3) Mental Illness and Unhappiness, (4) The Objective Benefits of Subjective Well - being, (5) Restoring Virtue Ethics in the Quest for Happiness, (6) Using Well - being as a Guide to Policy, (7) The OECD Approach to Measuring Subjective Well - being, and (8) From Capabilities to Contentment: Testing the Links between Human Development and Life Satisfaction.
Chapter 1, Introduction is written by John F. Helliwell, Richard Layard and Jeffrey Sachs. It synopsizes the chapters and gives a discussion of the term happiness.
Chapter 2, World Happiness: Trends, Explanations and Distributions is written by John F. Helliwell and Shun Wang. It provides ratings among countries and regions for satisfaction with life using the Cantril Ladder, positive and negative affect (emotions), and log of GDP per capita, years of healthy life expectancy, having someone to count on in times of trouble, perceptions of corruption, prevalence of generosity, and freedom to make life choices.
Chapter 3, Mental Illness and Unhappiness is written by Richard Layard, Dan Chisholm, Vikram Patel, and Shekhar Saxena. It identifies the far - ranging prevalence of mental illness around the world (10 % of the world 's population at one time) and provides the evidence showing that "mental illness is a highly influential - and... the single biggest - determinant of misery. '' It concludes with examples of interventions implemented by countries around the world.
Chapter 4, The Objective Benefits of Subjective Well - being is written by Jan - Emmanuel de Neve, Ed Diener, Louis Tay and Cody Xuereb. It provides an explanation of the benefits of subjective well - being (happiness) on health & longevity, income, productivity & organizational behavior, and individual & social behavior. It touches on the role of happiness in human evolution through rewarding behaviors that increase evolutionary success and are beneficial to survival.
Chapter 5, Restoring Virtue Ethics in the Quest for Happiness is written by Jeffrey Sachs. It argues that "a renewed focus on the role of ethics, and in particular of virtuous behavior, in happiness could lead us to new and effective strategies for raising individual, national and global well - being, '' looking to the eightfold noble path (the teachings of the dharma handed down in the Buddhist tradition that encompass wise view / understanding, wise intention, wise speech, wise action, wise livelihood, and effort, concentration and mindfulness), Aristotelian philosophy (people are social animals, "with individual happiness secured only within a political community... (which) should organize its institutions to promote virtuous behavior ''), and the Christian doctrine of St. Thomas Aquinas ("placing happiness in the context of servicing God 's will ''). It gives an explanation of the evolution of the field of economics up to the "failures of hyper - commercialism '' and suggests an antidote based on four global ethical values: (1) non-violence and respect for life, (2) justice and solidarity, (3) honesty and tolerance, and (4) mutual esteem and partnership.
Chapter 6, Using Well - being as a Guide to Public Policy is written by Gus O'Donnell. This chapter gives a status report on the issues governments grapple with in adopting well - being and happiness measures and goals for policy, from understanding the data or establishing whether a specific policy improves well - being, to figuring out how to "incorporate well - being into standard policy making. '' It provides examples of efforts to measure happiness and well - being from Bhutan, New Zealand, South Africa, the UK, and cities and communities in the USA, Canada, Australia and Tasmania. It identifies the key policy areas of health, transport and education for policy makers to focus on, and includes discussions about interpersonal comparability (concentrating on "getting people out of misery '' instead of making happy people happier), the discount rate (do we invest more in happiness for people today or in the future?) and putting a monetary value on happiness for policy trade off decisions (e.g. if a 10 % reduction in noise increases SWB by one unit, then we can infer that the 10 % reduction is "worth '' $1,000, when $1,000 would increase a person 's SWB by one unit).
Chapter 7, The OECD Approach to Measuring Subjective Well - being is written by Martine Durand and Conal Smith. This chapter was written in the same year the OECD issued its Guidelines on Measuring Subjective Well - being, and is a synopsis of them. It includes a definition of subjective well - being: life evaluation (a person 's reflection on their life and life circumstances), affect (positive and negative emotions) and eudaimonia; core measures, a discussion on data collection processes, survey and sample design, other aspects of using subjective well - being metrics, and ideas on how policy - makers can use subjective well - being data. It surveys the status of wealthy countries ' subjective well - being data collection processes, and identifies future directions of experimentation and better income measures, citing the Easterlin Paradox as the basis for this call.
Chapter 8, From Capabilities to Contentment: Testing the Links between Human Development and Life Satisfaction is written by Jon Hall. This chapter explains the components of human development using objective metrics: (1) education, health and command over income and nutrition resources, (2) participation and freedom, (3) human security, (4) equity, and (5) sustainability; key findings of the Human Development Index (HDI) (a "weak relationship between economic growth and changes in health and education '' as well as life expectancy); and examines the relationship between the HDI and happiness, finding that (1) components of the HDI "correlate strongly with better life evaluations, '' and (2) there is a strong relationship between life evaluation and the "non-income HDI. '' It contemplates measurement of conditions of life beyond the HDI that are important to well - being: (1) better working conditions, (2) security against crime and physical violence, (3) participation in economic and political activities, (4) freedom and (5) inequality. It concludes with the statement that the HDI and SWB have similar approaches and are importantly connected, with the two disciplines offering alternative and complementary views of development.
The 2012 World Happiness Report was issued at the UN High Level Meeting Well - being and Happiness: Defining a New Economic Paradigm by editors John F. Helliwell, Richard Layard and Jeffrey Sachs. Part one has an introduction (chapter 1) and three further chapters: (2) The State of World Happiness, (3) The Causes of Happiness and Misery, and (4) Some Policy Implications. Part two has three chapters, each a case study, of Bhutan, the United Kingdom Office of National Statistics, and the OECD.
Chapter 1, The Introduction is by Jeffrey Sachs and references Buddha and Aristotle, identifies today 's era as the Anthropocene, and identifies the reasons GDP is not a sufficient measure to guide governments and society.
Chapter 2, The State of World Happiness, is written by John F. Helliwell and Shun Wang, and contains a discussion of subjective well - being measures that ranges from the validity of subjective well - being measures to the seriousness of happiness, happiness set points and cultural comparisons, and it includes data from the Gallup World Poll, European Social Survey, and the World Values Survey.
Chapter 3, The Causes of Happiness and Misery is written by Richard Layard, Andrew Clark, and Claudia Senik, and contemplates research on the impact on happiness of the external factors of income, work, community and governance, values and religion, as well as the internal factors of mental health, physical health, family experience, education, and gender and age.
Chapter 4, Some Policy Implications, written by John F. Helliwell, Richard Layard and Jeffrey Sachs, calls for a greater understanding on how governments can measure happiness, the determinants of happiness, and use of happiness data and findings about determinants for policy purposes. It also highlights the role of GDP ("GDP is important but not all that is important '') as a guide to policy makers, the importance that policy makers should place on providing opportunities for employment; the role of happiness in policy making ("Making happiness an objective of governments would not therefore lead to the "servile society, '' and indeed quite the contrary... Happiness comes from an opportunity to mold one 's own future, and thus depends on a robust level of freedom. "); the role of values and religion ('' In well - functioning societies there is widespread support for the universal value that we should treat others as we would like them to treat us. We need to cultivate social norms so that the rich and powerful are never given a feeling of impunity vis - à - vis the rest of society. "); calls for wider access to psychological therapies in a section on mental health citing the fact that one third of all families are affected by mental illness; identifies improvements in physical health as "probably the single most important factor that has improved human happiness '' and calls out the rich - poor gap in health care between rich and poor countries; calls on workplace and governmental policies that encourage work - life balance and reduce stress, including family support and child care; and states that "Universal access to education is widely judged to be a basic human right... '' The chapter concludes with a philosophical discussion.
Chapter 5, Case Study: Bhutan Gross National Happiness and the GNH Index is written by Karma Ura, Sabina Alkire, and Tsoki Zangmo. It gives a short history of the development of the Gross National Happiness (GNH) concept in Bhutan, and an explanation of the GNH index, its data collection and data analysis process, including the rating methodology used to determine if an individual experiences happiness sufficiency levels, as well as the policy and lifestyle implications.
Chapter 6, Case Study: ONS Measuring Subjective Well - being: The UK Office of National Statistics Experience is written by Stephen Hicks. It covers the basis for the creation of the Measuring National Well - being Programme in the UK 's Office of National Statistics (ONS), and the development of their methodology for measuring well - being.
Chapter 7, Case Study: OECD Guidelines on Measuring Subjective Well - being is an explanation of the process and rationale the OECD was undertaking to develop its Guidelines on Measuring Subjective Well - being, which it issued in 2013.
Data is collected from people in over 150 countries. Each variable measured reveals a population - weighted average score on a scale running from 0 to 10 that is tracked over time and compared against other countries. These variables currently include: real GDP per capita, social support, healthy life expectancy, freedom to make life choices, generosity, and perceptions of corruption. Each country is also compared against a hypothetical nation called Dystopia. Dystopia represents the lowest national averages for each key variable and is, along with residual error, used as a regression benchmark.
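The decomposition described above can be illustrated with a minimal sketch. It is a hypothetical example, not code or data from any actual report: the factor names, the baseline value and every number below are placeholder assumptions, and the sketch only shows how a country 's ladder score can be presented as a Dystopia - plus - residual baseline with the estimated contributions of the six variables added on top.

```python
# Illustrative sketch only -- not code from the World Happiness Report itself.
# All names and numbers below are hypothetical placeholders.

DYSTOPIA_PLUS_RESIDUAL = 2.30  # hypothetical baseline (Dystopia + residual) for one country

# Hypothetical contributions of the six key variables to one country's score,
# expressed on the same 0-10 "ladder" scale used for the overall ranking.
factor_contributions = {
    "gdp_per_capita": 1.30,
    "social_support": 1.55,
    "healthy_life_expectancy": 0.80,
    "freedom_to_make_life_choices": 0.60,
    "generosity": 0.20,
    "perceptions_of_corruption": 0.35,
}

# The published score is then presented as the baseline plus the
# sum of the factor contributions.
ladder_score = DYSTOPIA_PLUS_RESIDUAL + sum(factor_contributions.values())

print(f"Illustrative ladder score: {ladder_score:.2f}")
```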
As per the 2018 Happiness Index, Finland is the happiest country in the world. Norway, Denmark, Iceland and Switzerland hold the next top positions. The report was published on 14 March 2018 by the UN. The World Happiness Report is a landmark survey of the state of global happiness. The World Happiness Report 2018, which ranks 156 countries by their happiness levels, and 117 countries by the happiness of their immigrants, was released on 14 March at a launch event at the Pontifical Academy of Sciences in the Vatican.
The 2017 report features the happiness score averaged over the years 2014 - 2016. For that timespan, Norway was the overall happiest country in the world, even though oil prices had dropped. Close behind were Denmark, Iceland and Switzerland in a tight pack. Four of the top five countries follow the Nordic model. All the top ten countries had high scores in the six categories. The ranked follow - on countries in the top ten are: Finland, the Netherlands, Canada, New Zealand, Australia, and Sweden.
Table of data for 2017:
Legend:
Italics: States with limited recognition and disputed territories
From an econometric perspective, some statisticians argue the statistical methodology mentioned in the first world happiness report using 9 domains is unreliable.
Others argue that the World Happiness Report model uses a limited subset of the indicators used by other models and does not use an index function like peer econometric models such as the Gross National Well - being Index of 2005, the Sustainable Society Index of 2008, the OECD Better Life Index of 2011, the Bhutan Gross National Happiness Index of 2012, and the Social Progress Index of 2013.
Other critics point out that happiness surveys are contradictory in their rankings because of varying methodologies. They also argue that the surveys are inherently flawed: "No matter how carefully parsed the data may be, a survey based on unreliable answers is n't worth a lot. '' For instance, "A 2012 Gallup survey on happiest countries had a completely different list, with Panama first, followed by Paraguay, El Salvador, and Venezuela. '' They also cite a Pew survey of 43 countries in 2014 (which excluded most of Europe) that had Mexico, Israel and Venezuela finishing first, second and third.
Others point out that the ranking results are counterintuitive when it comes to some dimensions; for instance, "if rate of suicide is used as a metric for measuring unhappiness, (the opposite of happiness), then quite some of the countries which are ranked among the top 20 happiest countries in the world will also feature among the top 20 with the highest suicide rates in the world. ''
From a philosophical perspective, critics argue that measuring the happiness of a grouping of people is misleading because happiness is an individual matter. They state that for "the Dalai Lama, Gandhi, Tolstoy and several others, happiness is an individual choice that is independent of the society, its structures and enabling or dis - enabling conditions and not something to be measured using variables that can only capture a nation 's well - being. This means therefore that one can not really talk of a happy or unhappy nation, but of happy or unhappy individuals. ''
|
who won the most matches in ipl 2008 | List of Indian Premier League records and statistics - wikipedia
This is an overall list of statistics and records of the Indian Premier League, a Twenty20 cricket competition based in India.
Note: The article 's statistical tables are not reproduced here; they include a list of hat - tricks in IPL and records qualified by a minimum of 125 balls faced. Most tables were last updated on 21 May 2017, with some updated on 8 April 2018.
|
which states have more than one nfl team | List of American and Canadian cities by number of major professional sports franchises - wikipedia
This is a list of metropolitan areas in the United States and Canada categorized by the number of major professional sports franchises in their metropolitan areas.
The major professional sports leagues, or simply major leagues, in the United States and Canada are the highest professional competitions of team sports in the two countries. Although individual sports such as golf, tennis, and auto racing are also very popular, the term is usually limited to team sports.
The term "major league '' was first used in 1921 in reference to Major League Baseball (MLB), the top level of professional American baseball. Today, the major northern North America professional team sports leagues are Major League Baseball (MLB), the National Basketball Association (NBA), the National Football League (NFL), and the National Hockey League (NHL).
These four leagues are also commonly referred to as the Big Four. Each of these is the richest professional club competition in its sport worldwide. The best players can become cultural icons in both countries and elsewhere in the world, because the leagues enjoy a significant place in popular culture in the U.S. and Canada. The NFL has 32 teams, and the others have 30 each, although the NHL will expand in its 2017 -- 18 season with the Vegas Golden Knights in Las Vegas, Nevada, a market the Big Four had historically avoided because of the city 's notorious reputation of being a hub for illegal gambling. The vast majority of major league teams are concentrated in the most populous metropolitan areas of the United States.
Baseball, football, and hockey have had professional leagues for over 100 years; early leagues such as the National Association, Ohio League, and National Hockey Association formed the basis of the modern MLB, NFL and NHL respectively. Basketball is a relatively new development; the NBA evolved from the National Basketball League and its splinter group the Basketball Association of America, taking on its current form in 1949.
Other notable leagues include Major League Soccer (MLS) and the Canadian Football League (CFL), both of which compare to the other four leagues in certain metrics but not in others. Every major league, including the CFL and MLS, draws 15,000 or more fans in attendance per game on average as of 2015. Therefore, this list includes a ranking by teams in the Big Four (B4), and a separate ranking also including teams in the CFL and MLS, called the Big Six (B6).
Though teams are listed here by metropolitan area, the distribution and support of teams within an area can reveal regional fractures below that level, whether by neighborhood, rival cities within a media market or separate markets entirely. Baseball teams provide illustrations for several of these models. In New York City, the Yankees are popularly dubbed the "Bronx Bombers '' for their home borough and generally command the loyalties of fans from the Bronx, parts of Brooklyn, Staten Island, Manhattan, Long Island, parts of North Jersey and Westchester County, New York, while the Mets play in Queens and draw support from Queens, Brooklyn and parts of Long Island, revealing a split by neighborhood. The San Francisco Giants and Oakland Athletics represent rival cities within the Bay Area, a single media market. Though the Washington Nationals and Baltimore Orioles share a metro area, their cities anchor separate media markets and hold distinctly separate cultural identities. In Los Angeles, the Lakers and Clippers share an arena (Staples Center), and media coverage is split amongst different broadcasters in the metro area.
With the Vegas Golden Knights, based in the Las Vegas Valley, the 23rd largest market in North America, having joined the National Hockey League in 2017, the largest urban areas without a team in one of the big four leagues are the 32nd - ranked Austin region and the 37th - ranked Virginia Beach - Norfolk region. The smallest metropolitan area to have a Big Four team is Green Bay (Green Bay Packers), which is the 146th largest metropolitan area, though much of its fan base is drawn from Milwaukee, which is 120 miles away and the 38th largest market. The smallest stand - alone metropolitan area to have a Big Four team is the 78th - largest market, Winnipeg (Winnipeg Jets), while the 54th - largest market, New Orleans, is the smallest metropolitan area to have more than one Big Four team (New Orleans Pelicans and New Orleans Saints). Foxboro, Massachusetts, population 16,685 as of the 2010 Census, is a small town which hosts two major - league teams (the New England Patriots and the New England Revolution.) Foxboro is considered part of the Boston metropolitan area, even though it is actually slightly closer to Providence, Rhode Island.
The following list contains all urban areas in the United States and Canada containing at least one team in any of the six major leagues. The table contains the population rank based on their urban population as compiled by Demographia, the number of teams in the big four leagues (B4) and the big six leagues (B6), and the city 's teams in the National Football League (NFL), Major League Baseball (MLB), the National Basketball Association (NBA), the National Hockey League (NHL), Major League Soccer (MLS), and the Canadian Football League (CFL).
The number of Big Six teams based on their home state is shown in the map below:
The number of Big Six teams based on their home state / province / territory is shown in the map below:
|
in romeo and juliet why does rosaline not love romeo | Rosaline - wikipedia
Rosaline (/ ˈrɒzəlɪn / or / ˈrɒzəliːn /) is an unseen character and niece of Lord Capulet in William Shakespeare 's tragedy Romeo and Juliet (1597). Although silent, her role is important. Romeo is at first deeply in love with Rosaline and expresses his dismay at her not loving him back. Romeo first spots Juliet while trying to catch a glimpse of Rosaline at a gathering hosted by the Capulet family.
Scholars generally compare Romeo 's short - lived love of Rosaline with his later love of Juliet. The poetry Shakespeare writes for Rosaline is much weaker than that for Juliet. Scholars believe Romeo 's early experience with Rosaline prepares him for his relationship with Juliet. Later performances of Romeo and Juliet have painted different pictures of Romeo and Rosaline 's relationship, as filmmakers have experimented with making Rosaline a more visible character.
Before Romeo meets Juliet, he loves Rosaline, Capulet 's niece and Juliet 's cousin. He describes her as wonderfully beautiful: "The all - seeing sun / ne'er saw her match since first the world begun. '' Rosaline, however, chooses to remain chaste; Romeo says: "She hath forsworn to love, and in that vow / Do I live dead that live to tell it now. '' This is the source of his depression, and he makes his friends unhappy; Mercutio comments: "That same pale, hard - hearted wench, that Rosaline, torments him so that he will sure run mad. '' Benvolio urges Romeo to sneak into a Capulet gathering where, he claims, Rosaline will look like "a crow '' alongside the other beautiful women. Romeo agrees, but doubts Benvolio 's assessment. After Romeo sees Juliet his feelings suddenly change: "Did my heart love till now? Forswear it, sight / For I ne'er saw true beauty till this night. '' Because their relationship is sudden and secret, Romeo 's friends and Friar Laurence continue to speak of his affection for Rosaline throughout much of the play.
Rosaline is a variant of Rosalind, a name of Old French origin: (hros = "horse '', lind = "soft, tender ''). When it was imported into English it was thought to be from the Latin rosa linda ("lovely rose ''). Romeo sees Rosaline as the embodiment of the rose because of her name and her apparent perfections. The name Rosaline commonly appears in Petrarchan sonnets, a form of poetry Romeo uses to woo Juliet and to describe both Rosaline and Juliet. Since Rosaline is unattainable, she is a perfect subject for this style; but Romeo 's attempt at it is forced and weak. By the time he meets Juliet his poetic ability has improved considerably.
Gender studies critics have argued that Rosaline 's name suggests that Romeo never really forgets her but rather replaces her with Juliet. Thus, when Juliet cries "What 's in a name? A rose by any other name would smell as sweet, '' she is ironically expressing Romeo 's own view of her as a substitute for Rosaline. That is to say, Rosaline, replaced in name only by Juliet, is just as sweet to Romeo. Gender critics also note that the arguments used to dissuade Romeo from pursuing Rosaline are similar to the themes of Shakespeare 's procreation sonnets. In these sonnets Shakespeare urges the man (who can be equated with Romeo) to find a woman with whom to procreate -- a duty he owes to society. Rosaline, it seems, is distant and unavailable except in the mind, similarly bringing no hope of offspring. As Benvolio argues, she is best replaced by someone who will reciprocate. Rosaline reveals similarities to the subject of the sonnets when she refuses to break her vow of chastity. Her name may be referred to in the first sonnet when the young man is described as "beauties Rose. '' This line ties the young man to both Rosaline and Romeo in Juliet 's "What 's in a name? '' soliloquy. When Juliet says "... that which we call a rose / By any other name would smell as sweet '', she may be raising the question of whether there is any difference between the beauty of a man and the beauty of a woman.
Rosaline is used as a name for only one other Shakespearean character -- one of the main female figures in Love 's Labours Lost (1598), although Rosalind is the name of the main female character in As You Like It. Scholars have found similarities between them: both are described as beautiful, and both have a way of avoiding men 's romantic advances. Rosaline in Love 's Labours Lost constantly rebuffs her suitor 's advances, and Romeo 's Rosaline remains distant and chaste in his brief descriptions of her. These similarities have led some to wonder whether they are based on a woman Shakespeare actually knew, possibly the Dark Lady described in his sonnets, but there is no strong evidence of this connection.
Analysts note that Rosaline acts as a plot device, by motivating Romeo to sneak into the Capulet party where he will meet Juliet. Without her, their meeting would be unlikely. Rosaline thus acts as the impetus to bring the "star - cross 'd lovers '' to their deaths -- she is crucial in shaping their fate (a common theme of the play). Ironically, she remains oblivious of her role.
Literary critics often compare Romeo 's love for Rosaline with his feelings for Juliet. Some see Romeo 's supposed love for Rosaline as childish as compared with his true love for Juliet. Others argue that the apparent difference in Romeo 's feelings shows Shakespeare 's improving skill. Since Shakespeare is thought to have written early drafts of the play in 1591, and then picked them up again in 1597 to create the final copy, the change in Romeo 's language for Rosaline and Juliet may mirror Shakespeare 's increased skill as a playwright: the younger Shakespeare describing Rosaline, and the more experienced describing Juliet. In this view, a careful look at the play reveals that Romeo 's love for Rosaline is not as petty as usually imagined.
Critics also note the ways in which Romeo 's relationship with Rosaline prepares him for meeting Juliet. Before meeting Rosaline, Romeo despises all Capulets, but afterwards looks upon them more favourably. He experiences the dual feelings of hate and love in the one relationship. This prepares him for the more mature relationship with Juliet -- one fraught by the feud between Montagues and Capulets. Romeo expresses the conflict of love and hate in Act 1, Scene 1, comparing his love for Rosaline with the feud between the two houses:
Here 's much to do with hate, but more with love.
Why, then, O brawling love! O loving hate!
O any thing, of nothing first create!
O heavy lightness! serious vanity!
Mis - shapen chaos of well - seeming forms!
Feather of lead, bright smoke, cold fire, sick health!
Still - waking sleep, that is not what it is!
This love feel I, that feel no love in this.
Dost thou not laugh?
Psychoanalytic critics see signs of repressed childhood trauma in Romeo 's love for Rosaline. She is of a rival house and is sworn to chastity. Thus he is in an impossible situation, one which will continue his trauma if he remains in it. Although he acknowledges its ridiculous nature, he refuses to stop loving her. Psychoanalysts view this as a re-enactment of his failed relationship with his mother: Rosaline 's absence is symbolic of his mother 's absence and lack of affection for him, although this reading sits uneasily with the play itself, in which Romeo 's mother dies of grief after his banishment, suggesting that she loved him deeply. Romeo 's love for Juliet is similarly hopeless, for she is a Capulet and Romeo pursues his relationship with her; the difference is that Juliet reciprocates.
Rosaline has been portrayed in various ways over the centuries. Theophilus Cibber 's 1748 version of Romeo and Juliet replaced references to Rosaline with references to Juliet. This, according to critics, took out the "love at first sight '' moment at the Capulet feast. In the 1750s, actor and theatre director David Garrick also eliminated references to Rosaline from his performances, as many saw Romeo 's quick replacement of her as immoral. However, in Franco Zeffirelli 's 1968 film version of Romeo and Juliet, Romeo sees Rosaline (played by Paola Tedesco) first at the Capulet feast and then Juliet, of whom he becomes immediately enamoured. This scene suggests that Romeo 's love for Rosaline was brief and superficial. Rosaline also appears in Renato Castellani 's 1954 film version. In a brief non-Shakespearean scene, Rosaline (Dagmar Josipovitch) gives Romeo a mask at Capulet 's celebration, and urges him to leave disguised before harm comes to him. Other filmmakers keep Rosaline off - camera in stricter accordance with Shakespeare 's script. Rosaline also appears in the 2013 film adaptation of Romeo & Juliet.
Robert Nathan 's 1966 romantic comedy, Juliet in Mantua, presents Rosaline as a fully developed character. In this sequel, in which Romeo and Juliet did not die, the pair live ten years later in exile in Mantua. After they are forgiven and return to Verona, they learn that Rosaline is now married to County Paris, and both couples must confront their disillusionment with their marriages. Another play, After Juliet, written by Scottish playwright Sharman Macdonald, tells the story of Rosaline after Romeo dies. A main character in this play, she struggles with her loss and turns away the advances of Benvolio, who has fallen in love with her. Macdonald 's daughter, Keira Knightley, played Rosaline in the play 's 1999 premiere. The 2012 young adult novel "When You Were Mine '' by Rebecca Serle sets Rosaline 's story in a contemporary high school. Rosaline and Romeo (renamed Rob) have been best friends since childhood and are just beginning to fall in love when Rosaline 's cousin, Juliet, moves back into town and sets her sights on Rob. Rosaline is also an important character in "Plague on both your houses! '' (1994), a sequel by Grigori Gorin.
Still Star - Crossed (2017) is a period drama television series created by Heather Mitchell and based on the book of the same name by Melinda Taub. It follows Rosaline (played by Lashana Lynch), who is betrothed against her will to Benvolio of Montague by Prince Escalus in order to end the feud between the two families; both resolve to find a way to end the violence without going through with the union.
|
article vi of the us constitution establishes that federal law is | Supremacy Clause - wikipedia
The Supremacy Clause of the United States Constitution (Article VI, Clause 2) establishes that the Constitution, federal laws made pursuant to it, and treaties made under its authority, constitute the supreme law of the land. It provides that state courts are bound by the supreme law; in case of conflict between federal and state law, the federal law must be applied. Even state constitutions are subordinate to federal law. In essence, it is a conflict - of - laws rule specifying that certain federal acts take priority over any state acts that conflict with federal law. In this respect, the Supremacy Clause follows the lead of Article XIII of the Articles of Confederation, which provided that "Every State shall abide by the determination of the United States in Congress Assembled, on all questions which by this confederation are submitted to them. '' A constitutional provision announcing the supremacy of federal law, the Supremacy Clause assumes the underlying priority of federal authority, at least when that authority is expressed in the Constitution itself. No matter what the federal government or the states might wish to do, they have to stay within the boundaries of the Constitution. This makes the Supremacy Clause the cornerstone of the whole American political structure.
This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.
In Federalist No. 44, James Madison defends the Supremacy Clause as vital to the functioning of the nation. He noted that state legislatures were invested with all powers not specifically defined in the Constitution, but also said that having the federal government subservient to various state constitutions would be an inversion of the principles of government, concluding that if supremacy were not established "it would have seen the authority of the whole society everywhere subordinate to the authority of the parts; it would have seen a monster, in which the head was under the direction of the members ''.
The constitutional principle derived from the Supremacy Clause is federal preemption. Preemption applies regardless of whether the conflicting laws come from legislatures, courts, administrative agencies, or constitutions. For example, the Voting Rights Act of 1965, an act of Congress, preempts state constitutions, and Food and Drug Administration regulations may preempt state court judgments in cases involving prescription drugs.
Congress has preempted state regulation in many areas. In some cases, such as the 1976 Medical Device Regulation Act, Congress preempted all state regulation. In others, such as labels on prescription drugs, Congress allowed federal regulatory agencies to set federal minimum standards, but did not preempt state regulations imposing more stringent standards than those imposed by federal regulators. Where rules or regulations do not clearly state whether or not preemption should apply, the Supreme Court tries to follow lawmakers ' intent, and prefers interpretations that avoid preempting state laws.
In Ware v. Hylton, 3 U.S. (3 Dall.) 199 (1796), the United States Supreme Court for the first time applied the Supremacy Clause to strike down a state statute. Virginia had passed a statute during the Revolutionary War allowing the state to confiscate debt payments by Virginia citizens to British creditors. The Supreme Court found that this Virginia statute was inconsistent with the Treaty of Paris with Britain, which protected the rights of British creditors. Relying on the Supremacy Clause, the Supreme Court held that the treaty superseded Virginia 's statute, and that it was the duty of the courts to declare Virginia 's statute "null and void ''.
In Marbury v. Madison, 5 U.S. 137 (1803), the Supreme Court held that Congress can not pass laws that are contrary to the Constitution, and it is the role of the Judicial system to interpret what the Constitution permits. Citing the Supremacy Clause, the Court found Section 13 of the Judiciary Act of 1789 to be unconstitutional to the extent it purported to enlarge the original jurisdiction of the Supreme Court beyond that permitted by the Constitution.
In Martin v. Hunter 's Lessee, 14 U.S. 304 (1816), and Cohens v. Virginia, 19 U.S. 264 (1821), the Supreme Court held that the Supremacy Clause and the judicial power granted in Article III give the Supreme Court the ultimate power to review state court decisions involving issues arising under the Constitution and laws of the United States. Therefore, the Supreme Court has the final say in matters involving federal law, including constitutional interpretation, and can overrule decisions by state courts.
In McCulloch v. Maryland, 17 U.S. (4 Wheat.) 316 (1819), the Supreme Court reviewed a tax levied by Maryland on the federally incorporated Bank of the United States. The Court found that if a state had the power to tax a federally incorporated institution, then the state effectively had the power to destroy the federal institution, thereby thwarting the intent and purpose of Congress. This would make the states superior to the federal government. The Court found that this would be inconsistent with the Supremacy Clause, which makes federal law superior to state law. The Court therefore held that Maryland 's tax on the bank was unconstitutional because the tax violated the Supremacy Clause.
In Ableman v. Booth, 62 U.S. 506 (1859), the Supreme Court held that state courts can not issue rulings that contradict the decisions of federal courts, citing the Supremacy Clause, and overturning a decision by the Supreme Court of Wisconsin. Specifically, the court found it was illegal for state officials to interfere with the work of U.S. Marshals enforcing the Fugitive Slave Act or to order the release of federal prisoners held for violation of that Act. The Supreme Court reasoned that because the Supremacy Clause established federal law as the law of the land, the Wisconsin courts could not nullify the judgments of a federal court. The Supreme Court held that under Article III of the Constitution, the federal courts have the final jurisdiction in all cases involving the Constitution and laws of the United States, and that the states therefore can not interfere with federal court judgments.
In Pennsylvania v. Nelson, 350 U.S. 497 (1956) the Supreme Court struck down the Pennsylvania Sedition Act, which made advocating the forceful overthrow of the federal government a crime under Pennsylvania state law. The Supreme Court held that when federal interest in an area of law is sufficiently dominant, federal law must be assumed to preclude enforcement of state laws on the same subject; and a state law is not to be declared a help when state law goes farther than Congress has seen fit to go.
In Reid v. Covert, 354 U.S. 1 (1957), the Supreme Court held that international treaties and laws made pursuant to them must comply with the Constitution.
In Cooper v. Aaron, 358 U.S. 1 (1958), the Supreme Court rejected attempts by Arkansas to nullify the Court 's school desegregation decision, Brown v. Board of Education. The state of Arkansas, acting on a theory of states ' rights, had adopted several statutes designed to nullify the desegregation ruling. The Supreme Court relied on the Supremacy Clause to hold that the federal law controlled and could not be nullified by state statutes or officials.
In Edgar v. MITE Corp., 457 U.S. 624 (1982), the Supreme Court ruled: "A state statute is void to the extent that it actually conflicts with a valid Federal statute ''. In effect, this means that a State law will be found to violate the Supremacy Clause when either of the following two conditions (or both) exist:
1. compliance with both the federal and state laws is impossible, or
2. the state law stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress.
In 1920, the Supreme Court applied the Supremacy Clause to international treaties, holding in the case of Missouri v. Holland, 252 U.S. 416, that the Federal government 's ability to make treaties is supreme over any state concerns that such treaties might abrogate states ' rights arising under the Tenth Amendment.
The Supreme Court has also held that only specific, "unmistakable '' acts of Congress may be held to trigger the Supremacy Clause. Montana had imposed a 30 percent tax on most sub-bituminous coal mined there. The Commonwealth Edison Company and other utility companies argued, in part, that the Montana tax "frustrated '' the broad goals of the federal energy policy. However, in the case of Commonwealth Edison Co. v. Montana, 453 U.S. 609 (1981), the Supreme Court disagreed. Any appeal to claims about "national policy '', the Court said, was insufficient to overturn a state law under the Supremacy Clause unless "the nature of the regulated subject matter permits no other conclusion, or that the Congress has unmistakably so ordained ''.
However, in the case of California v. ARC America Corp., 490 U.S. 93 (1989), the Supreme Court held that if Congress expressly intended to act in an area, this would trigger the enforcement of the Supremacy Clause, and hence nullify the state action. The Supreme Court further found in Crosby v. National Foreign Trade Council, 530 U.S. 363 (2000), that even when a state law is not in direct conflict with a federal law, the state law could still be found unconstitutional under the Supremacy Clause if the "state law is an obstacle to the accomplishment and execution of Congress 's full purposes and objectives ''. Congress need not expressly assert any preemption over state laws either, because Congress may implicitly assume this preemption under the Constitution.
|
when was the great wall of china built | History of the Great Wall of China - Wikipedia
The history of the Great Wall of China began when fortifications built by various states during the Spring and Autumn (771 -- 476 BC) and Warring States periods (475 -- 221 BC) were connected by the first emperor of China, Qin Shi Huang, to protect his newly founded Qin dynasty (221 -- 206 BC) against incursions by nomads from Inner Asia. The walls were built of rammed earth, constructed using forced labour, and by 212 BC ran from Gansu to the coast of southern Manchuria.
Later dynasties adopted different policies towards northern frontier defence. The Han (202 BC -- 220 AD), the Northern Qi (550 -- 574), the Sui (589 -- 618), and particularly the Ming (1368 -- 1644) were among those that rebuilt, re-manned, and expanded the Walls, although they rarely followed Qin 's routes. The Han extended the fortifications furthest to the west, the Qi built about 1,600 kilometres (990 mi) of new walls, while the Sui mobilised over a million men in their wall - building efforts. Conversely, the Tang (618 -- 907), the Song (960 -- 1279), the Yuan (1271 -- 1368), and the Qing (1644 -- 1911) mostly did not build frontier walls, instead opting for other solutions to the Inner Asian threat like military campaigning and diplomacy.
Although a useful deterrent against raids, at several points throughout its history the Great Wall failed to stop enemies, including in 1644 when the Manchu Qing marched through the gates of Shanhai Pass and replaced the most ardent of the wall - building dynasties, the Ming, as rulers of China.
The Great Wall of China visible today largely dates from the Ming dynasty, as they rebuilt much of the wall in stone and brick, often extending its line through challenging terrain. Some sections remain in relatively good condition or have been renovated, while others have been damaged or destroyed for ideological reasons, deconstructed for their building materials, or lost due to the ravages of time. For long an object of fascination for foreigners, the wall is now a revered national symbol and a popular tourist destination.
The conflict between the Chinese and the nomads, from which the need for the Great Wall arose, stemmed from differences in geography. The 15 - inch isohyet marks the extent of settled agriculture, dividing the fertile fields of China to the south and the semi-arid grasslands of Inner Asia to the north. The climates and the topography of the two regions led to distinct modes of societal development.
According to the model by sinologist Karl August Wittfogel, the loess soils of Shaanxi made it possible for the Chinese to develop irrigated agriculture early on. Although this allowed them to expand into the lower reaches of the Yellow River valley, such extensive waterworks on an ever - increasing scale required collective labour, something that could only be managed by some form of bureaucracy. Thus the scholar - bureaucrats came to the fore to keep track of the income and expenses of the granaries. Walled cities grew up around the granaries for reasons of defence along with ease of administration; they kept invaders out and ensured that citizens remained within. These cities combined to become feudal states, which eventually united to become an empire. Likewise, according to this model, walls not only enveloped cities as time went by, but also lined the borders of the feudal states and eventually the whole Chinese empire to provide protection against raids from the agrarian northern steppes.
The steppe societies of Inner Asia, whose climate favoured a pastoral economy, stood in stark contrast to the Chinese mode of development. As animal herds are migratory by nature, communities could not afford to be stationary and therefore evolved as nomads. According to the influential Mongolist Owen Lattimore this lifestyle proved to be incompatible with the Chinese economic model. As the steppe population grew, pastoral agriculture alone could not support the population, and tribal alliances needed to be maintained by material rewards. For these needs, the nomads had to turn to the settled societies to get grains, metal tools, and luxury goods, which they could not produce by themselves. If denied trade by the settled peoples, the nomads would resort to raiding or even conquest.
Potential nomadic incursion from three main areas of Inner Asia caused concern to northern China: Mongolia to the north, Manchuria to the northeast, and Xinjiang to the northwest. Of the three, China 's chief concern since the earliest times had been Mongolia -- the home of many of the country 's fiercest enemies including the Xiongnu, the Xianbei, the Khitans, and the Mongols. The Gobi Desert, which accounts for two - thirds of Mongolia 's area, divided the main northern and southern grazing lands and pushed the pastoral nomads to the fringes of the steppe. On the southern side (Inner Mongolia), this pressure brought the nomads into contact with China.
For the most part, barring intermittent passes and valleys (the major one being the corridor through Zhangjiakou and the Juyong Pass), the North China Plain remained shielded from the Mongolian steppe by the Yin Mountains. However, if this defence were breached, China 's flat terrain offered no protection to the cities on the plain, including the imperial capitals of Beijing, Kaifeng, and Luoyang. Heading west along the Yin Mountains, the range ends where the Yellow River circles northwards upstream in the area known as the Ordos Loop -- technically part of the steppe, but capable of irrigated agriculture. Although the Yellow River formed a theoretical natural boundary with the north, such a border so far into the steppe was difficult to maintain. The lands south of the Yellow River -- the Hetao, the Ordos Desert, and the Loess Plateau -- provided no natural barriers on the approach to the Wei River valley, the oft - called cradle of Chinese civilization where the ancient capital Xi'an lay. As such, control of the Ordos remained extremely important for the rulers of China: not only for potential influence over the steppe, but also for the security of China proper. The region 's strategic importance combined with its untenability led many dynasties to place their first walls here.
Although Manchuria is home to the agricultural lands of the Liao River valley, its location beyond the northern mountains relegated it to the relative periphery of Chinese concern. When Chinese state control became weak, at various points in history Manchuria fell under the control of the forest peoples of the area, including the Jurchens and the Manchus. The most crucial route that links Manchuria and the North China Plain is a narrow coastal strip of land, wedged between the Bohai Sea and the Yan Mountains, called the Shanhai Pass (literally the "mountain and sea pass ''). The pass gained much importance during the later dynasties, when the capital was set in Beijing, a mere 300 kilometres (190 miles) away. In addition to the Shanhai Pass, a handful of mountain passes also provide access from Manchuria into China through the Yan Mountains, chief among them the Gubeikou and Xifengkou (Chinese: 喜 峰 口).
Xinjiang, considered part of the Turkestan region, consists of an amalgamation of deserts, oases, and dry steppe barely suitable for agriculture. When influence from the steppe powers of Mongolia waned, the various Central Asian oasis kingdoms and nomadic clans like the Göktürks and Uyghurs were able to form their own states and confederations that threatened China at times. China proper is connected to this area by the Hexi Corridor, a narrow string of oases bounded by the Gobi Desert to the north and the high Tibetan Plateau to the south. In addition to considerations of frontier defence, the Hexi Corridor also formed an important part of the Silk Road trade route. Thus it was also in China 's economic interest to control this stretch of land, and hence the Great Wall 's western terminus is in this corridor -- the Yumen Pass during Han times and the Jiayu Pass during the Ming dynasty and thereafter.
One of the first mentions of a wall built against northern invaders is found in a poem, dated from the seventh century BC, recorded in the Classic of Poetry. The poem tells of a king, now identified as King Xuan (r. 827 -- 782 BC) of the Western Zhou dynasty (1046 -- 771 BC), who commanded General Nan Zhong (南仲) to build a wall in the northern regions to fend off the Xianyun. The Xianyun, whose base of power was in the Ordos region, were regarded as part of the charioteering Rong tribes, and their attacks aimed at the early Zhou capital region of Haojing were probably the reason for King Xuan 's response. Nan Zhong 's campaign was recorded as a great victory.

However, only a few years later in 771 BC another branch of the Rong people, the Quanrong, responded to a summons by the renegade Marquess of Shen by over-running the Zhou defences and laying waste to the capital. The cataclysmic event killed King Xuan 's successor King You (795 -- 771 BC), forced the court to move the capital east to Chengzhou (成 周, later known as Luoyang) a year later, and thus ushered in the Eastern Zhou dynasty (770 -- 256 BC). Most importantly, the fall of Western Zhou redistributed power to the states that had acknowledged Zhou 's nominal rulership.

The rule of the Eastern Zhou dynasty was marked by bloody interstate anarchy. With smaller states being annexed and larger states waging constant war upon one another, many rulers came to feel the need to erect walls to protect their borders. One of the earliest textual references to such a wall concerns the State of Chu 's wall of 656 BC, 1,400 metres (4,600 feet) of which were excavated in southern Henan province in the modern era. However, the Chu border fortifications consisted of many individual mountain fortresses; they did not constitute a single, lengthy wall. The State of Qi had also fortified its borders by the 7th century BC, and the extant portions in Shandong province have been christened the Great Wall of Qi. The State of Wei built two walls, the western one completed in 361 BC and the eastern in 356 BC, with the extant western wall found in Hancheng, Shaanxi. Even non-Chinese peoples built walls, such as the Di state of Zhongshan and the Yiqu Rong, whose walls were intended to defend against the State of Qin.
Of these walls, those of the northern states Yan, Zhao, and Qin were connected by Qin Shi Huang when he united the Chinese states in 221 BC.
The State of Yan, the easternmost of the three northern states, began to erect walls after the general Qin Kai drove the Donghu people back "a thousand li '' during the reign of King Zhao (燕 昭王; r. 311 -- 279 BC). The Yan wall stretched from the Liaodong peninsula, through Chifeng, and into northern Hebei, possibly bringing its western terminus near the Zhao walls. Another Yan wall was erected to the south to defend against the Zhao; it was southwest of present - day Beijing and ran parallel to the Juma River for several dozen miles.
The Zhao walls to the north were built under King Wuling of Zhao (r. 325 -- 299 BC), whose groundbreaking introduction of nomadic cavalry into his army reshaped Chinese warfare and gave Zhao an initial advantage over his opponents. He attacked the Xiongnu tribes of Linhu (林 胡) and Loufan (樓 煩) to the north, then waged war on the state of Zhongshan until it was annexed in 296 BC. In the process, he constructed the northernmost fortified frontier deep in nomadic territory. The Zhao walls were dated in the 1960s to be from King Wuling 's reign: a southern long wall in northern Henan encompassing the Yanmen Pass; a second line of barricades encircling the Ordos Loop, extending from Zhangjiakou in the east to the ancient fortress of Gaoque (高 闕) in the Urad Front Banner; and a third, northernmost line along the southern slopes of the Yin Mountains, extending from Qinghe in the east, passing north of Hohhot, and into Baotou.
Qin was originally a state on the western fringe of the Chinese political sphere, but it grew into a formidable power in the later parts of the Warring States period when it aggressively expanded in all directions. In the north, the state of Wei and the Yiqu built walls to protect themselves from Qin aggression, but were still unable to stop Qin from eating into their territories. The Qin reformist Shang Yang forced the Wei out of their walled area west of the Yellow River in 340 BC, and King Huiwen of Qin (r. 338 -- 311 BC) took 25 Yiqu forts in a northern offensive. When King Huiwen died, his widow the Queen Dowager Xuan acted as regent because the succeeding sons were deemed too young to govern. During the reign of King Zhaoxiang (r. 306 -- 251 BC), the queen dowager apparently entered illicit relations with the Yiqu king and gave birth to two of his sons, but later tricked and killed the Yiqu king. Following that coup, the Qin army marched into Yiqu territory at the queen dowager 's orders; the Qin annihilated the Yiqu remnants and thus came to possess the Ordos region. At this point the Qin built a wall around their new territories to defend against the true nomads even further north, incorporating the Wei walls. As a result, an estimated total of 1,775 kilometres (1,103 mi) of Qin walls (including spurs) extended from southern Gansu to the bank of the Yellow River in the Jungar Banner, close to the border with Zhao at the time.
The walls, known as Changcheng (長城) -- literally "long walls '', but often translated as "Great Wall '' -- were mostly constructed of tamped earth, with some parts built with stones. Where natural barriers like ravines and rivers sufficed for defence, the walls were erected sparingly, but long fortified lines were laid where such advantageous terrains did not exist. Often in addition to the wall, the defensive system included garrisons and beacon towers inside the wall, and watchtowers outside at regular intervals. In terms of defence, the walls were generally effective at countering cavalry shock tactics, but there are doubts as to whether these early walls were actually defensive in nature. Nicola Di Cosmo points out that the northern frontier walls were built far to the north and included traditionally nomadic lands, and so rather than being defensive, the walls indicate the northward expansions of the three northern states and their desire to safeguard their recent territorial acquisitions. This theory is supported by the archeological discovery of nomadic artifacts within the walls, suggesting the presence of pre-existing or conquered barbarian societies. It is entirely possible, as Western scholars like di Cosmo and Lattimore suggest, that nomadic aggression against the Chinese in the coming centuries was partly caused by Chinese expansionism during this period.
In 221 BC, the state of Qin completed its conquest over the other Warring States and united China under Qin Shi Huang, the first emperor of China. These conquests, combined with the Legalist reforms started by Shang Yang in the 4th century BC, transformed China from a loose confederation of feudal states to an authoritarian empire. With the transformation, Qin became able to command a far greater assembly of labourers to be used in public works than the prior feudal kingdoms. Also, once unification was achieved, Qin found itself in possession of a large professional army with no more internal enemies to fight and thus had to find a new use for them. Soon after the conquests, in the year 215 BC, the emperor sent the famed general Meng Tian to the Ordos region to drive out the Xiongnu nomads settled there, who had risen from beyond the fallen marginal states along the northern frontier. Qin 's campaign against the Xiongnu was preemptive in nature, since there was no pressing nomadic menace to be faced at the time; its aim was to annexe the ambiguous territories of the Ordos and to clearly define the Qin 's northern borders. Once the Xiongnu were chased away, Meng Tian introduced 30,000 settler families to colonize the newly conquered territories.
Wall configurations were changed to reflect the new borders under the Qin. General Meng Tian erected walls beyond the northern loop of the Yellow River, effectively linking the border walls of Qin, Zhao, and Yan. Concurrent to the building of the frontier wall was the destruction of the walls within China that used to divide one warring state from another -- contrary to the outer walls, which were built to stabilize the newly united China, the inner walls threatened the unity of the empire. In the following year, 214 BC, Qin Shi Huang ordered new fortifications to be built along the Yellow River to the west of the Ordos while work continued in the north. This work was completed probably by 212 BC, signalled by Qin Shi Huang 's imperial tour of inspection and the construction of the Direct Road (直道) connecting the capital Xianyang with the Ordos. The result was a series of long walls running from Gansu to the seacoast in Manchuria.
Details of the construction were not found in the official histories, but it could be inferred that the construction conditions were made especially difficult by the long stretches of mountains and semi-desert that the Great Wall traversed, the sparse populations of these areas, and the frigid winter climate. Although the walls were built of rammed earth, so that the bulk of the building material could be found in situ, the transportation of additional supplies and labour remained difficult for the reasons named above. The sinologist Derk Bodde posits in The Cambridge History of China that "for every man whom Meng Tian could put to work at the scene of actual construction, dozens must have been needed to build approaching roads and to transport supplies. '' This is supported by the Han dynasty statesman Zhufu Yan 's description of Qin Shi Huang 's Ordos project in 128 BC:
... the land was brackish and arid, crops could not be grown on them... At the time, the young men being drafted were forced to haul boats and barges loaded with baggage trains upstream to sustain a steady supply of food and fodder to the front... Commencing at the departure point a man and his animal could carry thirty zhong (about 176 kilograms (388 lb)) of food supply, by the time they arrived at the destination, they merely delivered one dan (about 29 kilograms (64 lb)) of supply... When the populace had become tired and weary they started to dissipate and abscond. The orphans, the frail, the widowed and the seniors were desperately trying to escape from their appallingly derelict state and died on the wayside as they wandered away from their home. People started to revolt.
The settlement of the north continued up to Qin Shi Huang 's death in 210 BC, upon which Meng Tian was ordered to commit suicide in a succession conspiracy. Before killing himself, Meng Tian expressed regret for his walls: "Beginning at Lintao and reaching to Liaodong, I built walls and dug moats for more than ten thousand li; was it not inevitable that I broke the earth 's veins along the way? This then was my offense. ''
Meng Tian 's settlements in the north were abandoned, and the Xiongnu nomads moved back into the Ordos Loop as the Qin empire became consumed by widespread rebellion due to public discontent. Owen Lattimore concluded that the whole project relied upon military power to enforce agriculture on a land more suited for herding, resulting in "the anti-historical paradox of attempting two mutually exclusive forms of development simultaneously '' that was doomed to fail.
In 202 BC, the former peasant Liu Bang emerged victorious from the Chu -- Han Contention that followed the rebellion that toppled the Qin dynasty, and proclaimed himself Emperor of the Han dynasty, becoming known as Emperor Gaozu of Han (r. 202 -- 195 BC) to posterity. Unable to address the problem of the resurgent Xiongnu in the Ordos region through military means, Emperor Gaozu was forced to appease the Xiongnu. In exchange for peace, the Han offered tributes along with princesses to marry off to the Xiongnu chiefs. These diplomatic marriages would become known as heqin, and the terms specified that the Great Wall (determined to be either the Warring States period Qin state wall or a short stretch of wall south of Yanmen Pass) was to serve as the line across which neither party would venture. In 162 BC, Gaozu 's son Emperor Wen clarified the agreement, suggesting the Xiongnu chanyu held authority north of the Wall and the Han emperor held authority south of it. Sima Qian, the author of the Records of the Grand Historian, describes the result of this agreement as one of peace and friendship: "from the chanyu downwards, all the Xiongnu grew friendly with the Han, coming and going along the Long Wall ''. However, Chinese records show that the Xiongnu often did not respect the agreement, as the Xiongnu cavalry numbering up to 100,000 made several intrusions into Han territory despite the intermarriage.
To Chinese minds, the heqin policy was humiliating and ran contrary to the Sinocentric world order like "a person hanging upside down '', as the statesman Jia Yi (d. 169 BC) puts it. These sentiments manifested themselves in the Han court in the form of the pro-war faction, who advocated the reversal of Han 's policy of appeasement. By the reign of Emperor Wu (r. 141 -- 87 BC), the Han felt comfortable enough to go to war with the Xiongnu. After a botched attempt at luring the Xiongnu army into an ambush at the Battle of Mayi in 133 BC, the era of heqin - style appeasement was broken and the Han -- Xiongnu War went into full swing.
As the Han -- Xiongnu War progressed in favour of the Han, the Wall became maintained and extended beyond Qin lines. In 127 BC, General Wei Qing invaded the much - contested Ordos region as far as the Qin fortifications set up by Meng Tian. In this way, Wei Qing reconquered the irrigable lands north of the Ordos and restored the spur of defences protecting that territory from the steppe. In addition to rebuilding the walls, archeologists believe that the Han also erected thousands of kilometres of walls from Hebei to Inner Mongolia during Emperor Wu 's reign. The fortifications here include embankments, beacon stations, and forts, all constructed with a combination of tamped - earth cores and stone frontages. From the Ordos Loop, the sporadic and non-continuous Han Great Wall followed the northern edge of the Hexi Corridor through the cities of Wuwei, Zhangye, and Jiuquan, leading into the Juyan Lake Basin, and terminating in two places: the Yumen Pass in the north, or the Yang Pass to the south, both in the vicinity of Dunhuang. Yumen Pass was the most westerly of all Han Chinese fortifications -- further west than the western terminus of the Ming Great Wall at Jiayu Pass, about 460 kilometres (290 mi) to the east. The garrisons of the watchtowers on the wall were supported by civilian farming and by military agricultural colonies known as tuntian. Behind this line of fortifications, the Han government was able to maintain its settlements and its communications to the Western Regions in central Asia, generally secure from attacks from the north.
The campaigns against the Xiongnu and other nomadic peoples of the west exhausted the imperial treasury, and the expansionist policies were reverted in favour of peace under Emperor Wu 's successors. The peace was largely respected even when the Han throne was usurped by the minister Wang Mang in 9 AD, beginning a brief 15 - year interregnum known as the Xin dynasty (9 -- 23). Despite high tensions between the Xin and the Xiongnu resulting in the deployment of 300,000 men on the Great Wall, no major fighting broke out beyond minor raids. Instead, popular discontent led to banditry and, ultimately, full - scale rebellion. The civil war ended with the Liu clan on the throne again, beginning the Eastern Han dynasty (25 -- 220).
The restorer Emperor Guangwu (r. 25 -- 57 AD) initiated several projects to consolidate his control within the frontier regions. Defense works were established to the east of the Yanmen Pass, with a line of fortifications and beacon fires stretching from Pingcheng County (present - day Datong) through the valley of the Sanggan River to Dai County, Shanxi. By 38 AD, as a result of raids by the Xiongnu further to the west against the Wei River valley, orders were given for a series of walls to be constructed as defences for the Fen River, the southward course of the Yellow River, and the region of the former imperial capital, Chang'an. These constructions were defensive in nature, which marked a shift from the offensive walls of the preceding Emperor Wu and the rulers of the Warring States. By the early 40s AD the northern frontiers of China had undergone drastic change: the line of the imperial frontier followed not the advanced positions conquered by Emperor Wu but the rear defences indicated roughly by the modern (Ming dynasty) Great Wall. The Ordos region, northern Shanxi, and the upper Luan River basin around Chengde were abandoned and left to the control of the Xiongnu. The rest of the frontier remained somewhat intact until the end of the Han dynasty, with the Dunhuang manuscripts (discovered in 1900) indicating that the military establishment in the northwest was maintained for most of the Eastern Han period.
Following the end of the Han dynasty in 220, China disintegrated into warlord states, which in 280 were briefly reunited under the Western Jin dynasty (265 -- 316). There are ambiguous accounts of the Jin rebuilding the Qin wall, but these walls apparently offered no resistance during the Wu Hu uprising, when the nomadic tribes of the steppe evicted the Chinese court from northern China. What followed was a succession of short - lived states in northern China known as the Sixteen Kingdoms, until they were all consolidated by the Xianbei - led Northern Wei dynasty (386 -- 535).
As Northern Wei became more economically dependent on agriculture, the Xianbei emperors made a conscious decision to adopt Chinese customs, including passive methods of frontier defence. In 423, a defence line over 2,000 li (1,080 kilometres (670 mi)) long was built to resist the Rouran; its path roughly followed the old Zhao wall from Chicheng County in Hebei Province to Wuyuan County, Inner Mongolia. In 446, 100,000 men were put to work building an inner wall from Yanqing, passing south of the Wei capital Pingcheng, and ending up near Pingguan on the eastern bank of the Yellow River. The two walls formed the basis of the double - layered Xuanfu -- Datong wall system that protected Beijing a thousand years later during the Ming dynasty.
The Northern Wei collapsed in 535 due to civil insurrection to be eventually succeeded by the Northern Qi (550 -- 575) and Northern Zhou (557 -- 580). Faced with the threat of the Göktürks from the north, from 552 to 556 the Qi built up to 3,000 li (about 1,600 kilometres (990 mi)) of wall from Shanxi to the sea at Shanhai Pass. Over the course of the year 555 alone, 1.8 million men were mobilized to build the Juyong Pass and extend its wall by 450 kilometres (280 mi) through Datong to the eastern banks of the Yellow River. In 557 a secondary wall was built inside the main one. These walls were built quickly from local earth and stones or formed by natural barriers. Two stretches of the stone - and - earth Qi wall still stand in Shanxi today, measuring 3.3 metres (11 ft) wide at their bases and 3.5 metres (11 ft) high on average. In 577 the Northern Zhou conquered the Northern Qi and in 580 made repairs to the existing Qi walls. The route of the Qi and Zhou walls would be mostly followed by the later Ming wall west of Gubeikou, which includes reconstructed walls from Qi and Zhou. In more recent times, the reddish remnants of the Zhou ramparts in Hebei gave rise to the nickname "Red Wall ''.
The Sui took power from the Northern Zhou in 581 before reuniting China in 589. Sui 's founding emperor, Emperor Wen of Sui (r. 581 -- 604), carried out considerable wall construction in 581 in Hebei and Shanxi to defend against Ishbara Qaghan of the Göktürks. The new walls proved insufficient in 582 when Ishbara Qaghan avoided them by riding west to raid Gansu and Shaanxi with 400,000 archers. Between 585 and 588 Emperor Wen sought to close this gap by putting walls up in the Ordos Mountains (between Suide and Lingwu) and Inner Mongolia. In 586 as many as 150,000 men are recorded as involved in the construction. Emperor Wen 's son Emperor Yang (r. 604 -- 618) continued to build walls. In 607 -- 608 he sent over a million men to build a wall from Yulin to near Hohhot to protect the newly refurbished eastern capital Luoyang. Part of the Sui wall survives to this day in Inner Mongolia as earthen ramparts some 2.5 metres (8 ft 2 in) high with towers rising to double that. The dynastic history of Sui estimates that 500,000 people died building the wall, adding to the number of casualties caused by Emperor Yang 's projects including the aforementioned redesign of Luoyang, the Grand Canal, and two ill - fated campaigns against Goguryeo. With the economy strained and the populace resentful, the Sui dynasty erupted in rebellion and ended with the assassination of Emperor Yang in 618.
Frontier policy under the Tang dynasty reversed the wall - building activities of most previous dynasties that had occupied northern China since the third century BC, and no extensive wall building took place for the next several hundred years.
Soon after the establishment of the Tang dynasty, during the reign of Emperor Taizong (r. 626 -- 649), the threat of Göktürk tribesmen from the north prompted some court officials to suggest drafting corvée labourers to repair the aging Great Wall. Taizong scoffed at the suggestion, alluding to the Sui walls built in vain: "The Emperor Yang of Sui made the people labor to construct the Great Wall in order to defend against the Turks, but in the end this was of no use. '' Instead of building walls, Taizong claimed he "need merely to establish Li Shiji in Jinyang for the dust on the border to settle. '' Accordingly, Taizong sent talented generals like Li Shiji with mobile armies to the frontier, while fortifications were mostly limited to a series of walled garrisons, such as the euphemistically - named "cities for accepting surrender '' (受降 城, shòuxiáng chéng) that were actually bases from which to launch attacks. As a result of this military strategy, the Tang grew to become one of the largest of all the Chinese empires, destroying the Göktürks of the Eastern Turkic Khaganate and acquiring territory stretching all the way to Kazakhstan.
Nevertheless, records show that in the Kaiyuan era (713 -- 742) of Emperor Xuanzong 's reign, the general Zhang Yue built a wall 90 li (48 kilometres (30 mi)) to the north of Huairong (懷 戎; present - day Huailai County, Hebei), although it remains unclear whether he erected new walls or only reinforced the existing Northern Qi walls.
The Great Wall, or the ruins of it, features prominently in the subset of Tang poetry known as biansai shi (邊塞 詩, "frontier verse '') written by scholar - officials assigned along the frontier. Emphasizing the poets ' loneliness and longing for home while hinting at the pointlessness of their posts, these frontier verses are characterized by imagery of desolate landscapes, including the ruins of the now - neglected Great Wall -- a direct product of Tang 's frontier policy.
Han Chinese power during the tumultuous post-Tang era was represented by the Song dynasty (960 -- 1279), which completed its unification of the Chinese states with the conquest of Wuyue in 971. Turning to the north after this victory, in 979 the Song eliminated the Northern Han, ultimate successors to the Later Jin, but were unable to take the Sixteen Prefectures from the Liao dynasty. As a result of Song 's military aggression, relations between the Song and Liao remained tense and hostile. One of the battlegrounds in the Song -- Liao War was the Great Wall Gap (長城 口), so named because the southern Yan wall of the Warring States period crossed the Juma River here into Liao territory. The Great Wall Gap saw action in 979, 988 -- 989, and 1004, and a Song fortress was built there in 980. Intermittent wars between the Song and the Liao lasted until January 1005, when a truce was called and led to the Treaty of Chanyuan. This agreement, among other things, required the Song to pay tribute to the Liao, recognized the Song and Liao as equals, and demarcated the Song -- Liao border, the course of which became more clearly defined in a series of subsequent bilateral agreements. Several stretches of the old Great Walls, including the Northern Qi Inner Wall near the Hengshan mountain range, became the border between the Song and the Liao.
In the northwest, the Song were in conflict with the Western Xia, since they occupied what the Song considered as Chinese land lost during the Tang dynasty. The Song utilized the walls built during the reign of Qin 's King Zhaoxiang of the Warring States period, making it the Song -- Western Xia border, but the topography of the area was not as sharp and distinct as the Song -- Liao defences to the east. The border general Cao Wei (曹 瑋; 973 -- 1030) deemed the Old Wall itself insufficient to slow a Tangut cavalry attack, and had a deep trench dug alongside. This trench, between 15 and 20 metres (49 and 66 feet) in width and depth, proved an effective defence, but in 1002 the Tanguts caught the Song patrollers off guard and filled the trench to cross the Old Wall. Later, in 1042, the Tanguts turned the trench against the Song by removing the bridges over it, thereby trapping the retreating army of Ge Huaimin (葛懷敏) before annihilating it at the Battle of Dingchuan Fortress (定 川 寨).
Despite the war with the Western Xia, the Song also settled land disputes with them by referring to prior agreements, as with the Liao. However, soon after the Jin dynasty overthrew the Liao dynasty, the Jurchens sacked the Song capital in 1127 during the Jin -- Song wars, causing the Song court to flee south of the Yangtze River. For the next two and a half centuries, the Great Wall played no role in Han Chinese geopolitics.
After the Tang dynasty ended in 907, the northern frontier area remained out of Han Chinese hands until the establishment of the Ming dynasty in 1368. During this period, non-Han "conquest dynasties '' ruled the north: the Khitan Liao dynasty (907 -- 1125) and the succeeding Jurchen Jin dynasty (1115 -- 1234) in the east and the Tangut Western Xia (1038 -- 1227) in the west, all of which had built walls against the north.
In 907, the Khitan chieftain Abaoji succeeded in getting himself appointed khaghan of all Khitan tribes in the north, laying the foundations to what would officially become the Liao dynasty. In 936, the Khitan supported the Shanxi rebel Shi Jingtang in his revolt against the Shatuo Turkic Later Tang, which had destroyed the usurpers of the Tang in 923. The Khitan leader, Abaoji 's second son Yelü Deguang, convinced Shi to found a new dynasty (the Later Jin, 936 -- 946), and received the crucial border region known as the Sixteen Prefectures in return. With the Sixteen Prefectures, the Khitan now possessed all the passes and fortifications that controlled access to the plains of northern China, including the main Great Wall line.
Settling in the transitional area between agricultural lands and the steppe, the Khitans became semi-sedentary like their Xianbei predecessors of the Northern Wei, and started to use Chinese methods of defence. In 1026 walls were built through central Manchuria north of Nong'an County to the banks of the Songhua River.
When the Jurchens, once Liao vassals, rose up to overthrow their masters and established the Jin dynasty, they continued Liao 's wall - building activities with extensive work begun before 1138. Further wall construction took place in 1165 and 1181 under the Jin Emperor Shizhong, and later from 1192 to 1203 during the reign of his successor Emperor Zhangzong.
This long period of wall - building burdened the populace and provoked controversy. Sometime between 1190 and 1196, during Zhangzong 's reign, the high official Zhang Wangong (張萬公) and the Censorate recommended that work on the wall be indefinitely suspended due to a recent drought, noting: "What has been begun is already being flattened by sandstorms, and bullying the people into defence works will simply exhaust them. '' However, Chancellor Wanyan Xiang (完 顏 襄) convinced the emperor of the walls ' merits based on an optimistic cost estimate -- "Although the initial outlay for the walls will be one million strings of cash, when the work is done the frontier will be secure with only half the present number of soldiers needed to defend it, which means that every year you will save three million strings of cash... The benefits will be everlasting '' -- and so construction continued unabated. All this work created an extensive system of walls, which consisted of a 700 - kilometre (430 mi) "outer wall '' from Heilongjiang to Mongolia and a 1,000 - kilometre (620 mi) network of "inner walls '' north and northeast of Beijing. Together, they formed a roughly elliptical web of fortifications 1,400 kilometres (870 mi) in length and 440 kilometres (270 mi) in diameter. Some of these walls had inner moats (from 10 to 60 metres (33 to 197 feet) in width), beacon towers, battlements, parapets, and outward - facing semicircular platforms protruding from the wall -- features that set the Jin walls apart from their predecessors.
In the west, the Tanguts took control of the Ordos region, where they established the Western Xia dynasty. Although the Tanguts were not traditionally known for building walls, in 2011 archeologists uncovered 100 kilometres (62 mi) of walls at Ömnögovi Province in Mongolia in what had been Western Xia territory. Radiocarbon analysis showed that they were constructed from 1040 to 1160. The walls were as tall as 2.75 metres (9 ft 0 in) at places when they were discovered, and may have been around 2 metres (6 ft 7 in) taller originally. They were built with mud and saxaul (a desert shrub) in one section, and dark basalt blocks in another, suggesting that the rocks may have been quarried from nearby extinct volcanoes and transported to the construction site. Archaeologists have not yet found traces of human activity around this stretch of wall, which suggests that the Western Xia wall in this location may have been incomplete and not ready for use.
In the 13th century, the Mongol leader Genghis Khan, once a vassal of the Jurchens, rose up against the Jin dynasty. In the ensuing Mongol conquest of the Jin dynasty, the nomadic invaders avoided direct attacks on the Jin fortifications. Instead, when they could, the Mongols simply rode around the walls; an effective example of this tactic is in 1211, when they circumvented the substantial fortress in Zhangjiakou and inflicted a terrible defeat upon the Jin armies at the Battle of Yehuling. The Mongols also took advantage of lingering Liao resentment against the Jin; the Khitan defenders of the garrisons along the Jin walls, such as those in Gubeikou, often preferred to surrender to the Mongols rather than fight them. The only major engagement of note along the main Great Wall line was at the heavily defended Juyong Pass: instead of laying siege, the Mongol general Jebe lured the defenders out into an ambush and charged in through the opened gates. In 1215, Genghis Khan besieged, captured, and sacked the Jin capital of Yanjing (modern - day Beijing). The Jin dynasty eventually collapsed following the siege of Caizhou in 1234. Western Xia had already fallen in 1227, and the Southern Song resisted the Mongols until 1279.
With that, the Yuan dynasty, established by Genghis Khan 's grandson Khublai Khan, became the first foreign dynasty to rule all of China. Despite being the head of the Mongol Empire, Khublai Khan 's rule over China was not free from the threat of the steppe nomads. The Yuan dynasty faced challenges from rival claimants to the title of Great Khan and from rebellious Mongols in the north. Khublai Khan dealt with such threats by using both military blockades and economic sanctions. Although he established garrisons along the steppe frontier from the Juyan Lake Basin in the far west to Yingchang in the east, Khublai Khan and the Yuan emperors after him did not add to the Great Wall (except for the ornate Cloud Platform at Juyong Pass). When the Venetian traveller Marco Polo wrote of his experiences in China during the reign of Khublai Khan, he did not mention a Great Wall.
In 1368, the Hongwu Emperor (Zhu Yuanzhang, r. 1368 -- 98) ousted the Mongol - led Yuan dynasty from China to inaugurate the Ming dynasty. The Mongols fled back to Mongolia, but even after numerous campaigns, the Mongol problem remained.
During his early reign, Hongwu set up the "eight outer garrisons '' close to the steppe and an inner line of forts more suitable for defence. The inner line was the forerunner to the Ming Great Wall. In 1373, as Ming forces encountered setbacks, Hongwu put more emphasis on defence and adopted Hua Yunlong 's (華 雲龍) suggestion to establish garrisons at 130 passes and other strategic points in the Beijing area. More positions were set up in the years up to Hongwu 's death in 1398, and watchtowers were manned from the Bohai Sea to Beijing and further onto the Mongolian steppes. These positions, however, were not for a linear defence but rather a regional one in which walls did not feature heavily, and offensive tactics remained the overarching policy at the time. In 1421, the Ming capital was relocated from Nanjing in the south to Beijing in the north, partly to better manage the Mongol situation. Thus defences were concentrated around Beijing, where stone and earth began to replace rammed earth in strategic passes. A wall was erected by the Ming in Liaodong to protect Han settlers from a possible threat from the Jurched - Mongol Oriyanghan around 1442. In 1467 -- 68, expansion of the wall provided further protection for the region against attacks by the Jianzhou Jurchens in the northeast.
Meanwhile, the outer defenses were gradually moved inward, thereby sacrificing a vital foothold in the steppe transitional zone. Despite the withdrawal from the steppe, the Ming military remained in a strong position against the nomads until the Tumu Crisis in 1449, which caused the collapse of the early Ming security system. Over half of the campaigning Chinese army perished in the conflict, while the Mongols captured the Zhengtong Emperor. This military debacle shattered the Chinese military might that had so impressed and given pause to the Mongols since the beginning of the dynasty, and caused the Ming to be on the defensive ever after.
The deterioration of the Ming military position in the steppe transitional zone gave rise to nomadic raids into Ming territory, including the crucial Ordos region, on a level unprecedented since the dynasty 's founding. After decades of deliberation between an offensive strategy and an accommodative policy, the decision to build the first major Ming walls in the Ordos was agreed upon as an acceptable compromise in the 1470s.
Yu Zijun (余子 俊; 1429 -- 1489) first proposed constructing a wall in the Ordos region in August 1471, but not until 20 December 1472 did the court and emperor approve the plan. The 1473 victory in the Battle of Red Salt Lake (紅 鹽池) by Wang Yue (王 越) deterred Mongol invasions long enough for Yu Zijun to complete his wall project in 1474. This wall, a combined effort between Yu Zijun and Wang Yue, stretched from present day Hengcheng (橫 城) in Lingwu (northwestern Ningxia province) to Huamachi town (花 馬 池 鎮) in Yanchi County, and from there to Qingshuiying (清水 營) in northeastern Shaanxi, a total of more than 2000 li (about 1,100 kilometres (680 mi)) long. Along its length were 800 strong points, sentry posts, beacon - fire towers, and assorted defences. 40,000 men were enlisted for this effort, which was completed in several months at a cost of over one million silver taels. This defence system proved its initial worth in 1482, when a large group of Mongol raiders were trapped within the double lines of fortifications and suffered a defeat by the Ming generals. This was seen as a vindication of Yu Zijun 's strategy of wall - building by the people of the border areas. By the mid-16th century, Yu 's wall in the Ordos had seen expansion into an extensive defence system. It contained two defence lines: Yu 's wall, called the "great border '' (大 邊, dàbiān), and a "secondary border '' (二 邊, èrbiān) built by Yang Yiqing (楊一清; 1454 -- 1530) behind it.
Following the success of the Ordos walls, Yu Zijun proposed construction of a further wall that would extend from the Yellow River bend in the Ordos to the Sihaiye Pass (四海 冶 口; in present - day Yanqing County) near the capital Beijing, running a distance of more than 1300 li (about 700 kilometres (430 mi)). The project received approval in 1485, but Yu 's political enemies harped on the cost overruns and forced Yu to scrap the project and retire the same year. For more than 50 years after Yu 's resignation, political struggle prevented major wall constructions on a scale comparable to Yu 's Ordos project.
However, wall construction continued regardless of court politics during this time. The Ordos walls underwent extension, elaboration, and repair well into the 16th century. Brick and stone started to replace tamped earth as the wall building material, because they offered better protection and durability. This change in material gave rise to a number of necessary accommodations with regard to logistics, and inevitably a drastic increase in costs. Instead of being able to draw on local resources, building projects now required brick - kilns, quarries, and transportation routes to deliver bricks to the work site. Also, masons had to be hired since the local peasantry proved inadequate for the level of sophistication that brick constructions required. Work that originally could be done by one man in a month with earth now required 100 men to do in stone.
With the Ordos now adequately fortified, the Mongols avoided its walls by riding east to invade Datong and Xuanfu (宣 府; present - day Xuanhua, Hebei Province), which were two major garrisons guarding the corridor to Beijing where no walls had been built. The two defence lines of Xuanfu and Datong (abbreviated as "Xuan -- Da '') left by the Northern Qi and the early Ming had deteriorated by this point, and for all intents and purposes the inner line was the capital 's main line of defence.
From 1544 to 1549, Weng Wanda (翁 萬達; 1498 -- 1552) embarked on a defensive building program on a scale unprecedented in Chinese history. Troops were re-deployed along the outer line, new walls and beacon towers were constructed, and fortifications were restored and extended along both lines. Firearms and artillery were mounted on the walls and towers during this time, for both defence and signalling purposes. The project 's completion was announced in the sixth month of 1548. At its height, the Xuan -- Da portion of the Great Wall totalled about 850 kilometres (530 miles) of wall, with some sections being doubled - up with two lines of wall, some tripled or even quadrupled. The outer frontier was now protected by a wall called the "outer border '' (外邊, wàibiān) that extended 380 kilometres (240 mi) from the Yellow River 's edge at the Piantou Pass (偏 頭 關) along the Inner Mongolia border with Shanxi into Hebei province; the "inner border '' wall (內 邊, nèibiān) ran southeast from Piantou Pass for some 400 kilometres (250 mi), ending at the Pingxing Pass; a "river wall '' (河 邊, hébiān) also ran from the Piantou Pass and followed the Yellow River southwards for about 70 kilometres (43 mi).
As with Yu Zijun 's wall in the Ordos, the Mongols shifted their attacks away from the newly strengthened Xuan -- Da sector to less well - protected areas. In the west, Shaanxi province became the target of nomads riding west from the Yellow River loop. The westernmost fortress of Ming China, the Jiayu Pass, saw substantial enhancement with walls starting in 1539, and from there border walls were built discontinuously down the Hexi Corridor to Wuwei, where the low earthen wall split into two. The northern section passed through Zhongwei and Yinchuan, where it met the western edge of the Yellow River loop before connecting with the Ordos walls, while the southern section passed through Lanzhou and continued northeast to Dingbian. The origins and the exact route of this so - called "Tibetan loop '' are still not clear.
In 1550, having once more been refused a request for trade, the Tümed Mongols under Altan Khan invaded the Xuan -- Da region. However, despite several attempts, he could not take Xuanfu due to Weng Wanda 's double fortified line while the garrison at Datong bribed him to not attack there. Instead of continuing to operate in the area, he circled around Weng Wanda 's wall to the relatively lightly defended Gubeikou, northeast of Beijing. From there Altan Khan passed through the defences and raided the suburbs of Beijing. According to one contemporary source, the raid took more than 60,000 lives and an additional 40,000 people became prisoners. As a response to this raid, the focus of the Ming 's northern defences shifted from the Xuan -- Da region to the Jizhou (薊 州 鎮) and Changping Defence Commands (昌平 鎮) where the breach took place. Later in the same year, the dry - stone walls of the Jizhou -- Changping area (abbreviated as "Ji -- Chang '') were replaced by stone and mortar. These allowed the Chinese to build on steeper, more easily defended slopes and facilitated construction of features such as ramparts, crenelations, and peepholes. The effectiveness of the new walls was demonstrated in the failed Mongol raid of 1554, where raiders expecting a repeat of the events of 1550 were surprised by the higher wall and stiff Chinese resistance.
In 1567 Qi Jiguang and Tan Lun, successful generals who fended off the coastal pirates, were reassigned to manage the Ji -- Chang Defence Commands and step up the defences of the capital region. Under their ambitious and energetic management, 1,200 brick watchtowers were built along the Great Wall from 1569 to 1571. These included the first large - scale use of hollow watchtowers on the Wall: up until this point, most previous towers along the Great Wall had been solid, with a small hut on top for a sentry to take shelter from the elements and Mongol arrows; the Ji -- Chang towers built from 1569 onwards were hollow brick structures, allowing soldiers interior space to live, store food and water, stockpile weapons, and take shelter from Mongol arrows.
Altan Khan eventually made peace with China when it opened border cities for trade in 1571, alleviating the Mongol need to raid. This, coupled with Qi and Tan 's efforts to secure the frontier, brought a period of relative peace along the border. However, minor raids still happened from time to time when the profits of plunder outweighed those of trade, prompting the Ming to close all gaps along the frontier around Beijing. Areas of difficult terrain once considered impassable were also walled off, leading to the well - known vistas of a stone - faced Great Wall snaking over dramatic landscapes that tourists still see today.
Wall construction continued until the demise of the Ming dynasty in 1644. In the decades that led to the fall of the Ming dynasty, the Ming court and the Great Wall itself had to deal with simultaneous internal rebellions and the Manchu invasions. In addition to their conquest of Liaodong, the Manchus had raided across the Great Wall for the first time in 1629, and again in 1634, 1638, and 1642. Meanwhile, the rebels led by warlord Li Zicheng had been gathering strength. In the early months of 1644, Li Zicheng declared himself the founder of the Shun and marched towards the Ming capital from Shaanxi. His route roughly followed the line of the Great Wall, in order to neutralize its heavily fortified garrisons. The crucial defences of Datong, Xuanfu, and Juyong Pass all surrendered without a fight, and the Chongzhen Emperor hanged himself on 25 April as the Shun army entered Beijing. At this point, the largest remaining Ming fighting force in North China was in Shanhai Pass, where the Great Wall meets the Bohai Sea. Its defender Wu Sangui, wedged between the Shun army within and the Manchus without, decided to surrender to the Manchus and opened the gates for them. The Manchus, having thus entered through the Great Wall, defeated Li Zicheng at the Battle of Shanhai Pass and seized Beijing on June 5. They eventually defeated both the rebel - founded Shun dynasty and the remaining Ming resistance, establishing their rule over all of China as the Qing dynasty.
Opinions about the Wall 's role in the Ming dynasty 's downfall are mixed. Historians such as Arthur Waldron and Julia Lovell are critical of the whole wall - building exercise in light of its ultimate failure in protecting China; the former compared the Great Wall with the failed Maginot Line of the French in World War II. However, independent scholar David Spindler notes that the Wall, being only part of a complex foreign policy, received "disproportionate blame '' because it was the most obvious relic of that policy.
The usefulness of the Great Wall as a defence line against northern nomads became questionable under the Qing dynasty, since their territory encompassed vast areas inside and outside the wall: China proper, Manchuria, and Mongolia were all under Qing control. So instead, the Great Wall became the means to limit Han Chinese movement into the steppes. In the case of Manchuria, considered to be the sacred homeland by the ruling Manchu elites, some parts of the Ming Liaodong Wall were repaired so that they could serve to control Han Chinese movement into Manchuria alongside the newly erected Willow Palisade.
Culturally, the wall 's symbolic role as a line between civilized society and barbarism was suppressed by the Qing, who were keen to weaken the Han culturalism that had been propagated by the Ming. As a result, no special attention was paid to the Great Wall until the mid-Qing dynasty, when Westerners started to show interest in the structure.
The existence of a colossal wall in Asia had circulated in the Middle East and the West even before the first Europeans arrived in China by sea. The late antiquity historian Ammianus Marcellinus (330? -- 395?) mentioned "summits of lofty walls '' enclosing the land of Seres, the country that the Romans believed to be at the eastern end of the Silk Road. In legend, the tribes of Gog and Magog were said to have been locked out by Alexander the Great with walls of steel. Later Arab writers and travellers, such as Rashid - al - Din Hamadani (1248 -- 1318) and Ibn Battuta (1304 -- 1377), would erroneously identify the Great Wall in China with the walls of the Alexander romances. Soon after Europeans reached Ming China in the early 16th century, accounts of the Great Wall started to circulate in Europe, even though no European would see it with their own eyes for another century. The work A Treatise of China and the Adjoyning Regions by Gaspar da Cruz (c. 1520 -- 70) offered an early discussion of the Great Wall in which he noted, "a Wall of an hundred leagues in length. And some will affirme to bee more than a hundred leagues. '' Another early account written by Bishop Juan González de Mendoza (1550 -- 1620) reported a wall five hundred leagues long, but suggested that only one hundred leagues were man - made, with the rest natural rock formations. The Jesuit priest Matteo Ricci (1552 -- 1610) mentioned the Great Wall once in his diary, noting the existence of "a tremendous wall four hundred and five miles long '' that formed part of the northern defences of the Ming Empire.
Europeans first witnessed the Great Wall in the early 1600s. Perhaps the first recorded instance of a European actually entering China via the Great Wall came in 1605, when the Portuguese Jesuit brother Bento de Góis reached the northwestern Jiayu Pass from India. Ivan Petlin 's 1619 deposition for his Russian embassy mission offers an early account based on a first - hand encounter with the Great Wall, and mentions that in the course of the journey his embassy travelled alongside the Great Wall for ten days.
Early European accounts were mostly modest and empirical, closely mirroring contemporary Chinese understanding of the Wall. However, when the Ming Great Wall began to take on a shape recognizable today, foreign accounts of the Wall slid into hyperbole. In the Atlas Sinensis published in 1665, the Jesuit Martino Martini described elaborate but atypical stretches of the Great Wall and generalized such fortifications across the whole northern frontier. Furthermore, Martini erroneously identified the Ming Wall as the same wall built by Qin Shi Huang in the 3rd century BC, thereby exaggerating both the Wall 's antiquity and its size. This misconception was compounded by the China Illustrata of Father Athanasius Kircher (1602 -- 80), which provided pictures of the Great Wall as imagined by a European illustrator. All these and other accounts from missionaries in China contributed to the Orientalism of the eighteenth century, in which a mythical China and its exaggerated Great Wall feature prominently. The French philosopher Voltaire (1694 -- 1774), for example, frequently wrote about the Great Wall, although his feelings towards it oscillate between unreserved admiration and condemnation of it as a "monument to fear ''. The Macartney Embassy of 1793 passed through the Great Wall at Gubeikou on the way to see the Qianlong Emperor in Chengde, who was there for the annual imperial hunt. One of the embassy 's members, John Barrow, later founder of the Royal Geographical Society, spuriously calculated that the amount of stone in the Wall was equivalent to "all the dwelling houses of England and Scotland '' and would suffice to encircle the Earth at the equator twice. The illustrations of the Great Wall by Lieutenant Henry William Parish during this mission would be reproduced in influential works such as Thomas Allom 's 1845 China, in a series of views.
Exposure to such works brought many foreign visitors to the Great Wall after China opened its borders as a result of the nation 's defeat in the Opium Wars of the mid-19th century at the hands of Britain and the other Western powers. The Juyong Pass near Beijing and the "Old Dragon Head, '' where the Great Wall meets the sea at the Shanhai Pass, proved popular destinations for these wall watchers.
The travelogues of the later 19th century in turn further contributed to the elaboration and propagation of the Great Wall myth. Examples of this myth 's growth are the false but widespread belief that the Great Wall of China is visible from the Moon or Mars.
The Xinhai Revolution in 1911 forced the abdication of the last Qing Emperor Puyi and ended China 's last imperial dynasty. The revolutionaries, headed by Sun Yat - sen, were concerned with creating a modern sense of national identity in the chaotic post-imperial era. In contrast to Chinese academics such as Liang Qichao, who tried to counter the West 's fantastic version of the Great Wall, Sun Yat - sen held the view that Qin Shi Huang 's wall preserved the Chinese race, and without it Chinese culture would not have developed enough to expand to the south and assimilate foreign conquerors. Such an endorsement from the "Father of Modern China '' started to transform the Great Wall into a national symbol in the Chinese consciousness, though this transformation was hampered by conflicting views of nationalism with regard to the nascent "new China. ''
The failure of the new Republic of China fanned disillusionment with traditional Chinese culture and ushered in the New Culture Movement and the May Fourth Movement of the mid-1910s and 1920s that aimed to dislodge China 's future trajectory from its past. Naturally, the Great Wall of China came under attack as a symbol of the past. For example, an influential writer of this period, Lu Xun, harshly criticized the "mighty and accursed Great Wall '' in a short essay: "In reality, it has never served any purpose than to make countless workers labour to death in vain... (It) surrounds everyone. ''
"The March of the Volunteers ''
The Sino - Japanese conflict (1931 -- 45) gave the Great Wall a new lease of life in the eyes of the Chinese. During the 1933 defence of the Great Wall, inadequately - equipped Chinese soldiers held off double their number of Japanese troops for several months. Using the cover of the Great Wall, the Chinese -- who were at times only armed with broadswords -- were able to beat off a Japanese advance that had the support of aerial bombardment. With the Chinese forces eventually overrun, the subsequent Tanggu Truce stipulated that the Great Wall was to become a demilitarized zone separating China and the newly created Japanese puppet state of Manchukuo. Even so, the determined defence of the Great Wall made it a symbol of Chinese patriotism and the resoluteness of the Chinese people. The Chinese Communist leader Mao Zedong picked up this symbol in his poetry during his "Long March '' escaping from Kuomintang pursuit. Near the end of the trek in 1935, Mao wrote the poem "Mount Liupan '' that contains the well - known line that would be carved in stone along the Great Wall in the present day: "Those who fail to reach the Great Wall are not true men '' (不 到 长城 非 好汉). Another noteworthy reference to the Great Wall is in the song "The March of the Volunteers '', whose words came from a stanza in Tian Han 's 1934 poem entitled "The Great Wall ''. The song, originally from the anti-Japanese movie Children of Troubled Times, enjoyed continued popularity in China and was selected as the provisional national anthem of the People 's Republic of China (PRC) at its establishment in 1949.
In 1952, the scholar - turned - bureaucrat Guo Moruo laid out the first modern proposal to repair the Great Wall. Five years later, the renovated Badaling became the first section to be opened to the public since the establishment of the PRC. The Badaling Great Wall has since become a staple stop for foreign dignitaries who come to China, beginning with Nepali prime minister Bishweshwar Prasad Koirala in 1960, and most notably the American president Richard Nixon in his historic 1972 visit to China. To date, Badaling is still the most visited stretch of the Great Wall.
Other stretches did not fare so well. During the Cultural Revolution (1966 -- 76), hundreds of kilometres of the Great Wall -- already damaged in the wars of the last century and eroded by wind and rain -- were deliberately destroyed by fervent Red Guards who regarded it as part of the "Four Olds '' to be eradicated in the new China. Quarrying machines and even dynamite were used to dismantle the Wall, and the pilfered materials were used for construction.
As China opened up in the 1980s, reformist leader Deng Xiaoping initiated the "Love our China and restore our Great Wall '' campaign (爱 我 中华, 修 我 长城) to repair and preserve the Great Wall. The Great Wall was designated as a UNESCO World Heritage Site in 1987. However, while tourism boomed over the years, slipshod restoration methods have left sections of the Great Wall near Beijing "looking like a Hollywood set '' in the words of the National Geographic News. The less prominent stretches of the Great Wall did not get as much attention. In 2002 the New York - based World Monuments Fund put the Great Wall on its list of the World 's 100 Most Endangered Sites. In 2003 the Chinese government began to enact laws to protect the Great Wall.
In China, one of the first individuals to attempt a multi-dynastic history of the Great Wall was the 17th - century scholar Gu Yanwu. More recently, in the 1930s and 1940s, Wang Guoliang (王國良) and Shou Pengfei (壽 鵬 飛) produced exhaustive studies that culled extant literary records to date and mapped the courses of early border walls. However, these efforts were based solely on written records that contain obscure place names and elusive literary references.
The rise of modern archeology has contributed much to the study of the Great Wall, either in corroborating existing research or in refuting it. However, these efforts do not yet give a full picture of the Great Wall 's history, as many wall sites dating to the Period of Disunity (220 -- 589) had been overlaid by the extant Ming Great Wall.
Western scholarship of the Great Wall was, until recently, affected by misconceptions derived from traditional accounts of the Wall. When the Jesuits brought back the first reports of the Wall to the West, European scholars were puzzled that Marco Polo had not mentioned the presumably perennial "Great Wall '' in his Travels. Some 17th - century scholars reasoned that the Wall must have been built in the Ming dynasty, after Marco Polo 's departure. This view was soon replaced by another that argued, against Polo 's own account, that the Venetian merchant had come to China from the south and so did not come into contact with the Wall. Thus, Father Martino Martini 's mistaken claim that the Wall had "lasted right up to the present time without injury or destruction '' since the time of Qin was accepted as fact by the 18th - century philosophes.
Since then, many scholars have operated under the belief that the Great Wall continually defended China 's border against the steppe nomads for two thousand years. For example, the 18th - century sinologist Joseph de Guignes assigned macrohistorical importance to such walls when he advanced the theory that the Qin construction forced the Xiongnu to migrate west to Europe and, becoming known as the Huns, ultimately contributed to the decline of the Roman Empire. Some have attempted to make general statements regarding Chinese society and foreign policy based on the conception of a perennial Great Wall: Karl Marx took the Wall to represent the stagnation of the Chinese society and economy, Owen Lattimore supposed that the Great Wall demonstrated a need to divide the nomadic way of life from the agricultural communities of China, and John K. Fairbank posited that the Wall played a part in upholding the Sinocentric world order.
Despite the significance that the Great Wall seemed to have, scholarly treatment of the Wall itself remained scant during the 20th century. Joseph Needham bemoaned this dearth when he was compiling the section on walls for his Science and Civilisation in China: "There is no lack of travelers ' description of the Great Wall, but studies based on modern scholarship are few and far between, whether in Chinese or Western languages. '' In 1990, Arthur Waldron published the influential The Great Wall: From History to Myth, where he challenged the notion of a unitary Great Wall maintained since antiquity, dismissing it as a modern myth. Waldron 's approach prompted a re-examination of the Wall in Western scholarship. Still, as of 2008, there is not yet a full authoritative text in any language that is devoted to the Great Wall. The reason for this, according to The New Yorker journalist Peter Hessler, is that the Great Wall fits into neither the study of political institutions (favoured by Chinese historians) nor the excavation of tombs (favoured by Chinese archeologists). Some of the void left by academia is being filled by independent research from Great Wall enthusiasts such as ex-Xinhua reporter Cheng Dalin (成 大林) and self - funded scholar David Spindler.
what is the typical alcohol content of wine | Unit of alcohol - wikipedia
Units of alcohol are used in the United Kingdom (UK) as a measure to quantify the actual alcoholic content within a given volume of an alcoholic beverage, in order to provide guidance on total alcohol consumption.
A number of other countries (including Australia, Canada, New Zealand, and the US) use the concept of a standard drink, the definition of which varies from country to country, for the same purpose. Standard drinks were referred to in the first UK guidelines (1984) that published "safe limits '' for drinking, but these were replaced by references to "alcohol units '' in the 1987 guidelines and the latter term has been used in all subsequent UK guidance.
One unit of alcohol (UK) is defined as 10 millilitres (8 grams) of pure alcohol. Typical drinks (i.e., typical quantities or servings of common alcoholic beverages) may contain 1 -- 3 units of alcohol.
Containers of alcoholic beverages sold directly to UK consumers are normally labelled to indicate the number of units of alcohol in a typical serving of the beverage (optional) and in the full container (can or bottle), as well as information about responsible drinking.
As an approximate guideline, a typical healthy adult can metabolise (break down) about one unit of alcohol per hour, although this may vary depending on sex, age, weight, health, and many other factors.
The number of UK units of alcohol in a drink can be determined by multiplying the volume of the drink (in millilitres) by its percentage ABV, and dividing by 1000.
For example, one imperial pint (568 ml) of beer at 4 % alcohol by volume (ABV) contains:
568 (ml) × 4 (% ABV) ÷ 1000 = 2.3 units (approximately).
The formula uses ml ÷ 1000. This results in exactly one unit per percentage point per litre of any alcoholic beverage.
The formula can be simplified for everyday use by expressing the serving size in centilitres and the alcohol content literally as a percentage.
Thus, a 750 ml bottle of wine at 12 % ABV contains 75 cl × 12 % = 9 units. Alternatively, the serving size in litres multiplied by the alcohol content as a number, the above example giving 0.75 × 12 = 9 units.
Both pieces of input data are usually mentioned in this form on the bottle, so they are easy to retrieve.
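As a quick illustration of the calculation described above, here is a minimal Python sketch (not part of the original article; the function name and example servings are chosen purely for illustration) that applies the volume × ABV ÷ 1000 rule:

    def uk_units(volume_ml: float, abv_percent: float) -> float:
        """Return UK alcohol units: volume in ml multiplied by ABV (%) and divided by 1000."""
        return volume_ml * abv_percent / 1000

    # Worked examples matching the figures quoted in the text.
    print(round(uk_units(568, 4.0), 1))   # imperial pint of 4 % beer -> 2.3 units
    print(round(uk_units(750, 12.0), 1))  # 750 ml bottle of 12 % wine -> 9.0 units
    print(round(uk_units(25, 40.0), 1))   # 25 ml pub measure of 40 % spirit -> 1.0 unit

The centilitre and litre shortcuts described above are the same arithmetic with the decimal point moved, so they give identical results.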
UK alcohol companies pledged in March 2011 to implement an innovative health labelling scheme to provide more information about responsible drinking on alcohol labels and containers. This voluntary scheme is the first of its kind in Europe and has been developed in conjunction with the UK Department of Health. The pledge stated:
At the end of 2014, 101 companies had committed to the pledge labelling scheme.
There are five elements included within the overall labelling scheme, the first three being mandatory, and the last two optional:
Drinks companies had pledged to display the three mandatory items on 80 % of drinks containers on shelves in the UK off - trade by the end of December 2013. A report published in Nov 2014, confirmed that UK drinks producers had delivered on that pledge with a 79.3 % compliance with the pledge elements as measured by products on shelf. Compared with labels from 2008 on a like - for - like basis, information on Unit alcohol content had increased by 46 %; 91 % of products displayed alcohol and pregnancy warnings (18 % in 2008); and 75 % showed the Chief Medical Officers ' lower risk daily guidelines (6 % in 2008).
It is sometimes misleadingly stated that there is one unit per half - pint of beer, or small glass of wine, or single measure of spirits. However, such statements do not take into account the various strengths and volumes supplied in practice.
For example, the ABV of beer typically varies from 3.5 % to 5.5 %. A typical "medium '' glass of wine with 175 ml at 12 % ABV has 2.1 units. And spirits, although typically 35 -- 40 % ABV, have single measures of 25 ml or 35 ml (so 1 or 1.4 units) depending on location.
The misleading nature of "one unit per half - pint of beer, or small glass of wine, or single measure of spirits '' can lead to people underestimating their alcohol intake.
Most spirits sold in the United Kingdom have 40 % ABV or slightly less. In England, a single pub measure (25 ml) of a spirit contains one unit. However, a larger 35 ml measure is increasingly used (and in particular is standard in Northern Ireland), which contains 1.4 units of alcohol at 40 % ABV. Sellers of spirits by the glass must state the capacity of their standard measure in ml.
On average, it takes about one hour for the body to metabolise (break down) one unit of alcohol. However, this will vary with body weight, sex, age, personal metabolic rate, recent food intake, the type and strength of the alcohol, and medications taken. Alcohol may be metabolised more slowly if liver function is impaired.
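Building on the rough guideline above of one unit per hour, the short sketch below (an illustrative estimate only, not medical advice; the constant and function name are invented for this example) combines the unit formula with that approximate clearance rate:

    UNITS_PER_HOUR = 1.0  # approximate average clearance rate taken from the guideline above

    def hours_to_metabolise(volume_ml: float, abv_percent: float,
                            units_per_hour: float = UNITS_PER_HOUR) -> float:
        """Estimate hours to clear one drink, assuming a constant clearance rate."""
        units = volume_ml * abv_percent / 1000
        return units / units_per_hour

    # Example: a 175 ml glass of 12 % wine (about 2.1 units) -> roughly 2 hours.
    print(round(hours_to_metabolise(175, 12.0), 1))

As the paragraph above notes, the actual rate varies with weight, sex, age and other factors, so such a figure is only a rough indication.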
From 1992 to 1995, the UK government advised that men should drink no more than 21 units per week, and women no more than 14. (The difference between the sexes was due to the typically lower weight and water - to - body - mass ratio of women.) The Times reported in October 2007 that these limits had been "plucked out of the air '' and had no scientific basis.
This was changed after a government study showed that many people were in effect "saving up '' their units and using them at the end of the week, a phenomenon referred to as binge drinking. Since 1995 the advice was that regular consumption of 3 -- 4 units a day for men, or 2 -- 3 units a day for women, would not pose significant health risks, but that consistently drinking four or more units a day (men), or three or more units a day (women), is not advisable.
An international study of about 6,000 men and 11,000 women for a total of 75,000 person - years found that people who reported that they drank more than a threshold value of 2 units of alcohol a day had a higher risk of fractures than non-drinkers. For example, those who drank over 3 units a day had nearly twice the risk of a hip fracture.
what type of economic system does united kingdom have | Economy of the United Kingdom - wikipedia
The economy of the United Kingdom is highly developed and market - oriented. It is the fifth - largest national economy in the world measured by nominal gross domestic product (GDP), ninth - largest measured by purchasing power parity (PPP), and nineteenth - largest measured by GDP per capita, comprising 3.5 % of world GDP.
In 2016, the UK was the tenth - largest goods exporter in the world and the fifth - largest goods importer. It also had the second - largest inward foreign direct investment, and the third - largest outward foreign direct investment. The UK is one of the most globalised economies, and it is composed of the economies of England, Scotland, Wales and Northern Ireland.
The service sector dominates the UK economy, contributing around 80 % of GDP; the financial services industry is particularly important, and London is the world 's largest financial centre. Britain 's aerospace industry is the second - largest national aerospace industry. Its pharmaceutical industry, the tenth - largest in the world, plays an important role in the economy. Of the world 's 500 largest companies, 26 are headquartered in the UK. The economy is boosted by North Sea oil and gas production; its reserves were estimated at 2.8 billion barrels in 2016, although it has been a net importer of oil since 2005. There are significant regional variations in prosperity, with South East England and North East Scotland being the richest areas per capita. The size of London 's economy makes it one of the largest cities by GDP in Europe.
In the 18th century the UK was the first country to industrialise, and during the 19th century it had a dominant role in the global economy, accounting for 9.1 % of the world 's GDP in 1870. From the late 19th century the Second Industrial Revolution was also taking place rapidly in the United States and the German Empire; this presented an increasing economic challenge for the UK. The costs of fighting World War I and World War II further weakened the UK 's relative position. In the 21st century, however, it remains a global power and has an influential role in the world economy.
Government involvement in the British economy is primarily exercised by Her Majesty 's Treasury, headed by the Chancellor of the Exchequer, and the Department for Business, Energy and Industrial Strategy. Since 1979 management of the economy has followed a broadly laissez - faire approach. The Bank of England is the UK 's central bank and since 1997 its Monetary Policy Committee has been responsible for setting interest rates, quantitative easing, and forward guidance.
The currency of the UK is the pound sterling, which is the world 's third - largest reserve currency after the United States dollar and the euro, and is also one of the ten most - valued currencies in the world.
The UK is a member of the Commonwealth, the European Union (currently until 30 March 2019), the G7, the G20, the International Monetary Fund, the Organisation for Security and Co-operation in Europe, the World Bank, the World Trade Organisation, Asian Infrastructure Investment Bank and the United Nations.
After the Second World War, a new Labour government fully nationalised the Bank of England, civil aviation, telephone networks, railways, gas, electricity, and the coal, iron and steel industries, affecting 2.3 million workers. Post-war, the United Kingdom enjoyed a long period without a major recession; there was a rapid growth in prosperity in the 1950s and 1960s, with unemployment staying low and not exceeding 3.5 % until the early 1970s. The annual rate of growth between 1960 and 1973 averaged 2.9 %, although this figure was far behind other European countries such as France, West Germany and Italy.
Deindustrialisation meant the closure of operations in mining, heavy industry, and manufacturing, resulting in the loss of highly paid working - class jobs. The UK 's share of manufacturing output had risen from 9.5 % in 1830, during the Industrial Revolution, to 22.9 % in the 1870s. It fell to 13.6 % by 1913, 10.7 % by 1938, and 4.9 % by 1973. Overseas competition, lack of innovation, trade unionism, the welfare state, loss of the British Empire, and cultural attitudes have all been put forward as explanations for the industrial decline. It reached crisis point in the 1970s, with a worldwide energy crisis, high inflation, and a dramatic influx of low - cost manufactured goods from Asia. Coal mining quickly collapsed and practically disappeared by the 21st century. Railways were decrepit, more textile mills closed than opened, steel employment fell sharply, and the car - making industry suffered. Popular responses varied a great deal; Tim Strangleman et al. found a range of responses from the affected workers: Some nostalgically invoked a glorious industrial past or the bygone empire to cope with their new - found personal economic insecurity, many turned to exclusionary Englishness, and others looked to the European Union for help. By the 2010s, grievances had accumulated enough to have a political impact. The United Kingdom Independence Party (UKIP), based in working - class towns, gained an increasing share of the vote while warning against the dangers of immigration. Political reverberations came to a head in the popular vote in favour of Brexit in 2016.
During the 1973 oil crisis, the 1973 -- 74 stock market crash, and the secondary banking crisis of 1973 -- 75, the British economy fell into the 1973 -- 75 recession and the government of Edward Heath was ousted by the Labour Party under Harold Wilson, which had previously governed from 1964 to 1970. Wilson formed a minority government in March 1974 after the general election on 28 February ended in a hung parliament. Wilson secured a three - seat overall majority in a second election in October that year.
The UK recorded weaker growth than many other European nations in the 1970s; even after the recession, the economy was blighted by rising unemployment and double - digit inflation, which exceeded 20 % more than once and was rarely below 10 % after 1973.
In 1976, the UK was forced to apply for a loan of £ 2.3 billion from the International Monetary Fund. Denis Healey, then Chancellor of the Exchequer, was required to implement public spending cuts and other economic reforms in order to secure the loan, and for a while the British economy improved, with growth of 4.3 % in early 1979. However, following the Winter of Discontent, when the UK was hit by numerous public sector strikes, the government of James Callaghan lost a vote of no confidence in March 1979. This triggered the general election on 3 May 1979 which resulted in Margaret Thatcher 's Conservative Party forming a new government.
A new period of neo-liberal economics began with this election. During the 1980s, many state - owned industries and utilities were privatised, taxes cut, trade union reforms passed and markets deregulated. GDP fell by 5.9 % initially, but growth subsequently returned and rose to an annual rate of 5 % at its peak in 1988, one of the highest rates of any country in Europe.
Thatcher 's modernisation of the economy was far from trouble - free; her battle with inflation, which in 1980 had risen to 21.9 %, resulted in a substantial increase in unemployment from 5.3 % in 1979 to over 10.4 % by the start of 1982, peaking at nearly 11.9 % in 1984 -- a level not seen in Britain since the Great Depression. The rise in unemployment coincided with the early 1980s global recession, after which UK GDP did not reach its pre-recession rate until 1983. In spite of this, Thatcher was re-elected in June 1983 with a landslide majority. Inflation had fallen to 3.7 %, while interest rates were relatively high at 9.56 %.
The increase in unemployment was largely due to the government 's economic policy which resulted in the closure of outdated factories and coal pits. Manufacturing in England and Wales declined from around 38 % of jobs in 1961 to around 22 % in 1981. This trend continued for most of the 1980s, with newer industries and the service sector enjoying significant growth. Many jobs were also lost as manufacturing became more efficient and fewer people were required to work in the sector. Unemployment had fallen below 3 million by the time of Thatcher 's third successive election victory in June 1987; and by the end of 1989 it was down to 1.6 million.
Britain 's economy slid into another global recession in late 1990; it shrank by a total of 6 % from peak to trough, and unemployment increased from around 6.9 % in spring 1990 to nearly 10.7 % by the end of 1993. However, inflation dropped from 10.9 % in 1990 to 1.3 % three years later. The subsequent economic recovery was extremely strong, and unlike after the early 1980s recession, the recovery saw a rapid and substantial fall in unemployment, which was down to 7.2 % by 1997, although the popularity of the Conservative government had failed to improve with the economic upturn. The government won a fourth successive election in 1992 under John Major, who had succeeded Thatcher in November 1990, but soon afterwards came Black Wednesday, which damaged the Conservative government 's reputation for economic competence, and from that stage onwards, the Labour Party was ascendant in the opinion polls, particularly in the immediate aftermath of Tony Blair 's election as party leader in July 1994 after the sudden death of his predecessor John Smith.
Despite two recessions, wages grew consistently by around 2 % per year in real terms from 1980 until 1997, and continued to grow until 2008.
In May 1997, Labour, led by Tony Blair, won the general election by a landslide after 18 years of Conservative government, and inherited a strong economy with low inflation, falling unemployment, and a current account surplus. Four days after the election, Gordon Brown, the new Chancellor of the Exchequer, gave the Bank of England the freedom to control monetary policy, which until then had been directed by the government. During Blair 's 10 years in office there were 40 successive quarters of economic growth, lasting until the second quarter of 2008. GDP growth, which had briefly reached 4 % per year in the early 1990s, gently declining thereafter, was relatively anaemic compared to prior decades, such as the 6.5 % per year peak in the early 1970s, although growth was smoother and more consistent. Annual growth rates averaged 2.68 % between 1992 and 2007, with the finance sector accounting for a greater part than previously. The period saw one of the highest GDP growth rates of any developed economy and the strongest of any European nation. At the same time, household debt rose from £ 420 billion in 1994 to £ 1 trillion in 2004 and £ 1.46 trillion in 2008 -- more than the entire GDP of the UK.
This extended period of growth ended in Q2 of 2008 when the United Kingdom suddenly entered a recession -- its first for nearly two decades -- brought about by the global financial crisis. The UK was particularly vulnerable to the crisis because its financial sector was the most highly leveraged of any major economy. Beginning with the collapse of Northern Rock, which was taken into public ownership in February 2008, other banks had to be partly nationalised. The Royal Bank of Scotland Group, at its peak the fifth - largest bank in the world by market capitalisation, was effectively nationalised in October 2008. By mid-2009, HM Treasury had a 70.33 % controlling shareholding in RBS, and a 43 % shareholding, through UK Financial Investments Limited, in Lloyds Banking Group. The Great Recession, as it came to be known, saw unemployment rise from just over 1.6 million in January 2008 to nearly 2.5 million by October 2009.
The UK had been one of the strongest economies in terms of inflation, interest rates and unemployment, all of which remained lower until the 2008 -- 09 recession. In August 2008 the IMF warned that the country 's outlook had worsened due to a twin shock: financial turmoil and rising commodity prices. Both developments harmed the UK more than most developed countries, as it obtained revenue from exporting financial services while running deficits in goods and commodities, including food. In 2007, the UK had the world 's third largest current account deficit, due mainly to a large deficit in manufactured goods. In May 2008, the IMF advised the UK government to broaden the scope of fiscal policy to promote external balance. The UK 's output per hour worked was on a par with the average for the "old '' EU - 15 countries.
In March 2009, the Bank of England cut interest rates to a historic low of 0.5 % and began quantitative easing to boost lending and shore up the economy. The UK exited the Great Recession in Q4 of 2009 having experienced six consecutive quarters of negative growth, shrinking by 6.03 % from peak to trough, making it the longest recession since records began and the deepest recession since World War II. Support for Labour slumped during the recession, and the general election of 2010 resulted in a coalition government being formed by the Conservatives and the Liberal Democrats, which made deep spending cuts in order to ease the budget deficit.
In 2011, household, financial, and business debts stood at 420 % of GDP in the UK, compared to 279 % in Japan, 253 % in France, 209 % in the United States, 206 % in Canada, and 198 % in Germany. As the world 's most indebted country, spending and investment in the UK were held back after the recession, creating economic malaise. However, it was recognised that government borrowing, which rose from 52 % to 76 % of GDP, helped to avoid a 1930s - style depression. Within three years of the general election, government cuts had led to public sector job losses well into six figures, but the private sector enjoyed strong jobs growth.
The 10 years following the Great Recession were characterised by extremes. In 2015, employment was at its highest since records began, and GDP growth had become the fastest in the Group of Seven (G7) and Europe, but workforce productivity was the worst since the 1820s, with any growth attributed to a fall in working hours. Output per hour worked was 18 % below the average for the rest of the G7. Real wage growth was the worst since the 1860s, and the Governor of the Bank of England described it as a lost decade. Wages fell by 10 % in real terms in the eight years to 2016, whilst they grew in Germany by 14 %, in France by 11 %, and across the OECD by an average of 6.7 %. Between 2009 and 2015, the current account deficit rose to a record high of 5.2 % of GDP (£ 96.2 bn), the highest as a percentage of GDP in the developed world. In Q4 2015, it exceeded 7 %, a level not witnessed during peacetime since records began in 1772. The UK relied on foreign investors to plug the shortfall in its balance of payments.
A rise in unsecured household debt added to questions over the sustainability of the economic recovery in 2016. The Bank of England insisted there was no cause for alarm, despite having said two years earlier that the recovery was "neither balanced nor sustainable ''. It was still very unbalanced, with consumption accounting for 100 % of growth in that year. Households ran an unprecedented deficit of 3 % of GDP. Unemployment continued to fall, resulting in a 42 - year low of 4.4 % in June 2017, but real earnings also fell.
Following the UK 's decision to leave the European Union, the Bank of England cut interest rates to a new historic low of 0.25 % for just over a year to bolster confidence in the economy. It also bought government and corporate bonds, taking the amount of quantitative easing since the start of the Great Recession to £ 435bn.
Government involvement in the economy is primarily exercised by HM Treasury, headed by the Chancellor of the Exchequer. In recent years, the UK economy has been managed in accordance with principles of market liberalisation and low taxation and regulation. Since 1997, the Bank of England 's Monetary Policy Committee, headed by the Governor of the Bank of England, has been responsible for setting interest rates at the level necessary to achieve the overall inflation target for the economy that is set by the Chancellor each year. The Scottish Government, subject to the approval of the Scottish Parliament, has the power to vary the basic rate of income tax payable in Scotland by plus or minus 3 pence in the pound, though this power has not yet been exercised.
In the 20 - year period from 1986 / 87 to 2006 / 07 government spending in the UK averaged around 40 % of GDP. In July 2007, the UK had government debt at 35.5 % of GDP. As a result of the 2007 -- 2010 financial crisis and the late - 2000s global recession, government spending increased to a historically high level of 48 % of GDP in 2009 -- 10, partly as a result of the cost of a series of bank bailouts. In terms of net government debt as a percentage of GDP, at the end of June 2014 public sector net debt excluding financial sector interventions was £ 1304.6 billion, equivalent to 77.3 % of GDP. For the financial year of 2013 -- 2014 public sector net borrowing was £ 93.7 billion. This was £ 13.0 billion higher than in the financial year of 2012 -- 2013.
Taxation in the United Kingdom may involve payments to at least two different levels of government: local government and central government (HM Revenue & Customs). Local government is financed by grants from central government funds, business rates, council tax, and, increasingly, fees and charges such as those from on - street parking. Central government revenues are mainly from income tax, national insurance contributions, value added tax, corporation tax and fuel duty.
Agriculture in the UK is intensive, highly mechanised, and efficient by European standards, producing about 60 % of food needs, with less than 1.6 % of the labour force (535,000 workers). It contributes around 0.6 % of British national value added. Around two - thirds of the production is devoted to livestock, one - third to arable crops. Agriculture is subsidised by the European Union 's Common Agricultural Policy.
The UK retains a significant, though reduced, fishing industry. Its fleets, based in towns such as Kingston upon Hull, Grimsby, Fleetwood, Newlyn, Great Yarmouth, Peterhead, Fraserburgh, and Lowestoft, bring home fish ranging from sole to herring.
The Blue Book 2013 reports that "Agriculture '' added gross value of £ 9,438 million to the UK economy in 2011.
The construction industry of the United Kingdom contributed gross value of £ 86 billion to the UK economy in 2011. The industry employed around 2.2 million people in the fourth quarter of 2009. There were around 194,000 construction firms in the United Kingdom in 2009, of which around 75,400 employed just one person and 62 employed over 1,200 people. In 2009 the construction industry in the UK received total orders of around £ 18.7 billion from the private sector and £ 15.1 billion from the public sector.
The largest construction project in the UK is Crossrail. Due to open in 2018, it will be a new railway line running east to west through London and into the surrounding countryside with a branch to Heathrow Airport. The main feature of the project is construction of 42 km (26 mi) of new tunnels connecting stations in central London. It is also Europe 's biggest construction project with a £ 15 billion projected cost.
Prospective construction projects include the High Speed 2 line between London and the West Midlands and Crossrail 2.
The Blue Book 2013 reports that this sector added gross value of £ 33,289 million to the UK economy in 2011. The United Kingdom is expected to launch the building of new nuclear reactors to replace existing generators and to boost UK 's energy reserves.
In the 1970s, manufacturing accounted for 25 percent of the economy. Total employment in manufacturing fell from 7.1 million in 1979 to 4.5 million in 1992 and only 2.7 million in 2016, when it accounted for 10 % of the economy.
In 2011 the UK manufacturing sector generated approximately £ 140,539 million in gross value added and employed around 2.6 million people. Of the approximately £ 16 billion invested in R&D by UK businesses in 2008, approximately £ 12 billion was by manufacturing businesses. In 2008, the UK was the sixth - largest manufacturer in the world measured by value of output.
In 2008 around 180,000 people in the UK were directly employed in the UK automotive manufacturing sector. In that year the sector had a turnover of £ 52.5 billion, generated £ 26.6 billion of exports and produced around 1.45 million passenger vehicles and 203,000 commercial vehicles. The UK is a major centre for engine manufacturing, and in 2008 around 3.16 million engines were produced in the country.
The aerospace industry of the UK is the second - or third - largest aerospace industry in the world, depending upon the method of measurement. The industry employs around 113,000 people directly and around 276,000 indirectly and has an annual turnover of around £ 20 billion. British companies with a major presence in the industry include BAE Systems (the world 's second - largest defence contractor) and Rolls - Royce (the world 's second - largest aircraft engine maker). Foreign aerospace companies active in the UK include EADS and its Airbus subsidiary, which employs over 13,000 people in the UK.
The pharmaceutical industry employs around 67,000 people in the UK and in 2007 contributed £ 8.4 billion to the UK 's GDP and invested a total of £ 3.9 billion in research and development. In 2007 exports of pharmaceutical products from the UK totalled £ 14.6 billion, creating a trade surplus in pharmaceutical products of £ 4.3 billion. The UK is home to GlaxoSmithKline and AstraZeneca, respectively the world 's third - and seventh - largest pharmaceutical companies.
The Blue Book 2013 reports that this sector added gross value of £ 31,380 million to the UK economy in 2011. In 2007 the UK had a total energy output of 9.5 quadrillion Btus (10 exajoules), of which the composition was oil (38 %), natural gas (36 %), coal (13 %), nuclear (11 %) and other renewables (2 %). In 2009, the UK produced 1.5 million barrels per day (bbl / d) of oil and consumed 1.7 million bbl / d. Production is now in decline and the UK has been a net importer of oil since 2005. As of 2010 the UK has around 3.1 billion barrels of proven crude oil reserves, the largest of any EU member state.
In 2009 the UK was the 13th largest producer of natural gas in the world and the largest producer in the EU. Production is now in decline and the UK has been a net importer of natural gas since 2004. In 2009 the UK produced 19.7 million tons of coal and consumed 60.2 million tons. In 2005 it had proven recoverable coal reserves of 171 million tons. It has been estimated that identified onshore areas have the potential to produce between 7 billion tonnes and 16 billion tonnes of coal through underground coal gasification (UCG). Based on current UK coal consumption, these volumes represent reserves that could last the UK between 200 and 400 years.
The UK is home to a number of large energy companies, including two of the six oil and gas "supermajors '' -- BP and Royal Dutch Shell.
The UK is also rich in a number of natural resources including coal, tin, limestone, iron ore, salt, clay, chalk, gypsum, lead and silica.
The service sector is the dominant sector of the UK economy, and contributes around 80.2 % of GDP as of 2016.
The creative industries accounted for 7 % of gross value added (GVA) in 2005 and grew at an average of 6 % per annum between 1997 and 2005. Key areas include London and the North West of England, which are the two largest creative industry clusters in Europe. According to the British Fashion Council, the fashion industry 's contribution to the UK economy in 2014 is ₤ 26 billion, up from ₤ 21 billion pounds in 2009. The UK is home to the world 's largest advertising company, WPP.
According to The Blue Book 2013 the education sector added gross value of £ 84,556 million in 2011 whilst human health and social work activities added £ 104,026 million in 2011.
In the UK the majority of the healthcare sector consists of the state funded and operated National Health Service (NHS), which accounts for over 80 % of all healthcare spending in the UK and has a workforce of around 1.7 million, making it the largest employer in Europe, and putting it amongst the largest employers in the world. The NHS operates independently in each of the four constituent countries of the UK. The NHS in England is by far the largest of the four parts and had a turnover of £ 92.5 billion in 2008.
In 2007 / 08 higher education institutions in the UK had a total income of £ 23 billion and employed a total of 169,995 staff. In 2007 / 08 there were 2,306,000 higher education students in the UK (1,922,180 in England, 210,180 in Scotland, 125,540 in Wales and 48,200 in Northern Ireland).
The UK financial services industry added gross value of £ 116,363 million to the UK economy in 2011. The UK 's exports of financial and business services make a significant positive contribution towards the country 's balance of payments.
London is a major centre for international business and commerce and is one of the three "command centres '' of the global economy (alongside New York City and Tokyo). There are over 500 banks with offices in London, and it is the leading international centre for banking, insurance, Eurobonds, foreign exchange trading and energy futures. London 's financial services industry is primarily based in the City of London and Canary Wharf. The City houses the London Stock Exchange, the London International Financial Futures and Options Exchange, the London Metal Exchange, Lloyd 's of London, and the Bank of England. Canary Wharf began development in the 1980s and is now home to major financial institutions such as Barclays Bank, Citigroup and HSBC, as well as the UK Financial Services Authority. London is also a major centre for other business and professional services, and four of the six largest law firms in the world are headquartered there.
Several other major UK cities have large financial sectors and related services. Edinburgh has one of the largest financial centres in Europe and is home to the headquarters of the Royal Bank of Scotland Group and Standard Life. Leeds is now the UK 's largest centre for business and financial services outside London, and the largest centre for legal services in the UK after London.
According to a series of research papers and reports published in the mid-2010s, Britain 's financial firms provide sophisticated methods to launder billions of pounds annually, including money from the proceeds of corruption around the world as well as the world 's drug trade, thus making the City a global hub for illicit finance. According to a Deutsche Bank study published in March 2015, Britain was attracting circa one billion pounds of capital inflows a month not recorded by official statistics, up to 40 percent probably originating from Russia, which implies misreporting by financial institutions, sophisticated tax avoidance, and the UK 's "safe - haven '' reputation.
The Blue Book 2013 reports that this industry added gross value of £ 36,554 million to the UK economy in 2011. Intercontinental Hotels Group (IHG), headquartered in Denham, Buckinghamshire, is currently the world 's largest hotelier, owning and operating hotel brands such as Intercontinental, Holiday Inn and Crowne Plaza. The international arm of Hilton Hotels, the world 's fifth largest hotelier, used to be owned by Ladbrokes Plc, and was headquartered in Watford, Hertfordshire from 1987 to 2005. It was sold to Hilton Hotels Group of the USA in December 2005.
A study in 2014 found that prostitution and associated services added over £ 5 billion to the economy each year.
The Blue Book 2013 reports that this sector added gross value of £ 70,400 million to the UK economy in 2011.
The real estate and renting activities sector includes the letting of dwellings and other related business support activities. The Blue Book 2013 reports that real estate industry added gross value of £ 143,641 million in 2011. Notable real estate companies in the United Kingdom include British Land, Land Securities, and The Peel Group.
The UK property market boomed for the seven years up to 2008, and in some areas property trebled in value over that period. The increase in property prices had a number of causes: low interest rates, credit growth, economic growth, rapid growth in buy - to - let property investment, foreign property investment in London and planning restrictions on the supply of new housing. In England and Wales between 1997 and 2016, average house prices increased by 259 %, while earnings increased by 68 %. An average home cost 3.6 times annual earnings in 1997 compared to 7.6 in 2016.
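Those growth figures are consistent with the quoted affordability ratios. A minimal sketch of the check, where the 1997 ratio of 3.6, the 259 % price growth and the 68 % earnings growth come from the text and the absolute 1997 earnings figure is a hypothetical placeholder:

```python
# Checking the house-price-to-earnings ratio quoted above. The growth rates and
# the 1997 ratio come from the text; the starting earnings figure is hypothetical.
earnings_1997 = 17_000                      # hypothetical average annual earnings, GBP
price_1997 = 3.6 * earnings_1997            # 1997 ratio of 3.6 quoted in the text

price_2016 = price_1997 * (1 + 2.59)        # house prices up 259% between 1997 and 2016
earnings_2016 = earnings_1997 * (1 + 0.68)  # earnings up 68% over the same period

ratio_2016 = price_2016 / earnings_2016
print(f"2016 price-to-earnings ratio: {ratio_2016:.1f}")  # ~7.7, close to the quoted 7.6
```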
Rent has nearly doubled as a share of GDP since 1985, and is now larger than the manufacturing sector. In 2014, rent and imputed rent -- an estimate of how much home - owners would pay if they rented their home -- accounted for 12.3 % of GDP.
Tourism is very important to the British economy. With over 32.6 million tourists arriving in 2014, the United Kingdom is the eighth most - visited tourist destination in the world. London is the second most visited city in the world with 17.4 million visitors in 2014, behind first - placed Hong Kong (27.8 million visitors).
The transport and storage industry added gross value of £ 59,179 million to the UK economy in 2011 and the telecommunication industry added a gross value of £ 25,098 million in the same year.
The UK has a radial road network of 46,904 kilometres (29,145 mi) of main roads, with a motorway network of 3,497 kilometres (2,173 mi). There are a further 213,750 kilometres (132,818 mi) of paved roads. The railway infrastructure company Network Rail owns and operates the majority of the 16,116 km (10,014 mi) railway lines in Great Britain and a further 303 route km (189 route mi) in Northern Ireland is owned and operated by Northern Ireland Railways. Since privatisation, around 20 Train Operating Companies operate the passenger trains. Urban rail networks are well developed in major cities including Glasgow, Liverpool and London. The government is to spend £ 30 billion on a new high - speed railway line, HS2, to be operational by 2026. Crossrail, due to open in December 2018 in London, is Europe 's largest construction project with a £ 15 billion projected cost.
The Highways Agency is the executive agency responsible for trunk roads and motorways in England apart from the privately owned and operated M6 Toll. The Department for Transport states that traffic congestion is one of the most serious transport problems and that it could cost England an extra £ 22 billion in wasted time by 2025 if left unchecked. According to the government - sponsored Eddington report of 2006, congestion is in danger of harming the economy, unless tackled by road pricing and expansion of the transport network.
In the year from October 2009 to September 2010, UK airports handled a total of 211.4 million passengers. In that period the three largest airports were London Heathrow Airport (65.6 million passengers), Gatwick Airport (31.5 million passengers) and London Stansted Airport (18.9 million passengers). London Heathrow Airport, located 24 kilometres (15 mi) west of the capital, has the most international passenger traffic of any airport in the world, and is the hub for the UK flag carrier British Airways, as well as BMI and Virgin Atlantic. London 's six commercial airports form the world 's largest city airport system measured by passenger traffic.
This sector includes the motor trade, auto repairs, personal and household goods industries. The Blue Book 2013 reports that this sector added gross value of £ 151,785 million to the UK economy in 2011.
As of 2016, high - street retail spending accounted for about 33 % of consumer spending and 20 % of GDP. Because 75 % of goods bought in the United Kingdom are made overseas, the sector only accounts for 5.7 % of gross value added to the British economy.
The UK grocery market is dominated by four companies: Asda (owned by Wal - Mart Stores), Morrisons, Sainsbury 's and Tesco.
London is a major retail centre and in 2010 had the highest non-food retail sales of any city in the world, with a total spend of around £ 64.2 billion. The UK - based Tesco is the third - largest retailer in the world measured by revenues (after Wal - Mart Stores and Carrefour) and as of 2011 was the leader in the UK market with around a 30 % share.
London is the world capital for foreign exchange trading, with a nearly 41 % share in 2013 of the daily $5.3 trillion global turnover. The highest daily volume, counted in trillions of US dollars, is reached when the New York trading session overlaps with London 's.
The currency of the UK is the pound sterling, represented by the symbol £. The Bank of England is the central bank, responsible for issuing currency. Banks in Scotland and Northern Ireland retain the right to issue their own notes, subject to retaining enough Bank of England notes in reserve to cover the issue. The pound sterling is also used as a reserve currency by other governments and institutions, and is the third - largest after the US dollar and the euro.
The UK chose not to join the euro at the currency 's launch. The government of former Prime Minister Tony Blair had pledged to hold a public referendum to decide on membership should "five economic tests '' be met. Until relatively recently there was debate over whether or not the UK should abolish its currency and adopt the euro. In 2007 the Prime Minister, Gordon Brown, pledged to hold a public referendum based on certain tests he set as Chancellor of the Exchequer. When assessing the tests, Gordon Brown concluded that while the decision was close, the United Kingdom should not yet join the euro. He ruled out membership for the foreseeable future, saying that the decision not to join had been right for the UK and for Europe. In particular, he cited fluctuations in house prices as a barrier to immediate entry. Public opinion polls have shown that a majority of Britons have been opposed to joining the single currency for some considerable time, and this position has hardened further in the last few years. In 2005, more than half (55 %) of the UK were against adopting the currency, while 30 % were in favour. The possibility of joining the euro has become a non-issue since the referendum decision to withdraw from the European Union.
(A table in the source lists the average exchange rate for each year, in USD (US dollar) and EUR (euro) per GBP and inversely in GBP per USD and EUR, using the synthetic euro XEU before 1999. These averages conceal wide intra-year spreads; the coefficient of variation gives an indication of this, and also shows the extent to which the pound tracks the euro or the dollar. The effect of Black Wednesday in late 1992 is visible when comparing the averages for 1992 and 1993.)
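For reference, the coefficient of variation mentioned above is simply the standard deviation of a series divided by its mean. A minimal sketch, using hypothetical monthly USD-per-GBP rates rather than actual data:

```python
# Coefficient of variation (CV) of an exchange-rate series: standard deviation
# divided by the mean. The monthly USD-per-GBP values are hypothetical placeholders.
from statistics import mean, pstdev

usd_per_gbp_monthly = [1.52, 1.55, 1.49, 1.47, 1.51, 1.56,
                       1.58, 1.54, 1.50, 1.48, 1.53, 1.57]

cv = pstdev(usd_per_gbp_monthly) / mean(usd_per_gbp_monthly)
print(f"Coefficient of variation: {cv:.2%}")  # larger values indicate wider intra-year spread
```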
The strength of the UK economy varies from country to country and from region to region. Excluding the effects of North Sea oil and gas (which is classified in official statistics as extra-regio), England has the highest gross value added (GVA) and Wales the lowest of the UK 's constituent countries.
Within England, GVA per capita is highest in London and varies considerably across the nine statistical regions of England.
Two of the richest 10 areas in the European Union are in the United Kingdom. Inner London is number 1 with a GDP per capita of € 65,138, and Berkshire, Buckinghamshire and Oxfordshire is number 7 with a GDP per capita of € 37,379. Edinburgh is also one of the largest financial centres in Europe.
At the other end of the scale, Cornwall has the lowest GVA per head of any county or unitary authority in England, and it has received EU Convergence funding (formerly Objective One funding) since 2000.
The total UK trade (goods and services) deficit widened by £ 3.4 billion to £ 8.7 billion in the three months to January 2018; excluding erratic commodities, the deficit widened by £ 2.6 billion to £ 8.9 billion.
The £ 3.4 billion widening of the total trade (goods and services) deficit was due to a £ 3.2 billion widening of the trade in goods deficit and a £ 0.2 billion narrowing of the trade in services surplus.
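The decomposition above is an accounting identity: the total trade balance is the goods balance plus the services balance, so changes in the two components sum to the change in the total. A minimal sketch with the figures quoted above (sign convention assumed: deficits negative, surpluses positive):

```python
# The total trade balance is the sum of the goods and services balances
# (deficits negative, surpluses positive), so changes in the components
# add up to the change in the total deficit.
goods_deficit_change = -3.2     # goods deficit widened by GBP 3.2bn
services_surplus_change = -0.2  # services surplus narrowed by GBP 0.2bn

total_change = goods_deficit_change + services_surplus_change
print(f"Change in total trade balance: {total_change:+.1f}bn GBP")  # -3.4, a GBP 3.4bn widening
```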
The widening of the trade in goods deficit was due mainly to a £ 1.3 billion increase in imports (particularly fuels) from non-EU countries, combined with a £ 1.2 billion decrease in exports (including fuels) to non-EU countries, in the three months to January 2018.
Large decreases in fuels export volumes combined with increases in fuels import prices had the largest impact on the widening of the trade in goods deficit in the three months to January 2018.
Between December 2017 and January 2018, the UK total trade (goods and services) deficit widened by £ 0.6 billion, due primarily to an increase in goods imports including aircraft and cars from non-EU countries and fuels (refined oil) from EU countries.
Comparing the three months to January 2018 with the same period in 2017, the UK total trade (goods and services) deficit widened by £ 0.4 billion; due primarily to increases of £ 5.7 billion and £ 3.1 billion in goods (particularly fuels) and services imports respectively.
Revisions to the total trade (goods and services) balance are mainly upward from January 2017 to December 2017, due mostly to upward revisions to services exports combined with downward revisions to goods imports.
In 2013 the UK was the leading country in Europe for inward foreign direct investment (FDI) with $26.51 bn. This gave it a 19.31 % market share in Europe. In contrast, the UK was second in Europe for outward FDI, with $42.59 bn, giving a 17.24 % share of the European market.
In October 2017, the ONS revised the UK 's balance of payments, changing the net international investment position from a surplus of £ 469bn to a deficit of £ 22bn. Deeper analysis of outward investment revealed that much of what was thought to be foreign debt securities owned by British companies were actually loans to British citizens. Inward investment also dropped, from a surplus of £ 120bn in the first half of 2016 to a deficit of £ 25bn in the same period of 2017. The UK had been relying on a surplus of inward investment to make up for its long - term current account deficit.
Since 1985, 103,430 deals with UK participation have been announced. There have been three major waves of increased M&A activity, in 2000, 2007 and 2017. 1999, however, was the year with the highest cumulative value of deals, at around £ 490 billion, about 50 % more than the 2017 peak. The Finance and Energy & Power industries accounted for the largest shares of deal value from 2000 until 2018 (each about 15 %).
Among the top 10 deals involving UK companies, the Vodafone - Mannesmann deal remains the biggest deal in global history.
As a member of the European Union, the UK has negotiated and agreed to numerous EU - wide trade and market policies. According to the 2014 report within the "Balance of EU competences '' review, the majority of the EU trade policies have been beneficial for the UK, despite the proportion of the country 's exports going to the EU falling from 54 percent to 47 percent over the past decade. The total value of exports however has increased in the same period from £ 130 billion (€ 160 billion) to £ 240 billion (€ 275 billion).
In June 2016 the UK voted to leave the EU in a national referendum on its membership of the EU. After the activation of Article 50 of the Lisbon Treaty, the UK is set to leave the EU on Friday 29 March 2019. The future relationship between the UK and EU is under negotiation, although there are still attempts to prevent the UK 's departure from the European Economic Area.
The United Kingdom is a developed country with social welfare infrastructure, thus discussions surrounding poverty tend to use a relatively high minimum income compared to developing countries. According to the OECD, the UK is in the lower half of developed country rankings for poverty rates, doing better than Italy and the US but less well than France, Austria, Hungary, Slovakia and the Nordic countries. Eurostat figures show that the numbers of Britons at risk of poverty has fallen to 15.9 % in 2014, down from 17.1 % in 2010 and 19 % in 2005 (after social transfers were taken into account).
The poverty line in the UK is commonly defined as being 60 % of the median household income. In 2007 -- 2008, this was calculated to be £ 115 per week for single adults with no dependent children; £ 199 per week for couples with no dependent children; £ 195 per week for single adults with two dependent children under 14; and £ 279 per week for couples with two dependent children under 14. In 2007 -- 2008, 13.5 million people, or 22 % of the population, lived below this line. This is a higher level of relative poverty than all but four other EU members. In the same year, 4.0 million children, 31 % of the total, lived in households below the poverty line, after housing costs were taken into account. This is a decrease of 400,000 children since 1998 -- 1999.
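The relative poverty line described above is straightforward to compute: take 60 % of the median household income and count the households falling below it. A minimal sketch, where the 60 % threshold comes from the text and the weekly incomes are hypothetical placeholders rather than survey data:

```python
# Relative poverty line as described above: 60% of median household income.
# The weekly household incomes below are hypothetical, not survey data.
from statistics import median

weekly_household_incomes = [120, 180, 210, 250, 310, 340, 400, 520, 610, 900]

poverty_line = 0.6 * median(weekly_household_incomes)
below_line = [income for income in weekly_household_incomes if income < poverty_line]

print(f"Poverty line: GBP {poverty_line:.2f} per week")
print(f"Households below the line: {len(below_line)} of {len(weekly_household_incomes)}")
```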
|
where did the united states gain independence from | Independence Day (United States) - wikipedia
Independence Day, also referred to as the Fourth of July or July Fourth, is a federal holiday in the United States commemorating the adoption of the Declaration of Independence on July 4, 1776. The Continental Congress declared that the thirteen American colonies regarded themselves as a new nation, the United States of America, and were no longer part of the British Empire. The Congress actually voted to declare independence two days earlier, on July 2.
Independence Day is commonly associated with fireworks, parades, barbecues, carnivals, fairs, picnics, concerts, baseball games, family reunions, and political speeches and ceremonies, in addition to various other public and private events celebrating the history, government, and traditions of the United States. Independence Day is the National Day of the United States.
During the American Revolution, the legal separation of the Thirteen Colonies from Great Britain in 1776 actually occurred on July 2, when the Second Continental Congress voted to approve a resolution of independence that had been proposed in June by Richard Henry Lee of Virginia, declaring the United States independent of Great Britain 's rule. After voting for independence, Congress turned its attention to the Declaration of Independence, a statement explaining this decision, which had been prepared by a Committee of Five, with Thomas Jefferson as its principal author. Congress debated and revised the wording of the Declaration, finally approving it two days later on July 4. A day earlier, John Adams had written to his wife Abigail:
The second day of July, 1776, will be the most memorable epoch in the history of America. I am apt to believe that it will be celebrated by succeeding generations as the great anniversary festival. It ought to be commemorated as the day of deliverance, by solemn acts of devotion to God Almighty. It ought to be solemnized with pomp and parade, with shows, games, sports, guns, bells, bonfires, and illuminations, from one end of this continent to the other, from this time forward forever more.
Adams 's prediction was off by two days. From the outset, Americans celebrated independence on July 4, the date shown on the much - publicized Declaration of Independence, rather than on July 2, the date the resolution of independence was approved in a closed session of Congress.
Historians have long disputed whether members of Congress signed the Declaration of Independence on July 4, even though Thomas Jefferson, John Adams, and Benjamin Franklin all later wrote that they had signed it on that day. Most historians have concluded that the Declaration was signed nearly a month after its adoption, on August 2, 1776, and not on July 4 as is commonly believed.
Coincidentally, both John Adams and Thomas Jefferson, the only signers of the Declaration of Independence later to serve as Presidents of the United States, died on the same day: July 4, 1826, which was the 50th anniversary of the Declaration. Although not a signer of the Declaration of Independence, James Monroe, another Founding Father who was elected as President, also died on July 4, 1831. He was the third President in a row who died on the anniversary of independence. Calvin Coolidge, the 30th President, was born on July 4, 1872; so far he is the only U.S. President to have been born on Independence Day.
Independence Day is a national holiday marked by patriotic displays. Similar to other summer - themed events, Independence Day celebrations often take place outdoors. Independence Day is a federal holiday, so all non-essential federal institutions (such as the postal service and federal courts) are closed on that day. Many politicians make it a point on this day to appear at a public event to praise the nation 's heritage, laws, history, society, and people.
Families often celebrate Independence Day by hosting or attending a picnic or barbecue; many take advantage of the day off and, in some years, a long weekend to gather with relatives or friends. Decorations (e.g., streamers, balloons, and clothing) are generally colored red, white, and blue, the colors of the American flag. Parades are often held in the morning, before family get - togethers, while fireworks displays occur in the evening after dark at such places as parks, fairgrounds, or town squares.
The night before the Fourth was once the focal point of celebrations, marked by raucous gatherings often incorporating bonfires as their centerpiece. In New England, towns competed to build towering pyramids, assembled from barrels and casks. They were lit at nightfall, to usher in the celebration. The highest were in Salem, Massachusetts (on Gallows Hill, the famous site of the execution of 13 women and 6 men for witchcraft in 1692 during the Salem witch trials), where the tradition of celebratory bonfires had persisted, with pyramids composed of as many as forty tiers of barrels. These made the tallest bonfires ever recorded. The custom flourished in the 19th and 20th centuries, and is still practiced in some New England towns.
Independence Day fireworks are often accompanied by patriotic songs such as the national anthem "The Star - Spangled Banner '', "God Bless America '', "America the Beautiful, '' "My Country, ' Tis of Thee, '' "This Land Is Your Land, '' "Stars and Stripes Forever, '' and, regionally, "Yankee Doodle '' in northeastern states and "Dixie '' in southern states. Some of the lyrics recall images of the Revolutionary War or the War of 1812.
Firework shows are held in many states, and many fireworks are sold for personal use or as an alternative to a public show. Safety concerns have led some states to ban fireworks or limit the sizes and types allowed. In addition, local and regional weather conditions may dictate whether the sale or use of fireworks in an area will be allowed. Some local or regional firework sales are limited or prohibited because of dry weather or other specific concerns. On these occasions the public may be prohibited from purchasing or discharging fireworks, but professional displays (such as those at sports events) may still take place, if certain safety precautions have been taken.
A salute of one gun for each state in the United States, called a "salute to the union, '' is fired on Independence Day at noon by any capable military base.
In 2009, New York City had the largest fireworks display in the country, with more than 22 tons of pyrotechnics exploded. It generally holds displays in the East River. Other major displays are in Chicago on Lake Michigan; in San Diego over Mission Bay; in Boston on the Charles River; in St. Louis on the Mississippi River; in San Francisco over the San Francisco Bay; and on the National Mall in Washington, D.C.
During the annual Windsor - Detroit International Freedom Festival, Detroit, Michigan hosts one of the world 's largest fireworks displays, over the Detroit River, to celebrate Independence Day in conjunction with Windsor, Ontario 's celebration of Canada Day.
The first week of July is typically one of the busiest United States travel periods of the year, as many people use what is often a three - day holiday weekend for extended vacation trips.
(Image captions: Miami, Florida lights one of its tallest buildings in the patriotic red, white and blue color scheme on Independence Day, in addition to a fireworks show; New York City 's fireworks display over the East Village, sponsored by Macy 's, is the largest in the country; a patriotic trailer shown in theaters celebrating July 4, 1940; a festively decorated Independence Day cake; lakes are a popular destination for Fourth of July celebrations in the Midwest.)
The Philippines celebrates July 4 as its Republic Day to commemorate that day in 1946 when it ceased to be a U.S. territory and the United States officially recognized Philippine Independence. July 4 was intentionally chosen by the United States because it corresponds to its Independence Day, and this day was observed in the Philippines as Independence Day until 1962. In 1964, the name of the July 4 holiday was changed to Republic Day. In Rwanda, July 4 is an official holiday known as Liberation Day, commemorating the end of the 1994 Rwandan Genocide in which the U.S. government also played a role. Rebild National Park in Denmark is said to hold the largest July 4 celebrations outside of the United States.
|